Fusion 17 Manual
Reference Manual
Fusion
Welcome
Welcome to Fusion for Mac, Linux and Windows!
Fusion is the world’s most advanced compositing software for visual effects artists,
broadcast and motion graphic designers and 3D animators. With over 30 years of
development, Fusion has been used on over 1000 major Hollywood blockbuster feature
films! Fusion features an easy and powerful node based interface so you can construct
complex effects simply by connecting various types of processing together. That’s super
easy and extremely fast! You get a massive range of features and effects included,
so you can create exciting broadcast graphics, television commercials, dramatic title
sequences and even major feature film visual effects!
Fusion Studio customers can also use DaVinci Resolve Studio to get a complete set of
editing, advanced color correction and professional Fairlight audio post production tools.
Clips in the DaVinci Resolve timeline can be shared with Fusion so you can collaborate
on your complex compositions within your VFX team and then render the result back
directly into the DaVinci Resolve timeline. We hope you enjoy reading this manual and
we can’t wait to see the work you produce with Fusion.
Grant Petty
CEO Blackmagic Design
Fusion 17 Welcome 2
Contents
Menu Descriptions 6
PART 1
Fusion Fundamentals
1 Introduction to Compositing in Fusion 8
2 Exploring the Fusion Interface 13
3 Getting Clips into Fusion 59
4 Rendering Using Saver Nodes 80
5 Working in the Node Editor 103
6 Node Groups, Macros, and Fusion Templates 147
7 Using Viewers 167
8 Editing Parameters in the Inspector 205
9 Animating in Fusion’s Keyframes Editor 228
10 Animating in Fusion’s Spline Editor 245
11 Animating with Motion Paths 274
12 Using Modifiers, Expressions, and Custom Controls 291
13 Bins 303
14 Fusion Connect 322
15 Preferences 335
PART 2
2D Compositing
16 Controlling Image Processing and Resolution 381
17 Managing Color for Visual Effects 390
18 Understanding Image Channels 402
19 Compositing Layers in Fusion 436
20 Rotoscoping with Masks 458
21 Paint 480
22 Using the Tracker Node 504
23 Planar Tracking 536
24 Using Open FX, Resolve FX, and Fuse Plug-Ins 542
PART 3
3D Compositing
25 3D Compositing Basics 546
26 3D Camera Tracking 598
27 Particle Systems 615
PART 4
Advanced Compositing Techniques
28 Optical Flow and Stereoscopic Nodes 624
PART 5
Fusion Page Effects
29 3D Nodes 638
30 3D Light Nodes 748
31 3D Material Nodes 761
32 3D Texture Nodes 788
33 Blur Nodes 813
34 Color Nodes 837
35 Composite Nodes 892
36 Deep Pixel Nodes 906
37 Effect Nodes 922
38 Film Nodes 953
39 Filter Nodes 970
40 Flow Nodes 986
41 Flow Organizational Nodes 989
42 Fuses 994
43 Generator Nodes 996
44 I/O Nodes 1033
45 LUT Nodes 1056
46 Mask Nodes 1065
47 Matte Nodes 1102
48 Metadata Nodes 1155
49 Miscellaneous Nodes 1162
50 Optical Flow 1197
51 Paint Node 1210
52 Particle Nodes 1219
53 Position Nodes 1276
54 Resolve Connect 1294
55 Shape Nodes 1300
56 Stereo Nodes 1331
57 Tracker Nodes 1361
58 Transform Nodes 1406
59 VR Nodes 1431
60 Warp Nodes 1441
61 Modifiers 1469
PART 6
Other Information
62 Regulatory Notices, Safety Information and Warranty 1506
Navigation Guide
Menu Descriptions
For ease of use navigating this manual, each menu item is listed here, and by clicking
on the name of the menu function, you will be taken to the appropriate part of the
manual that describes that function.
Fusion
Show Toolbar – Page 33
Toggles the Fusion toolbar on or off.
Reset Composition
Resets a Fusion composition to its initial state.
Import
Specific file format import for Fusion.
> Alembic Scene – Page 639
> FBX Scene – Page 586
> PSD – Page 78
> Shapes – Page 74
> SVG – Page 74
> Tracks – Page 73
Fusion Fundamentals
Chapter 1
Introduction to Compositing in Fusion
This introduction is designed explicitly to help users who are new to Fusion get
started learning this exceptionally powerful environment for creating and editing
visual effects and motion graphics right from within DaVinci Resolve or using the
stand-alone Fusion Studio application.
This documentation covers both the Fusion Page inside DaVinci Resolve and the
stand-alone Fusion Studio application.
Contents
What Is Fusion? 9
The Fusion Page within DaVinci Resolve 9
The Fusion Studio Stand-Alone Application 10
What Kinds of Effects Does Fusion Offer? 11
How Hard Will This Be to Learn? 12
The Fusion page in DaVinci Resolve, showing viewers, the Node Editor, and the Inspector
3D Compositing
Fusion has powerful 3D nodes that include extruded 3D text, simple geometry, and the ability to
import 3D models. Once you’ve assembled a 3D scene, you can add cameras, lighting, and material
shaders, and then render the result with depth-of-field effects and auxiliary channels to integrate with
more conventional layers of 2D compositing, for a sophisticated blending of 3D and 2D operations in
the very same node tree.
Particles
Fusion also has an extensive set of nodes for creating particle systems that have been used in major
motion pictures, with particle generators capable of spawning other generators, 3D particle
generation, complex simulation behaviors that interact with 3D objects, and endless options for
experimentation and customization. You can create particle system simulations for VFX or more
abstract particle effects for motion graphics.
Chapter 2
Exploring the Fusion Interface
This chapter provides an orientation on the Fusion user interface, providing a
quick tour of what tools are available, where to find things, and how the different
panels fit together to help you build and refine compositions in this powerful
node‑based environment.
Contents
The Fusion User Interface 15
The Work Area 16
Interface Toolbar 16
Choosing Which Panel Has Focus 17
Viewers 17
Zooming and Panning into Viewers 19
Loading Nodes Into Viewers 19
Clearing Viewers 20
Viewer Controls 20
Time Ruler and Transport Controls 22
Time Ruler Controls in the Fusion Page 22
Time Ruler Controls in Fusion Studio 23
The Playhead 23
Zoom and Scroll Bar 24
Transport Controls in the Fusion Page 24
Audio Monitoring 26
Transport Controls in Fusion Studio 27
Changing the Time Display Format 31
Keyframe Display in the Time Ruler 31
The Fusion RAM Cache for Playback 31
However, Fusion doesn’t have to be that complicated, and in truth, you can work very nicely with only
the viewer, Node Editor, and Inspector open for a simplified experience.
The work area showing the Node Editor, the Spline Editor, and the Keyframes Editor
Interface Toolbar
At the very top of Fusion is a toolbar with buttons that let you show and hide different parts of the user
interface (UI). Buttons with labels identify which parts of the UI can be shown or hidden. In
DaVinci Resolve’s Fusion page, if you right-click anywhere within this toolbar, you have the option of
displaying this bar with or without text labels.
Viewers
The viewer area displays either one or two viewers at the top of the Fusion page, and this is
determined via the Viewer button at the far right of the Viewer title bar. Each viewer can show a single
node’s output from anywhere in the node tree. You assign which node is displayed in which viewer.
This makes it easy to load separate nodes into each viewer for comparison. For example, you can load
a Keyer node into the left viewer and the final composite into the right viewer, so you can see the
image you’re adjusting and the final result at the same time.
Ordinarily, each viewer shows 2D nodes from your composition as a single image. However, when
you’re viewing a 3D node, you have the option to set that viewer to one of several 3D views. A
perspective view gives you a repositionable stage on which to arrange the elements of the world
you’re creating. Alternatively, a quad view lets you see your composition from four angles, making it
easier to arrange and edit objects and layers within the XYZ axes of the 3D space in which
you’re working.
TIP: In Perspective view, you can hold down the Option key and drag in the viewer to pivot
the view around the center of the world. All other methods of navigating viewers
work the same.
The viewers have a variety of capabilities you can use to compare and evaluate images. This section
provides a short overview of viewer capabilities to get you started.
Clearing Viewers
To clear an image from a viewer, click in the viewer to make it active; a thin red highlight is displayed at
the top of the active viewer. With the viewer active, press the Tilde (~) key. This key is usually found to
the left of the 1 key on U.S. keyboards. The fastest way to remove all the images from all the viewers is
to make sure none of the viewers is the active panel, and then press the Tilde key.
Viewer Controls
A series of buttons and pop-up menus in the viewer’s title bar provides several quick ways of
customizing the viewer display.
– Zoom menu: Lets you zoom in on the image in the viewer to get a closer look, or zoom out to get
more room around the edges of the frame for rotoscoping or positioning different layers. Choose
Fit to automatically fit the overall image to the available dimensions of the viewer.
– Split Wipe button and A/B Buffer menu: You can actually load two nodes into a single viewer
using that viewer’s A/B buffers by choosing a buffer from the menu and loading a node into the
viewer. Turning on the Split Wipe button (press Forward Slash) shows a split wipe between the
two buffers, which can be dragged left or right via the handle of the onscreen control, or rotated
by dragging anywhere on the dividing line on the onscreen control. Alternatively, you can switch
between each full-screen buffer to compare them (or to dismiss a split-screen) by pressing Comma
(A buffer) and Period (B buffer).
– SubView type: (These aren’t available in 3D viewers.) Clicking the icon itself enables or disables
the current “SubView” option you’ve selected, while using the menu lets you choose which
SubView is enabled. This menu serves one of two purposes. When displaying ordinary 2D nodes,
it lets you open up SubViews, which are viewer “accessories” within a little pane that can be used
to evaluate images in different ways. These include an Image Navigator (for navigating when
zoomed far into an image), Magnifier, 2D viewer (a mini-view of the image), 3D Histogram scope,
Color Inspector, Histogram scope, Image Info tooltip, Metadata tooltip, Vectorscope, or Waveform
scope. The Swap option (Shift-V) lets you switch what’s displayed in the viewer with what’s being
displayed in the Accessory pane. When displaying 3D nodes, this button lets you turn on the
quad-paned 3D viewer.
– Node name: The name of the currently viewed node is displayed at the center of the
viewer’s title bar.
By default, when using DaVinci Resolve, the viewers in the Fusion page show you the image prior to
any grading done in the Color page, since the Fusion page comes before the Color page in the
DaVinci Resolve image processing pipeline. When you’re working on clips that have been converted
to linear color space for compositing, it is desirable to composite and make adjustments to the image
relative to a normalized version of the image that appears close to what the final will be. Enabling the
LUT display lets you do this as a preview, without permanently applying color adjustments to
the image.
– Option menu: This menu contains various settings that pertain to the viewers in Fusion.
– Snap to Pixel: When drawing or adjusting a polyline mask or spline, the control points will snap
to pixel locations.
– Show Controls: Toggles whatever onscreen controls are visible for the currently selected node.
– Region: Provides all the settings for the Region of Interest in the viewer.
– Smooth Resize: This option uses a smoother bilinear interpolated resizing method when
zooming into an image in the viewer; otherwise, scaling uses the nearest neighbor method and
shows noticeable aliasing artifacts. The nearest neighbor method is more useful when you
zoom in to examine individual pixels, since no interpolation is applied.
– Show Square Pixels: Overrides the auto aspect correction when using formats with
non-square pixels.
– Normalized Color Range: Allows for the visualization of brightness values outside of the normal
viewing range, particularly when working with floating-point images or auxiliary channels.
– Checker Underlay: Toggles a checkerboard underlay that makes it easy to see areas of
transparency.
– Gain/Gamma: Exposes a simple pair of Gain and Gamma sliders that let you adjust the viewer’s
brightness.
– 360 View: Used to properly display spherical imagery in a variety of formats, selectable from
this submenu.
– Stereo: Used to properly display stereoscopic imagery in a variety of formats, selectable from
this submenu.
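The Gain/Gamma sliders above apply a display-only brightness adjustment. As a rough sketch of how such a preview adjustment typically works (this is one common formulation, not necessarily Fusion's exact internal math):

```python
def viewer_preview(value, gain=1.0, gamma=1.0):
    """Apply a display-only gain, then a gamma curve, to a normalized
    pixel value. One common formulation of a viewer brightness preview;
    Fusion's exact internal math may differ."""
    v = max(value * gain, 0.0)   # gain scales linearly; clamp negatives
    return v ** (1.0 / gamma)    # gamma > 1 lifts shadows and midtones

print(viewer_preview(0.25, gain=2.0))   # 0.5
print(viewer_preview(0.25, gamma=2.0))  # 0.5
```

Because the adjustment lives in the viewer, the rendered image itself is unaffected; only what you see on screen changes.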
The transport controls under the Time Ruler include playback controls, audio monitoring, as well as
number fields for the composition duration and playback range. Additional controls enable motion blur
and proxy settings.
The Time Ruler displaying ranges for a clip in the Timeline via yellow marks (the playhead is red)
If you’ve created a Fusion clip or a compound clip, then the “working range” reflects the entire
duration of that clip.
The Time Ruler displaying ranges for a Fusion clip in the Timeline
Render Range
The render range determines the range of frames that are visible in the Fusion page and that are used
for interactive playback, disk caches, and previews. Frames outside the default render range are not
visible in the Fusion page and are not rendered or played.
You can modify the duration of the render range for preview and playback only. Making the range
shorter or longer does not trim the clip in the Edit or Cut page Timelines.
You can change the render range in the Time Ruler by doing one of the following:
– Hold down the Command key and drag a new range within the Time Ruler.
– Drag either the start or end yellow line to modify the start or end of the range.
– Right-click within the Time Ruler and choose Set Render Range from the contextual menu.
– Enter new ranges in the Range In and Out fields to the left of the transport controls.
– Drag a node from the Node Editor to the Time Ruler to set the range to the duration of that node.
You can change the global range by doing one of the following:
– To change the global range for all new compositions, choose Fusion Studio > Preferences on
macOS or File > Preferences on Windows or Linux. In the Global and Default Settings panel, enter
a new range in the Global range fields.
– To change the Global range for the current composition, enter a new range in the Global Start and
End fields to the left of the transport controls.
– Dragging a node from the Node Editor to the Time Ruler automatically sets the Global and Render
Range to the extent of the node.
Render Range
The render range determines the range of frames used for interactive playback, disk caches, and
previews. Frames outside the render range are not rendered or played, although you can still drag the
playhead to these frames to see the unused frames.
To preview or render a specific range of a composition, you can modify the render range in a
variety of ways.
You can set the render range in the Time Ruler by doing one of the following:
– Hold down the Command key and drag a new range within the Time Ruler.
– Right-click within the Time Ruler and choose Set Render Range from the contextual menu to set
the Render Range based on the selected Node’s duration.
– Enter new ranges in the Range In and Out fields to the left of the transport controls.
– Drag a node from the Node Editor to the Time Ruler to set the range to the duration of that node.
The Playhead
A red playhead within the Time Ruler indicates the currently viewed frame. Clicking anywhere within
the Time Ruler jumps the playhead to that frame, and dragging within the Time Ruler drags the
playhead within the available duration of that clip or composition.
TIP: Holding the middle mouse button and dragging in the Time Ruler lets you scroll the
visible range.
Controlling Playback
There are six transport controls underneath the Time Ruler in the Fusion page. These buttons include
Composition First Frame, Play Reverse, Stop, Play Forward, Composition Last Frame, and Loop.
Navigation Shortcuts
Many standard transport control keyboard shortcuts you may be familiar with work in Fusion, but some
are specific to Fusion’s particular needs.
To move the playhead in the Time Ruler using the keyboard, do one of the following:
– Spacebar: Toggles forward playback on and off.
– JKL: Basic JKL playback is supported, including J to play backward, K to stop,
and L to play forward.
– Back Arrow: Moves 1 frame backward.
– Forward Arrow: Moves 1 frame forward.
– Shift-Back Arrow: Moves to the clip’s Global Start frame.
– Shift-Forward Arrow: Moves to the clip’s Global End frame.
– Command-Back Arrow: Jumps to the Render Range In point.
– Command-Forward Arrow: Jumps to the Render Range Out point.
Looping Options
The Loop button can be toggled to enable or disable looping during playback. You can right-click this
button to choose the looping method that’s used:
– Playback Loop: The playhead plays to the end of the Time Ruler and starts from the
beginning again.
– Ping-pong Loop: When the playhead reaches the end of the Time Ruler, playback reverses
until the playhead reaches the beginning of the Time Ruler, and then continues to ping-pong
back and forth.
TIP: If the Mute button is enabled on any Timeline tracks, audio from those tracks will not be
heard in Fusion.
For Fusion Studio, audio can be loaded using the Loader node’s Audio tab, and is heard during
playback when brought in this way. The audio functionality is included in Fusion Studio for
scratch track purposes (aligning effects to audio and clip timing). Final renders should almost
always be performed without audio.
High Quality
As you build a composition, often the quality of the displayed image is less important than the
speed at which you can work. The High Quality setting gives you the option to either display
images with faster interactivity or at final render quality. When you turn off High Quality, complex
and time-consuming operations such as area sampling, anti-aliasing, and interpolation are skipped
to render the image to the viewer more quickly. Enabling High Quality forces a full-quality render
to the viewer that’s identical to what is output during final delivery.
Motion Blur
The Motion Blur button is a global setting. Turning off Motion Blur temporarily disables motion blur
throughout the composition, regardless of any individual nodes for which it’s enabled. This can
significantly speed up renders to the viewer. Individual nodes must first have motion blur enabled
before this button has any effect.
Proxy
The Proxy setting is a draft mode used to speed processing while you’re building your composite.
Turning on Proxy reduces the resolution of the images that are rendered to the viewer, speeding
render times by causing only one out of every x pixels to be processed, rather than processing
every pixel. The value of x is decided by adjusting a slider in the General panel of the Fusion
Settings, found in the Fusion menu.
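The one-out-of-every-x-pixels idea can be sketched as simple stride sampling. This is purely illustrative and is not how Fusion’s renderer is implemented:

```python
def proxy_sample(image, x):
    """Keep only every x-th pixel in each dimension, producing a
    lower-resolution draft image. `image` is a list of rows of pixels.
    Illustrative sketch of proxy subsampling, not Fusion's renderer."""
    return [row[::x] for row in image[::x]]

# An 8x8 image with a proxy ratio of 2 becomes 4x4: a quarter of the
# pixels are processed, so draft renders complete roughly 4x faster.
full = [[(r, c) for c in range(8)] for r in range(8)]
draft = proxy_sample(full, 2)
```

Higher proxy ratios trade more image detail for faster feedback while you work.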
Auto Proxy
The Auto Proxy setting is a draft mode used to speed processing while you’re building your
composite. Turning on Auto Proxy reduces the resolution of the image while you click and drag to
adjust a parameter. Once you release that control, the image snaps back to its original resolution.
This lets you adjust processor-intensive operations more smoothly, without the wait for every
frame to render at full quality causing jerkiness. You can set the auto proxy ratio by adjusting a
slider in the General panel of the Fusion Settings found in the Fusion menu.
Selective Updates
When working in Fusion, only the tools needed to display the images in the viewer are updated. The
Selective Update options select the mode used during previews and final renders.
The options are available in the Fusion Preferences General panel. The three options are:
– Update All (All): Forces all the nodes in the current node tree to render. This is primarily used
when you want to update all the thumbnails displayed in the Node Editor.
– Selective (Some): Causes only nodes that directly contribute to the current image to be rendered.
So named because only selective nodes are rendered. This is the default setting.
– No Update (None): Prevents rendering altogether, which can be handy for making many changes
to a slow-to-render composition.
Controlling Playback
There are eight transport controls underneath the Time Ruler in Fusion Studio. These buttons include
Composition First Frame, Step Backward, Play Reverse, Stop, Play Forward, Step Forward,
Composition Last Frame, and Loop.
To move the playhead in the Time Ruler using the keyboard, do one of the following:
– Spacebar: Toggles forward playback on and off.
– JKL: Basic JKL playback is supported, including J to play backward, K to stop,
and L to play forward.
– Back Arrow: Moves 1 frame backward.
– Forward Arrow: Moves 1 frame forward.
– Shift-Back Arrow: Moves to the clip’s Global Start frame.
– Shift-Forward Arrow: Moves to the clip’s Global End frame.
– Command-Back Arrow: Jumps to the Render Range In point.
– Command-Forward Arrow: Jumps to the Render Range Out point.
Range Fields
The four time fields on the left side of the transport controls are used to quickly modify the global
range and render range in Fusion Studio.
Audio
The Audio button is a toggle that mutes or enables any audio associated with the clip. Additionally,
right-clicking on this button displays a drop-down menu that can be used to select a WAV file, which
can be played along with the composition, and to assign an offset to the audio playback.
Render
Clicking the Render button in the transport controls displays the composition’s Render Settings dialog.
This dialog is used to configure the render options and initiate rendering of any Saver nodes in the
composition. Shift-clicking on the button skips the dialog, using default render values (full resolution,
high quality, motion blur enabled).
NOTE: Many fields in Fusion can evaluate mathematical expressions that you type into them.
For example, typing 2 + 4 into most fields results in the value 6.0 being entered. Because
Feet + Frames uses the + symbol as a separator symbol rather than a mathematical symbol,
the Current Time field will not correctly evaluate mathematical expressions that use the +
symbol, even when the display format is set to Frames mode.
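The conflict described in the note can be illustrated with a toy parser. This is hypothetical code, not Fusion’s actual field parser, and the 16 frames-per-foot figure assumes 35mm 4-perf film:

```python
def parse_current_time(text, frames_per_foot=16):
    """Toy sketch of why '+' is reserved as a separator in a time field
    that must support Feet+Frames display. Hypothetical; not Fusion's
    actual parser. 16 frames/foot assumes 35mm 4-perf film."""
    if "+" in text:
        # '+' is always treated as the Feet+Frames separator, so it is
        # unavailable as an addition operator -- even in Frames mode.
        feet, frames = text.split("+")
        return int(feet) * frames_per_foot + int(frames)
    # Other operators can still be evaluated as arithmetic.
    return float(eval(text, {"__builtins__": {}}))

print(parse_current_time("12+05"))    # 197 (12 feet 5 frames)
print(parse_current_time("10 * 2"))   # 20.0
print(parse_current_time("2 + 4"))    # 36, NOT 6: parsed as 2 ft 4 fr
```

The last line shows the ambiguity: once “+” is a separator, “2 + 4” reads as a feet-and-frames value rather than an addition.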
HiQ
As you build a composition, often the quality of the displayed image is less important than the speed
at which you can work. The High Quality setting gives you the option to either display images with
faster interactivity or at final render quality. When you turn off High Quality, complex and time-
consuming operations such as area sampling, anti-aliasing, and interpolation are skipped to render the
image to the viewer more quickly. Enabling High Quality forces a full-quality render to the viewer that’s
identical to what will be output during final delivery.
MB
The Motion Blur button is a global setting. Turning off Motion Blur temporarily disables motion blur
throughout the composition, regardless of any individual nodes for which it’s enabled. This can
significantly speed up renders to the viewer. Individual nodes must first have motion blur enabled
before this button has any effect.
Prx
A draft mode to speed processing while you’re building your composite. Turning on Proxy reduces the
resolution of the images that are rendered to the viewer, speeding render times by causing only one
out of every x pixels to be processed, rather than processing every pixel. The value of x is decided by
adjusting a slider in the General panel of the Fusion Preferences, found under the Fusion menu on
macOS or the File menu on Windows and Linux.
Aprx
A draft mode to speed processing while you’re building your composite. Turning on Auto Proxy
reduces the resolution of the image while you click and drag to adjust a parameter. Once you release
that control, the image snaps back to its original resolution. This lets you adjust processor-intensive
operations more smoothly, without the wait for every frame to render at full quality causing jerkiness.
You can set the auto proxy ratio by adjusting a slider in the General panel of the Fusion Preferences,
found under the Fusion menu on macOS or the File menu on Windows and Linux.
Selective Updates
The last of the five buttons on the right of the transport controls is a three-way toggle that determines
when nodes update images in the viewer. By default, when working in Fusion, any node needed to
display the image in the viewer is updated. The Selective Update button can change this behavior
during previews and final renders.
The three options are:
– Update All (All): Forces all the nodes in the current node tree to render. This is primarily used
when you want to update all the thumbnails displayed in the Node Editor.
– Selective (Some): Causes only nodes that directly contribute to the current image to be rendered.
So named because only selective nodes are rendered. This is the default setting.
– No Update (None): Prevents rendering altogether, which can be handy for making a lot of changes
to a slow-to-render composition.
The options are also available in the Fusion Preferences General panel.
When the size of the cache reaches the Fusion Caching/Memory Limits setting found in the Memory
panel of the Preferences, then lower-priority cache frames are automatically discarded to make room
for new caching. You can keep track of the RAM cache usage via a percentage indicator on the far
right of the Status bar at the bottom of the Fusion window.
The green lines indicate frames that have been cached for playback.
There’s one exception to this, however. When you cache frames at the High Quality setting, and you
then turn off High Quality, the green frames won’t turn red. Instead, the High Quality cached frames
are used even though the HiQ setting has been disabled.
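The limit-and-discard behavior described above can be sketched as a minimal least-recently-used cache. This is purely illustrative; Fusion’s actual eviction priorities are not documented in detail here:

```python
from collections import OrderedDict

class FrameCache:
    """Minimal memory-limited frame cache. When storing a new frame would
    exceed the limit, the least recently used frames are discarded first.
    Illustrative sketch only; not Fusion's actual cache implementation."""

    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.used = 0
        self.frames = OrderedDict()   # frame number -> (data, size)

    def store(self, frame, data, size):
        if frame in self.frames:
            self.used -= self.frames.pop(frame)[1]
        # Discard lower-priority (least recently used) frames to make room.
        while self.frames and self.used + size > self.limit:
            _, (_, old_size) = self.frames.popitem(last=False)
            self.used -= old_size
        self.frames[frame] = (data, size)
        self.used += size

    def fetch(self, frame):
        if frame not in self.frames:
            return None                 # not cached; would need a re-render
        self.frames.move_to_end(frame)  # mark as most recently used
        return self.frames[frame][0]
```

For example, with a 200-byte limit, caching three 100-byte frames evicts whichever of the first two frames was used least recently.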
The toolbar has buttons for adding commonly used nodes to the Node Editor.
The default toolbar is divided into sections that group commonly used nodes together. As you hover
the pointer over any button, a tooltip shows you that node’s name.
– Loader/Saver nodes (Fusion Studio Only): The Loader node is the primary node used to
select and load clips from the hard drive. The Saver node is used to write or render your
composition to disk.
– Generator/Title/Paint nodes: The Background and FastNoise generators are commonly used to
create all kinds of effects, and the Title generator is obviously a ubiquitous tool, as is Paint.
– Color/Blur nodes: ColorCorrector, ColorCurves, HueCurves, and BrightnessContrast are the four
most commonly used color adjustment nodes, while the Blur node is ubiquitous.
– Compositing/Transform nodes: The Merge node is the primary node used to composite one
image against another. ChannelBooleans and MatteControl are both essential for reassigning
channels from one node to another. Resize permanently alters the resolution of the image, while
Transform applies pan/tilt/rotate/zoom effects in a resolution-independent fashion that
preserves the original resolution of the source image.
– Mask nodes: Rectangle, Ellipse, Polygon, and BSpline mask nodes let you create shapes to use
for rotoscoping, creating garbage masks, or other uses.
– Particle system nodes: Three particle nodes let you create complete particle systems when you
click them from left to right. pEmitter emits particles in 3D space, while pMerge lets you merge
multiple emitters and particle effects to create more complex systems. pRender renders a 2D
result that can be composited against other 2D images.
– 3D nodes: Seven 3D nodes let you build sophisticated 3D scenes. These nodes auto attach to
one another to create a quick 3D template when you click from left to right. ImagePlane3D lets you
connect 2D stills and movies for compositing into 3D scenes. Shape3D lets you create geometric
primitives of different kinds. Text3D lets you build 3D text objects. Merge3D lets you composite
multiple 3D image planes, primitive shapes, and 3D text together to create complex scenes, while
SpotLight lets you light the scenes in different ways, and Camera3D lets you frame the scene in
whatever ways you like. Renderer3D renders the final scene and outputs 2D images and auxiliary
channels that can be used to composite 3D output against other 2D layers.
When you’re first learning to use Fusion, these nodes are really all you need to build most common
composites. Once you’ve become a more advanced user, you’ll still find that these are truly the most
common operations you’ll use.
You can then composite images together by connecting the output from multiple nodes to certain
nodes such as the Merge node that combines multiple inputs into a single output.
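The way a Merge node combines two inputs can be sketched with the standard premultiplied “over” operation. This is a sketch of the default compositing math only; Fusion’s Merge node offers many more apply modes and options:

```python
def merge_over(fg_rgb, fg_a, bg_rgb, bg_a):
    """Composite a premultiplied foreground over a background using the
    standard 'over' operation: out = fg + bg * (1 - fg_alpha).
    Sketch of a Merge node's default math, not its full feature set."""
    rgb = tuple(f + b * (1.0 - fg_a) for f, b in zip(fg_rgb, bg_rgb))
    alpha = fg_a + bg_a * (1.0 - fg_a)
    return rgb, alpha

# A half-transparent red (premultiplied) over an opaque blue background
rgb, a = merge_over((0.5, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0), 1.0)
print(rgb, a)   # (0.5, 0.0, 0.5) 1.0
```

Each additional input is merged the same way, which is why chains of Merge nodes build up a composite layer by layer.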
By default, new nodes are added from left to right in the Node Editor, but they can also flow from top
to bottom, right to left, bottom to top, or in all directions simultaneously. Connections automatically
reorient themselves along all four sides of each node to maintain the cleanest possible presentation as
you rearrange other connected nodes.
Nodes can be oriented in any direction; the input arrows let you follow the flow of image data.
There are other standard methods of panning and zooming around the Node Editor.
When using the vertical layouts, enabling the Flow > Vertical Build Direction option in the Fusion
settings causes all new node trees to build vertically, leaving maximum room for Fusion’s
animation tools.
You can then save alternative layouts based on these two vertical presets using the Workspace >
Layout Presets submenu.
When you want to return to the default horizontal Node Editor layout, just choose Workspace > Layout
Presets > Fusion Presets > Default.
These Layout options are not available in Fusion Studio; however, you can use the Floating Frame to
position the Node Editor wherever you like.
Keeping Organized
As you work, it’s important to keep the node trees that you create tidy to facilitate a clear
understanding of what’s happening. Fortunately, the Fusion Node Editor provides a variety of methods
and options to help you with this, found within the Options and Arrange Tools submenus of the Node
Editor contextual menu.
Status Bar
The Status bar in the lower-left corner of the Fusion window shows you a variety of up-to-date
information about things you’re selecting and what’s happening in Fusion. For example, hovering the
pointer over a node displays information about that node in the Status bar. Additionally, the currently
achieved frame rate appears whenever you initiate playback, and the percentage of the RAM cache
that’s used appears at all times. Other information, updates, and warnings appear in this area
as you work.
Occasionally the Status bar will display a badge to let you know there’s a message in the console you
might be interested in. The message could be a log, script message, or error.
Effects Library
The Effects Library in Fusion shows all the nodes and effects available in Fusion, including third-party
OFX plug-ins, if installed. If you are using DaVinci Resolve, ResolveFX also appear in the OFX
category. While the toolbar shows many of the most common nodes you’ll use in any composite, the
Effects Library contains every single tool available in Fusion, organized by category, with each node
ready to be quickly added to the Node Editor. Suffice it to say that there are many, many more nodes
available in the Effects Library than on the toolbar, spanning a wide range of uses.
The hierarchical category browser of the Effects Library is divided into several sections depending on
whether you are using Fusion Studio or the Fusion page within DaVinci Resolve. The Tools section is
the most often used since it contains every node that represents an elemental image-processing
operation in Fusion. The OpenFX section contains third-party plug-ins, and if you are using the Fusion
page, it also contains ResolveFX, which are included with DaVinci Resolve. A third section, only visible
The Effects Library’s list can be made full height or half height using a button at the far left of
the UI toolbar.
The Inspector
The Inspector is a panel on the right side of the Fusion window that you use to display and manipulate
the parameters of one or more selected nodes. When a node is selected in the Node Editor, its
parameters and settings appear in the Inspector.
Other nodes display node-specific items here. For example, Paint nodes show each brush stroke as an
individual set of controls in the Modifiers panel, available for further editing or animating.
– Set Color: A pop-up menu that lets you assign one of 16 colors to a node, overriding a
node’s own color.
– Versions: Clicking Versions reveals another toolbar with six buttons. Each button can hold an
individual set of adjustments for that node that you can use to store multiple versions of an effect.
– Pin: The Inspector is also capable of simultaneously displaying all parameters for multiple nodes
you’ve selected in the Node Editor. Furthermore, a Pin button in the title bar of each node’s
parameters lets you “pin” that node’s parameters into the Inspector so that they remain there even
when that node is deselected, which is valuable for key nodes that you need to adjust even while
inspecting other nodes of your composition.
– Lock: Locks that node so that no changes can be made to it.
– Reset: Resets all parameters within that node.
Parameter Tabs
Many nodes expose multiple tabs’ worth of controls in the Inspector, seen as icons at the top of the
parameter section for each node. Click any tab to expose that set of controls.
Keyframes Editor
The Keyframes Editor is used to adjust the timing of clips, effects, and keyframes.
A timeline ruler provides a time reference, as well as a place in which you can scrub the playhead.
At the left, a track header contains the name of each layer, as well as controls governing that layer.
– A lock button lets you prevent a particular layer from being changed.
– Nodes that have been keyframed have a disclosure control, which when opened displays a
keyframe track for each animated parameter.
In the middle, the actual editing area displays all layers and keyframe tracks available in the current
composition.
At the bottom-left, Time Stretch and Spreadsheet mode controls provide additional ways to manipulate
keyframes.
At the bottom right, the Time/TOffset/TScale drop-down menu and value fields let you numerically
alter the position of selected keyframes either absolutely, relatively, or based on their distance from
the playhead.
To change the position of a keyframe using the toolbar, do one of the following:
– Select a keyframe, and then enter a new frame number in the Time Edit box.
– Choose T Offset from the Time Editor pop-up, select one or more keyframes, and enter a
frame offset.
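As a sketch of the arithmetic behind these modes, the snippet below models how Time, T Offset, and T Scale reposition selected keyframes. The function and argument names are illustrative only; they are not part of Fusion’s scripting API.

```python
def reposition(frames, mode, value, playhead=0):
    """Model of the three numeric repositioning modes described above.

    'time' sets an absolute frame, 'offset' shifts keyframes relatively,
    and 'scale' scales each keyframe's distance from the playhead.
    """
    if mode == "time":
        return [value for _ in frames]          # absolute position
    if mode == "offset":
        return [f + value for f in frames]      # relative shift
    if mode == "scale":
        return [playhead + (f - playhead) * value for f in frames]
    raise ValueError(f"unknown mode: {mode}")

reposition([10, 20, 30], "offset", 5)              # → [15, 25, 35]
reposition([10, 20, 30], "scale", 2, playhead=10)  # → [10, 30, 50]
```

In practice, Time is most useful on a single keyframe, while T Offset and T Scale operate naturally on a whole selection.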
For example, if you’re animating a motion path, then the “Key Frame” row shows the frame each
keyframe is positioned at, and the “Path1Displacement” row shows the position along the path at each
keyframe. If you change the Key Frame value of any keyframe, you’ll move that keyframe to a new
frame of the Timeline.
Spline Editor
The Spline Editor provides a more detailed environment for editing the timing, value, and interpolation
of keyframes. Using control points at each keyframe connected by splines (also called curves), you can
adjust how animated values change over time. The Spline Editor has four main areas: the Zoom and
Framing controls at the top, the Parameter list at the left, the Graph Editor in the middle, and the
toolbar at the bottom.
A Timeline ruler provides a time reference, as well as a place in which you can scrub the playhead.
The Parameter list at the left is where you decide which splines are visible in the Graph view. By
default, the Parameter list shows every parameter of every node in a hierarchical list. Checkboxes
beside each name are used to show or hide the curves for different keyframed parameters. Color
controls let you customize each spline’s tint to make splines easier to see in a crowded situation.
The Graph view that takes up most of this panel shows the animation spline along two axes. The
horizontal axis represents time, and the vertical axis represents the spline’s value. Selected control
points show their values in the edit fields at the bottom of the graph.
Lastly, the toolbar at the bottom of the Spline Editor provides controls to set control point interpolation,
spline looping, or choose Spline editing tools for different purposes.
– Smooth: Creates automatically adjusted Bézier curves for smoothly interpolating animation.
– Flat: Creates linear interpolation between control points.
– Invert: Inverts the vertical position of non-animated LUT splines. This does not operate on
animation splines.
– Step In: For each keyframe, creates sudden changes in value at the next keyframe to the right.
Similar to a hold keyframe in After Effects® or a static keyframe in the DaVinci Resolve Color page.
– Step Out: Creates sudden changes in value at every keyframe for which there’s a change in value
at the next keyframe to the right. Similar to a hold keyframe in After Effects or a static keyframe in
the DaVinci Resolve Color page.
– Reverse: Reverses the horizontal position of selected keyframes in time, so the
animation plays backward.
– Set Loop: Repeats the same pattern of keyframes over and over.
– Set Ping Pong: Repeats a reversed set of the selected keyframes and then a duplicate set of the
selected keyframes to create a more seamless pattern of animation.
– Set Relative: Repeats the same pattern of selected keyframes but with the values of each
repeated pattern of keyframes being incremented or decremented by the trend of all keyframes
in the selection. This results in a loop of keyframes where the value either steadily increases or
decreases with each subsequent loop.
– Select All: Selects every keyframe currently available in the Spline Editor.
– Click Append: Click once to select this tool and click again to de-select it. This tool lets you add
or adjust keyframes and spline segments (sections of splines between two keyframes) depending
on the keyframe mode you’re in. With Smooth or Linear keyframes, clicking anywhere above or
below a spline segment adds a new keyframe to the segment at the location where you clicked.
With Step In or Step Out keyframes, clicking anywhere above or below a line segment moves that
segment to where you’ve clicked.
– Time Stretch: If you select a range of keyframes, you can turn on the Time Stretch tool to show a
box you can use to squeeze and stretch the entire range of keyframes relative to one another, to
change the overall timing of a sequence of keyframes without losing the relative timing from one
keyframe to the next. Alternatively, you can turn on Time Stretch and draw a bounding box around
the keyframes you want to adjust to create a time-stretching boundary that way. Click Time Stretch
a second time to turn it off.
– Shape Box: Turn on the Shape Box to draw a bounding box around a group of control points you
want to adjust in order to horizontally squish and stretch (using the top/bottom/left/right handles),
corner pin (using the corner handles), move (dragging on the box boundary), or corner stretch
(Command-drag the corner handles).
– Show Key Markers: Turning on this control shows keyframes in the top ruler that correspond to the
frame at which each visible control point appears. The colors of these keyframes correspond to
the color of the control points they’re indicating.
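To make the looping tools above concrete, here is a numeric sketch of Set Relative, where each repeat of the pattern is shifted by the overall trend of the selected keyframes. The function and data layout are ours for illustration, not Fusion’s.

```python
def relative_loop(keyframes, repeats):
    """Extend (frame, value) keyframes the way Set Relative is described:
    each repeated pattern is offset in time by the pattern's duration and
    in value by its overall trend (last value minus first value)."""
    out = list(keyframes)
    span = keyframes[-1][0] - keyframes[0][0]   # duration of the pattern
    trend = keyframes[-1][1] - keyframes[0][1]  # net change in value
    for i in range(1, repeats + 1):
        # Skip the first point of each repeat; it coincides with the
        # last point of the previous cycle.
        out += [(t + i * span, v + i * trend) for t, v in keyframes[1:]]
    return out

relative_loop([(0, 0.0), (10, 5.0)], 2)
# → [(0, 0.0), (10, 5.0), (20, 10.0), (30, 15.0)]
```

Each loop's values steadily climb (or fall) by the trend, producing the ramping repetition the manual describes.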
The Thumbnail timeline lets you navigate the Timeline and manage versions of compositions.
TIP: If you drag one or more clips from the Media Pool onto a connection line between two
nodes in the Node Editor, the clips are automatically connected to that line via enough Merge
nodes to connect them all.
For more information on using the myriad features of the Media Pool, see Chapter 17, “Adding and
Organizing Media with the Media Pool” in the DaVinci Resolve Reference Manual.
To add media by dragging one or more clips from the Finder to the Fusion page Media Pool
(macOS only):
1 Select one or more clips in the Finder.
2 Drag those clips into the Media Pool of DaVinci Resolve, or to a bin in the Bin list. Those clips are
added to the Media Pool of your project.
For more information on importing media using the myriad features of the Media page, see Chapter 17,
“Adding and Organizing Media with the Media Pool” in the DaVinci Resolve Reference Manual.
Similar to the Media Pool in DaVinci Resolve, adding an item to the Fusion bins creates a link
between the item on disk and the bins; Fusion does not copy the file into its own cache or hard drive
space. The file remains in its original format and in its original location.
Bins Interface
The Bins window is actually a separate application used to save content you may want to reuse at a
later time. The Bins window is separated into two panels. The sidebar on the left is a bin list where
items are placed into categories, while the panel on the right displays the selected bin’s content.
The Bin list organizes content into bins or folders using a hierarchical List view. These folders can be
organized to suit your workflow, but standard folders are provided for Clips, Compositions, Favorites,
Settings, and Tools. Parent folders contain subfolders that hold the content. For instance, the Tools bin
is a parent folder to all the categories of tools. To access subfolders, click the disclosure arrow to the
left of the parent folder’s name.
When you select a bin from the bin list, the contents of the folder are displayed in the Contents panel
as thumbnail icons.
A toolbar along the bottom of the bin provides access to organization, playback, and editing controls.
For more information on Bins and the Studio Player, see Chapter 74, “Bins” in the DaVinci Resolve
Reference Manual or Chapter 13 in the Fusion Reference Manual.
The Console
The Console is a window in which you can see the error, log, script, and input messages that may
explain something Fusion is trying to do in greater detail. The Console is also where you can read
FusionScript outputs, or input FusionScripts directly. In DaVinci Resolve, the Console is available by
choosing Workspace > Console or choosing View > Console in Fusion Studio. There is also a Console
button in the Fusion Studio User Interface toolbar.
Occasionally the Status bar displays a badge to let you know there’s a message in the Console you
might be interested in.
A toolbar at the top of the Console contains controls governing what the Console shows. At the top
left, the Clear Screen button clears the contents of the Console. The next four buttons toggle the
visibility of error messages, log messages, script messages, and input echoing. Showing only a
particular kind of message can help you find what you’re looking for when you’re under the gun at
3:00 in the morning. The next three buttons let you choose the input script language. Lua 5.1 is the
default and is installed with Fusion. Python 2.7 and Python 3.6 require that you install the appropriate
Python environment on your computer. Because scripts in the Console are executed immediately, you
can switch between input languages at any time.
At the bottom of the Console is an Entry field. You can type scripting commands here for execution in
the current comp context. Scripts are entered one line at a time, and are executed immediately. For
more information on scripting, see the Fusion Scripting Manual.
Customizing Fusion
This section explains how you can customize Fusion to accommodate whatever workflow
you’re pursuing.
In DaVinci Resolve, configure and resize the panels you want displayed and then:
– Choose Workspace > Layout Presets > Save Layout Presets.
In Fusion Studio, configure and resize the panels you want displayed and then:
– Click the Grab Document Layout button in the Preferences > Layout panel to save the layout for all
new Compositions.
– Click the Grab Program Layout button to remember the size and position of any floating views, and
enable the Create Floating Views checkbox to automatically create the floating windows when
Fusion restarts.
When using multiple monitors, you can choose to have floating panels spread across your displays for
greater flexibility.
Once you’ve selected a step to undo to, the menu closes and the project updates to show you its
current state.
The Undo History window lets you browse the entire available undo stack of the current page.
Getting Clips
into Fusion
This chapter details the various ways you can move clips into Fusion
as you build your compositions.
Contents
Preparing Compositions in the Fusion Page 60
Working on Single Clips in the Fusion Page 60
Turning One or More Clips into Fusion Clips 61
Adding Fusion Composition Generators 62
Creating a Fusion Composition Clip in a Bin 63
Using Fusion Transitions 63
Adding Clips from the Media Pool 64
Adding Clips from the File System 65
Using MediaIn Nodes 65
MediaIn Node Inputs 65
Inspector Properties of MediaIn Nodes 65
Using Loader and Saver Nodes in the Fusion Page 68
Preparing Compositions in Fusion Studio 70
Setting Up a Composition 72
Reading Clips into Fusion Studio 73
Aligning Clips in a Fusion Studio Composition 74
Loader Node Inputs 75
Using Proxies for Better Performance 75
Presetting Proxy Quality 76
File Format Options 77
Loading Audio WAV Files in Fusion Studio 79
The default node tree that appears when you first open the
Fusion page while the playhead is parked on a clip.
This initial node structure makes it easy to quickly use the Fusion page to create relatively simple
effects using the procedural flexibility of node-based compositing.
For example, if you have a clip that’s an establishing shot, with no camera motion, that needs some
fast paint to cover up a bit of garbage in the background, you can open the Fusion page, add a Paint
node, and use the Clone mode of the Stroke tool to paint it out quickly.
TIP: The resolution of a single clip brought into Fusion via the Edit or Cut page Timeline is the
resolution of the source clip, not the Timeline resolution.
Once you’ve finished, simply go back to the Edit or Cut page and continue editing, because the entire
Fusion composition is encapsulated within that clip, similarly to how grades in the Color page are also
encapsulated within a clip. However you slip, slide, ripple, roll, or resize that clip, the Fusion effects
you’ve created and the Color page grades you’ve made follow that clip’s journey through your
edited Timeline.
The nice thing about creating a Fusion clip is that every superimposed clip in a stack is automatically
connected into a cascading series of Merge nodes that create the desired arrangement of clips. Note
that whatever clips were in the bottom of the stack in the Edit page appear at the top of the Node
Editor in the Fusion page, but the arrangement of background and foreground input connections is
appropriate to recreate the same compositional order.
The initial node tree of the three clips we turned into a Fusion clip.
TIP: Fusion clips change the working resolution of the individual clips to match the Timeline
resolution. For instance, if two 4K clips are stacked one on top of the other in an HD Timeline,
creating a Fusion clip resizes the clips to HD. The full resolution of the individual 4K clips is
not available in Fusion. To maintain the full resolution of source clips, bring only one clip into
the Fusion composition from the Edit or Cut page Timeline, and then bring other clips into the
Fusion composition using the Media Pool.
NOTE: Audio on a track beneath a Fusion Composition effect cannot be heard in the
Fusion page.
To learn about creating custom Fusion Transitions that appear in the Effects Library, go to Chapter 67,
“Node Groups, Macros, and Fusion Templates” in the DaVinci Resolve Reference Manual or Chapter 6
in the Fusion Reference Manual.
Dragging a clip from the Media Pool (Left), and dropping it onto your composition (Right).
When you add a clip by dragging it into an empty area of the Node Editor, it becomes a disconnected
MediaIn node, ready for you to merge into your current composite in any one of a
variety of ways.
TIP: Dragging a clip from the Media Pool on top of a connection line between two other
nodes in the Node Editor adds that clip as the foreground clip to a Merge node.
When you add additional clips from the Media Pool, those clips become a part of the composition,
similar to how Ext Matte nodes you add to the Color page Node Editor become part of that
clip’s grade.
To hear audio from a clip brought in through the Media Pool, do the following:
1 Select the clip in the Node Editor.
2 In the Inspector, click the Audio tab and select the clip name from the Audio Track
drop-down menu.
3 Right-click the speaker icon in the toolbar, then choose the MediaIn for the Media Pool
clip to solo its audio.
TIP: If you connect a mask node without any shapes drawn, that mask outputs full
transparency, with the result that the image output by the MediaIn node is uselessly blank. If
you want to rotoscope over a MediaIn node, first create a disconnected mask node, and with
the mask node selected (exposing its controls in the Inspector) and the MediaIn node loaded
into the viewer, draw your mask. Once the shape you’re drawing has been closed, you can
connect the mask node to the MediaIn node’s input, and you’re good to go.
TIP: All content in the DaVinci Resolve Fusion page is processed using 32-bit floating-point
bit depth, regardless of the content’s actual bit depth.
Audio Tab
The Inspector for the MediaIn node contains an Audio tab, where you can choose to solo the audio
from the clip or hear all the audio tracks in the Timeline.
The Audio tab in the MediaIn node is used to select the track for playback, slip the audio timing, and
reset the audio cache.
If the audio is out of sync when playing back in Fusion, the Audio tab’s Sound Offset wheel allows you
to slip the audio in subframe increments. The slipped audio is modified only in the Fusion page.
All other pages retain the original audio placement.
To purge the audio cache after any change to the audio playback:
– Click the Purge Audio Cache button in the Inspector.
The audio will be updated when you next play back the composition.
If the composition has unsaved changes, a dialog box appears allowing you to save before closing.
TIP: Compositions that have unsaved changes will display an asterisk (*) next to the
composition’s name in the Fusion Studio title bar and in the composition’s tab.
Auto Save
Auto save automatically saves the composition to a temporary file at preset intervals. Auto saves help
to protect you from loss of work due to power loss, software issues, or accidental closure.
To enable auto save for new compositions, choose Fusion Studio > Preferences, and then locate
Global > General > Auto Save in the Preferences dialog.
An auto-save file does not overwrite the current composition in the file system. A file with the same
name is created in the same folder as the composition but with the extension .autosave instead of
.comp. Unsaved compositions place the auto-save file in the default folder specified by the Comp:
path in the Paths panel of the Global preferences.
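The renaming convention can be illustrated with a short sketch. The helper below is ours and models only the saved-composition case, not the default-folder fallback for unsaved comps:

```python
from pathlib import Path

def autosave_path(comp_path):
    # A saved .comp file gets a sibling file with the .autosave extension.
    return Path(comp_path).with_suffix(".autosave")

autosave_path("/projects/shot010.comp")  # shot010.comp → shot010.autosave
```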
If an auto-save file is present when Fusion Studio loads a composition, a dialog appears asking
whether to load the auto-saved or the original version of the composition.
To import a composition from Fusion Studio into the Fusion page within DaVinci Resolve:
1 From within Fusion Studio, open the composition you want to move into the Fusion page.
2 From within DaVinci Resolve, switch to the Fusion page with an empty composition. The
composition you import will completely replace the existing composition in the Fusion page
Node Editor.
3 Choose File > Import Fusion Composition.
4 In the Open dialog, navigate to the Fusion comp and click Open.
The new comp is loaded into the Node Editor, replacing the previously existing composition.
TIP: To keep an existing comp in the Fusion page and merge a new comp from Fusion Studio,
open Fusion Studio, select all the nodes in the Node Editor, and press Command-C to copy
the selected nodes. Then, open DaVinci Resolve and switch to the Fusion page with the
composition you want, click in an empty location in the Node Editor, and press Command-V to
paste the Fusion Studio nodes. Proceed to connect the pasted node tree into the existing
one using a Merge or Merge 3D node.
Setting Up a Composition
Source media can come in a variety of formats, including HD, UHD, and 4K or larger. Often you will
have different formats within a single comp. Each format has different properties, from resolution to
color depth and gamma curve. Fusion can mix and match material of different formats together in a
single composite, but it is important to note how Fusion Studio configures and combines materials of
different formats when loading and merging them together.
When you open Fusion Studio, an empty composition is created. The first thing you do when starting
on a new composition is to set the preferences to match the intended final output format. The
preferences are organized into separate groups: one for global preferences, and one for the
preferences of the currently opened composition.
Although the final output resolution is determined in the Node Editor, the Frame Format preferences
are used to determine the default resolution used for new Creator tools (i.e., text, background, fractals,
etc.), aspect ratio, as well as the frame rate used for playback.
If the same frame format is used day after day, the global Frame Format preferences should match the
most commonly used footage. For example, on a project where the majority of the source content will
be 1080p high definition, it makes sense to set up the global preferences to match the frame format of
the HD source content you typically use.
To set up the default Frame Format for new compositions, do the following:
1 Choose Fusion Studio > Preferences.
2 Click the Global and Default Settings disclosure triangle in the sidebar to open the Globals group.
3 Select the Frame Format category to display its options.
When you set options in the Global Frame Format category, they determine the default frame format
for any new composition you create. They do not affect existing compositions or the composition
currently open. If you want to make changes to existing compositions, you must open the comp. You
can then select the Frame Format controls listed under the comp’s name in the sidebar.
For more information on preferences, see Chapter 76, “Preferences” in the DaVinci Resolve Reference
Manual or Chapter 15 in the Fusion Reference Manual.
If multiple files are dragged into the Node Editor, a separate Loader is added for each file. However, if
you drag a single frame from an image sequence, the entire series of the image sequence is read into
the comp using one Loader, as long as the numbers are sequential.
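The “sequential numbers” condition can be sketched as a quick check. This is a simplified illustration, not Fusion’s actual loader logic, and it assumes every filename ends in a frame number:

```python
import re

def is_sequential(filenames):
    # Extract each trailing frame number and confirm the sorted list
    # has no gaps -- the condition for reading the files as one clip.
    frames = sorted(int(re.search(r"(\d+)\.\w+$", name).group(1))
                    for name in filenames)
    return all(b - a == 1 for a, b in zip(frames, frames[1:]))

is_sequential(["shot_0001.exr", "shot_0002.exr", "shot_0003.exr"])  # → True
is_sequential(["shot_0001.exr", "shot_0003.exr"])                   # → False
```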
TIP: Using File > Import > Footage creates a new composition along with a Loader node for
the footage. The selected media is automatically used for the name of the composition.
For more information about the Loader node, see Chapter 105, “I/O Nodes” in the DaVinci Resolve
Reference Manual or Chapter 44 in the Fusion Reference Manual.
Below the filename in the Inspector is a Trim In and Out range slider. This range slider determines the
start frame and end frame of the clip. Dragging the Trim In will remove frames from the start of the clip,
and dragging the Trim Out will remove frames from the end of the clip.
Although you may remove frames from the start of a clip, the Global In always determines where in
time the clip begins in your comp. For instance, if the Loader has a Global In starting on frame 0, and
you trim the clip to start on frame 10, then frame 10 of the source clip will appear at the comp’s starting
point on frame 0.
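The relationship between Global In, Trim In, and comp time can be expressed as simple arithmetic. The function and argument names below are illustrative:

```python
def source_frame(comp_frame, global_in, trim_in):
    """Which source frame is shown at a given comp frame.

    The clip begins at comp frame `global_in`, and the first frame it
    shows is frame `trim_in` of the source clip.
    """
    if comp_frame < global_in:
        return None  # before the clip starts in the comp
    return trim_in + (comp_frame - global_in)

source_frame(0, global_in=0, trim_in=10)  # → 10, matching the example above
```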
Instead of using the Inspector to adjust timing, it is visually more obvious if you use the Keyframes
Editor. For more information on the Keyframes Editor and adjusting a clip’s time, see Chapter 70,
“Animating in Fusion’s Keyframe Editor” in the DaVinci Resolve Reference Manual or Chapter 9 in the
Fusion Reference Manual.
TIP: If you connect a Mask node without any shapes drawn, that mask outputs full
transparency, so the result is that the image output by the MediaIn node is blank. If you want
to rotoscope over a MediaIn node, first create a disconnected Mask node, and with the Mask
node selected and the MediaIn node loaded into the viewer, draw your mask. Once the
shape you’re drawing has been closed, connect the Mask node to the MediaIn node’s input,
and you’re good to go.
– In Fusion Studio, click the Proxy (Prx) button in the transport area to enable the usage of proxies.
The Proxy option reduces the resolution of the images as you view and work with them. Instead of
displaying every pixel, the Proxy option processes one out of every x pixels interactively. In Fusion
Studio, the value of x is determined by right-clicking the Prx button and selecting a proxy ratio
from the drop-down menu. For instance, choosing 5 from the menu sets the ratio at 5:1. In the
Fusion page, the proxy ratio is set by choosing Fusion > Fusion Settings and setting the Proxy
slider in the General panel.
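As a rough model of the proxy ratio’s effect on working resolution, assuming the ratio applies to each axis:

```python
def proxy_resolution(width, height, ratio):
    # Processing one out of every `ratio` pixels on each axis divides
    # both dimensions by the proxy ratio.
    return width // ratio, height // ratio

proxy_resolution(1920, 1080, 5)  # → (384, 216)
```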
The Auto Proxy button enables Fusion to interactively degrade the image only while adjustments are
made. The image returns to normal resolution when the control is released. Similar to the Prx button in
Fusion Studio, you can set the Auto Proxy ratio by right-clicking the APrx button and choosing a ratio
from the menu.
When a Loader node is selected, the Inspector includes a Proxy Filename field where you can specify
a clip that will be loaded when the Proxy mode is enabled. This allows smaller versions of the image to
be loaded to speed up file I/O from disk and processing. This is particularly useful when working with
high resolution files like EXR that might be stored on a remote server. Lower resolution versions of the
elements can be stored locally, reducing network bandwidth, interactive render times, and
memory usage.
The proxy clip that you create must have the same number of frames as the original clip, and if using
image sequences, the sequence numbers for the clip must start and end on the same frame numbers.
If the proxies are the same format as the original files, the proxies will use the same format options in
the Inspector as the originals.
DPX
The Format tab in Fusion Studio’s Loader node for DPX files is used to convert image data from
logarithmic to linear. These settings are often left in bypass mode, and the Log to Linear conversion is
handled using a Cineon Log node.
OpenEXR
The OpenEXR format provides a compact and flexible high dynamic range (float) format. The format
supports a variety of extra non-RGBA channels and metadata. These channels can be viewed and
enabled in the Format tab of the Inspector.
To load all layers individually from a PSD file, with appropriate blend modes, do one of the following:
– In DaVinci Resolve, switch to the Fusion page and choose Fusion > Import > PSD.
– In Fusion Studio, choose File > Import > PSD.
Using either of the methods above creates a node tree where each PSD layer is represented by a
node and one or more Merge nodes are used to combine the layers. The Merge nodes are set to the
Apply mode used in the PSD file and automatically named based on the Apply mode setting.
QuickTime
QuickTime files can potentially contain multiple tracks. You can use the Format tab in the Inspector to
select one of the tracks.
You can either load the audio file independently of any nodes, or load an audio file into the Saver node.
The benefit of using a Saver node to load the audio is that you can view the audio waveforms in the
Keyframes Editor.
When you want to find the precise location of an audio beat, transient, or cue, you can slowly drag
over the audio waveform to hear the audio.
Rendering Using
Saver Nodes
This chapter covers how to render compositions using Saver nodes in Fusion Studio
and the Fusion page in DaVinci Resolve. It also covers how to render using multiple
computers over a network when using Fusion Studio.
Contents
Rendering Overview 81
Rendering in the Fusion Page 81
Rendering in Fusion Studio 82
Rendering with the Saver Node 82
Setting Filenames for Export 83
Using the Render Settings Dialog 84
Render Settings Dialog Options 85
Rendering Previews 86
Setting Up Network Rendering in Fusion Studio 86
Licensing for Network Rendering 87
Configuring the Render Master and Render Nodes 88
Setting Up the Render Manager 90
Submitting Comps to Network Render 91
Using the Render Settings Dialog for Network Rendering 91
Using the Render Manager Window for Network Rendering 92
Working with Node Groups 93
Viewing the Render Log 94
Using Third-Party Render Managers with Fusion Studio 94
Rendering Overview
When you have finished creating a composition in Fusion, you need to render the files out to disk for
playback and integration into a larger timeline. Fusion Studio and the Fusion page in DaVinci Resolve
use very different workflows for rendering. To finish a composite in the Fusion page, you use a
MediaOut node to cache the results into the Edit or Cut page Timeline. The DaVinci Resolve Deliver
page handles the final render of the entire Timeline. To get completed composites out of Fusion
Studio, you configure and render them starting with a Saver node in the Node Editor. Fusion Studio is
also capable of distributing a variety of rendering tasks to other machines on a network.
A single Saver node is added to the end of a node tree to render the final composite.
You can attach multiple Saver nodes anywhere along the node tree to render out different parts of a
composite. In the example below, three Saver nodes are added at different points in the node tree.
The top two render out each half of the composite while the bottom renders the results of the entire
composite.
You can also use multiple Saver nodes stemming from the same node in order to create several output
formats. The example below uses three Saver nodes to export different formats of the same shot.
Adding a Saver node to a node tree automatically opens a Save dialog where you name the file and
navigate to where the exported file is saved. You can then use the Inspector to configure the
output format.
For more information on the Saver node, see Chapter 105, “I/O Nodes” in the DaVinci Resolve
Reference Manual or Chapter 44 in the Fusion Reference Manual.
If you decide to output an image sequence, a four-digit frame number is automatically added before
the filename extension. For example, naming your file image_name.exr results in files named
image_name0000.exr, image_name0001.exr, and so on. You can specify the frame padding by adding
zeroes to the filename to indicate the number of digits. For example, entering the filename
image_name_000.exr results in a sequence of images named image_name_000.exr,
image_name_001.exr, image_name_002.exr, and so on.
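The naming scheme described above can be sketched in Python. This helper is only an illustration of the padding logic, not Fusion's actual implementation:

```python
import os
import re

def padded_name(filename, frame):
    """Mimic the frame-numbering scheme described above (illustrative only).

    If the name ends in a run of zeroes before the extension, that run
    sets the padding width; otherwise, four digits are used by default.
    """
    base, ext = os.path.splitext(filename)
    match = re.search(r"0+$", base)
    if match:
        width = len(match.group())       # user-specified padding
        base = base[:match.start()]
    else:
        width = 4                        # default four-digit frame number
    return f"{base}{frame:0{width}d}{ext}"

# padded_name("image_name.exr", 1)      -> "image_name0001.exr"
# padded_name("image_name_000.exr", 2)  -> "image_name_002.exr"
```

Running the helper over a frame range reproduces the sequences shown in the examples above.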
NOTE: The starting frame number always uses the Time Ruler start frame number.
The Render Settings dialog opens providing options for the rendered output.
Ensure that the frame range and other parameters are correct and click Start Render.
Settings
When the Configurations section is set to Preview, the Settings section of the Render dialog includes
three options that determine the overall quality and appearance of your final output. These buttons
also have a significant impact on render times. When the Configurations section is set to Final, these
options cannot be disabled.
– HiQ: When enabled, this setting renders in full image quality. If you need to see what the final
output of a node will look like, enable the HiQ setting. If you are producing a rough preview to
test animation, you can save yourself time by disabling this setting.
– MB: The MB in this setting stands for Motion Blur. When enabled, this setting renders with motion
blur applied if any node is set to produce motion blur. If you are generating a rough preview and
you aren’t concerned with the motion blur for animated elements, then you can save yourself time
by disabling this setting.
– Some: When Some is enabled, only the nodes specifically needed to produce the image of the
node you’re previewing are rendered.
Size
When the Configurations section is set to Preview, you can use the Size options to render out frame
sizes lower than full resolution. This is helpful when using the Render dialog to create proxies or just
creating a smaller file size.
Network
The Network setting controls the distribution of rendering to multiple computers. For more information,
see the network rendering section in this chapter.
Shoot On
Again, this option is only available when Configurations is set to Preview. The Shoot On setting allows
you to skip frames when rendering. You can choose to render every second, third, or fourth frame to
save render time and get faster feedback. You use the Step parameter to determine the interval at
which frames are rendered.
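As a quick illustration of how the Step parameter thins out a frame range (assumed logic, not Fusion's internals):

```python
def frames_to_render(start, end, step=1):
    """Return the frames rendered for a given Shoot On step.

    step=2 renders every second frame, step=3 every third, and so on.
    The range is inclusive, matching a render in/out range.
    """
    return list(range(start, end + 1, step))

# frames_to_render(0, 10, 3) -> [0, 3, 6, 9]
```

A step of 1 renders every frame in the range, which is the behavior of a final render.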
Frame Range
Regardless of whether Configurations is set to Final or Preview, this option defaults to the current
Render In/Out Range set in the Time Ruler to determine the start and end frames for rendering. You
can modify the range to render more or fewer frames.
Configurations
When set to Final, the Render Settings are set to deliver the highest quality results, and you cannot
modify most of the options in this dialog. When set to Preview, you can set the options to gain faster
rendering performance. Once you’ve created a useful preview configuration, you can save it for later
use by clicking the Add button, giving it a name, and clicking OK.
TIP: Option-Shift-dragging a node into a viewer will skip the Render dialog and use the
previously used settings.
By default, the Render Node application will be added to the Start Menu on Windows under
Blackmagic Design. On macOS, it is added to the menu bar, and on Linux it appears in the app
launcher. Each time you log in to the computer, the Render Node application will run automatically.
To disable the Render Node application from starting up automatically, choose Quit from the Render
Node icon in the macOS menu. On Linux, right-click over the icon and choose Kill Process, and on
Windows, delete the shortcut from the Windows Startup directory.
Multi-License Dongles
Using a multi-license dongle, you can license 10 copies of Fusion Studio by connecting the dongle to
any computer on the same subnet. Since these licenses “float” over a network, Fusion Studio does not
have to be running on the same computer where the dongle is connected. As long as Fusion Studio is
on the same subnet, it can automatically find the license server and check out an available license.
Multi-seat dongles can be combined to tailor the number of Fusion seats in a larger facility.
For example, three dongles each licensed for 10 copies of Fusion Studio would serve up 30 licenses.
This also allows for redundancy: in the example above, three computers can act as license servers.
If the first server fails for some reason, Fusion Studio will automatically try the next server.
Alternatively, multiple dongles can also be plugged into a single computer.
NOTE: The use of straight quotes (" ") in the environment variables above is intentional, and
they should not be replaced with typographer’s, or curly, quotes (“ ”).
Once a computer is enabled to act as the master, use the Render Manager to add the Render Nodes it
will manage. The Render Manager dialog is described in detail later in this chapter.
In Fusion Studio, you can enable the computer to be used as a Render Node in two ways:
– Choose File > Allow Network Renders.
– Enable the Allow This Machine to Be Used as a Network Slave checkbox in the Global >
Network Preferences.
The Render Manager is used to reorder, add, and remove compositions from a render queue.
The Render Master is always listed as the first computer in the Slave list along the right side. This
allows the Render Manager to render local queues without using the network. For the Render Master
to control additional Render Nodes, the nodes must be added to the Slave list.
Right-clicking in the Slave list allows you to add Render Nodes by entering the Render Node’s name or
IP address. You can also choose Scan to have the Render Manager look for Render Nodes on the
local network.
In the Add Slave dialog that opens, enter the name or the IP address of the remote Render Node.
The Render Manager will attempt to resolve names into IP addresses and IP addresses into names
automatically. You can use this method to add Render Nodes to the list when they are not currently
available on the network.
NOTE: Distributed network rendering works for image sequences like EXR, TIFF, and DPX.
You cannot use network rendering for QuickTime, H.264, ProRes, or MXF files.
When a render is submitted to the network, it is automatically sent to the All group. However, you can
choose to submit it to other groups in the list.
Continuing with the group example above, five Render Nodes are contained in the All group,
and two of those Render Nodes are also in the Hi-Performance group. If you submit a render to the
Hi‑Performance group, only two of the computers on the network are used for rendering.
If a composition is then submitted to the All group, the remaining three machines will start rendering
the new composition. Once the two Render Nodes in the Hi-Performance group complete the first
job, they join the render in progress on the All group.
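The group behavior described above can be modeled with a toy scheduler: a job submitted to a group runs only on Render Nodes belonging to that group. The node names here are hypothetical, and this is a sketch of the rule, not the Render Manager's actual scheduler:

```python
def eligible_nodes(nodes, job_group):
    """Return the Render Nodes allowed to work on a job.

    `nodes` maps a node name to the set of groups it belongs to.
    Every node implicitly belongs to the "All" group.
    """
    return sorted(
        name for name, groups in nodes.items()
        if job_group == "All" or job_group in groups
    )

render_nodes = {
    "node1": {"All"},
    "node2": {"All"},
    "node3": {"All"},
    "node4": {"All", "Hi-Performance"},
    "node5": {"All", "Hi-Performance"},
}

# A job sent to "Hi-Performance" uses only two of the five machines:
# eligible_nodes(render_nodes, "Hi-Performance") -> ["node4", "node5"]
```

A job sent to "All" would use all five machines, matching the example in the text.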
Groups are optional and do not have to be used. However, groups can make managing large networks
of Render Nodes easier and more efficient.
There are two modes for the Render Log: a Verbose mode and a Brief mode. Verbose mode logs all
events from the Render Manager, while Brief mode logs only which frames are assigned to each
Render Node and when they are completed.
This would start up, render frames from 101 to 110, and then quit.
TIP: An X11 virtual frame buffer is required to make a headless Linux command line
interface work.
File paths can use relative paths based on the location of the saved comp file.
In this situation, using the Comp:\ path means your media location starts from your comp file’s location.
The relative path set in the Loader node would then be:
Comp:\Greenscreen\0810Green_0000.exr
If your source media’s actual file path uses a subfolder in the same folder as the comp file’s folder:
Volumes\Project\Shot0810\Footage\Greenscreen\0810Green_0000.exr
The relative path set in the Loader node would then be:
Comp:\..\Footage\Greenscreen\0810Green_0000.exr
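The way a Comp:\ path resolves can be sketched as follows. The comp file location used here is hypothetical, and the resolution rule (substitute the comp file's folder for the Comp:\ prefix, then collapse any ..\ segments) is an illustration of the mapping described above, not Fusion's internal code:

```python
import ntpath  # Windows-style path handling, works on any host OS

def resolve_comp_path(comp_file, relative_path):
    """Resolve a Comp:\\ virtual path against the comp file's folder.

    Illustrative only; Fusion's own path mapping handles this internally.
    """
    comp_dir = ntpath.dirname(comp_file)
    # Strip the "Comp:\" prefix and join the remainder with the comp's folder.
    tail = relative_path[len("Comp:\\"):]
    return ntpath.normpath(ntpath.join(comp_dir, tail))

comp = r"Volumes\Project\Shot0810\Comp\shot0810.comp"  # hypothetical location
print(resolve_comp_path(comp, r"Comp:\..\Footage\Greenscreen\0810Green_0000.exr"))
# -> Volumes\Project\Shot0810\Footage\Greenscreen\0810Green_0000.exr
```

The ..\ segment steps out of the comp's folder, which is why the Footage folder alongside it is reached.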
TIP: Some Path Maps are not set up on a Fusion Render Node automatically. For instance,
you must manually add an entry for macros if you are using macros in your comp.
Flipbook Previews
Fusion Studio can use Render Nodes to accelerate the production of Flipbook Previews, allowing
for lightning-fast previews. Frames for previews that are not network rendered are rendered directly
into memory. Select the Use Network checkbox and the Render Nodes to render the preview frames
to the folder set in the Preferences under Global > Path > Preview Renders. This folder should be
accessible to all Render Nodes that will participate in the network render. The default value is Temp\,
which is a virtual path pointing to each system’s default temp folder, so this will need to be changed
to a shared folder before network-rendered previews can function. Once the preview render is
completed, the frames produced by each Render Node are spooled into memory on the local
workstation. As each frame is copied into memory, it is deleted from disk.
Disk Cache
Right-clicking a node in the Node Editor and choosing Cache to Disk opens a dialog used to create
the disk cache. If you enable the Use Network checkbox and click the Pre-Render button to submit
the disk cache, the network Render Nodes are used to accelerate the creation of the disk cache.
Render Nodes can be used for disk caching as well as final renders.
Frame Timeouts
Frame timeouts are a fail-safe method of canceling a Render Node’s render if a frame takes longer
than the specified time (60 minutes by default). The frame timeout ensures that an overnight render
will continue if a composition hangs or begins swapping excessively and fails to complete its
assigned frame.
The timeout is set per composition in the queue. To change the timeout value for a composition from
the default of 60 minutes, right-click on the composition in the Render Manager’s queue list and select
Set Frame Timeout from the contextual menu.
To change the frame timeout value, choose Set Frame Time Out from the Render Manager’s Misc
menu and enter the number of seconds you want for the Time Out.
Often, the network environment is made up of computers with a variety of CPU and memory
configurations. The memory settings used on the workstation that created a composition may not be
appropriate for all the Render Nodes in the network. The Render Node software offers the ability to
override the memory settings stored in the composition and use custom settings more suited to the
system configuration of a specific Render Node.
Heartbeats
Heartbeats are periodic signals exchanged between the Render Master and its Render Nodes to
confirm that each node is still responding. The number of heartbeats in a row that must be missed
before a Render Node is removed from the list by the manager, as well as the interval of time between
heartbeats, can be configured in the Network Preferences panel of the master. The default settings for
these options are fine for 90% of cases.
If the compositions that are rendered tend to use more memory than is physically installed, this will
cause swapping of memory to disk. It may be preferable to increase these two settings somewhat to
compensate for the sluggish response time until more RAM can be added to the slave.
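The removal rule can be sketched as a small counter per Render Node. The default of three missed heartbeats used here is an assumption for illustration; the actual values come from the Network Preferences panel:

```python
class HeartbeatMonitor:
    """Track consecutive missed heartbeats for one Render Node.

    A node is dropped from the Slave list once `max_missed` heartbeats
    in a row go unanswered (illustrative model, not Fusion's code).
    """
    def __init__(self, max_missed=3):
        self.max_missed = max_missed
        self.missed = 0

    def heartbeat_received(self):
        self.missed = 0  # any reply resets the consecutive-miss count

    def heartbeat_missed(self):
        self.missed += 1
        return self.missed >= self.max_missed  # True -> remove the node

monitor = HeartbeatMonitor(max_missed=3)
monitor.heartbeat_missed()    # one miss: node stays on the list
monitor.heartbeat_received()  # a reply resets the count
# Only max_missed consecutive misses trigger removal.
```

Raising `max_missed` or lengthening the interval gives a sluggish, swapping node more time to answer, as the text above suggests.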
Time Stretching
Compositions using the Time Stretcher and Time Speed tools may encounter difficulties when
rendered over the network. Speeding up or slowing down compositions and clips requires fetching
multiple frames before and after the current frame that is being rendered, resulting in increased I/O to
the file server. This may worsen bottlenecks over the network and lead to inefficient rendering. If the
composition uses the Time Stretcher or Time Speed tools, make certain that the network is up to the
load or pre-render that part of the composition before network rendering.
Linear Tools
Certain tools cannot be network rendered properly. Particle systems from third-party vendors, such as
GenArts’s Smoke and Rain, and the Fusion Trails node cannot render properly over the network. These
tools generally store the previously rendered result and use it as part of the next frame’s render, so
every frame depends on the one rendered before it. This data is local to the tool, so these tools
do not render correctly over a network.
NOTE: The above does not apply to network rendered previews, which are previews created
over the network that employ spooling to allow multi-frame formats to render successfully.
Only final renders are affected by this limitation.
Troubleshooting
There are some common pitfalls when rendering across a network. Virtually all problems with network
rendering have to do with path names or plug-ins. Return to the “Preparing Compositions for Network
Rendering” section in this chapter to review some of the essential setup requirements. Verify that all
Render Nodes can load the compositions and the media, and that all Render Nodes have installed the
plug-ins used in the composition.
If some difficulties persist, contact Blackmagic Design’s technical support using the support section on
the Blackmagic Design website. Save a copy of the render.log file to send to technical support.
Working in the
Node Editor
This chapter discusses how to work in the Node Editor, including multiple ways to
add, connect, rearrange, and remove nodes to create any effect you can think of.
Contents
Learning to Use the Node Editor 105
Navigating within the Node Editor 106
Automatic Node Editor Navigation 106
Using the Node Navigator 106
Node View Bookmarks 107
Adding Nodes to a Composition 109
Adding, Inserting, and Replacing Nodes Using the Toolbar 109
Adding Nodes Quickly Using the Select Tool Window 110
Adding Nodes from the Effects Library 111
Adding, Inserting, and Replacing Nodes Using the Contextual Menu 114
Deleting Nodes 114
Disconnected Nodes 114
Selecting and Deselecting Nodes 115
Selecting Nodes 115
The Active Node 115
Deselecting Nodes 116
Loading Nodes into Viewers 116
Viewed Nodes When You First Open Fusion 117
Node View Indicators 117
Drag and Drop Nodes into a Viewer 117
This chapter discusses how to work in the Node Editor in greater detail, showing you how to add,
connect, rearrange, and remove nodes to create any effect you can think of.
To pan the Node Editor using the Node Navigator, do the following:
– Drag within the Node Navigator to move around different parts of your node tree.
– Within the Navigator, drag with two fingers on a trackpad to move around different
parts of your node tree.
The first nine saved bookmarks are given keyboard shortcuts and listed in the Options menu. They are
also listed in the Go To Bookmarks dialog along with any saved bookmarks beyond the initial nine.
TIP: You can return the Node Editor to the default scale by right-clicking in the Node Editor
and choosing Scale > Default Scale or pressing Cmd-1.
If your node tree changes and you want to rename or delete bookmarks, you can do so in the
Manage Bookmarks dialog.
Using Bookmarks
You can jump to a Bookmark view by selecting a bookmark listed in the Options menu or choosing
Go To Bookmarks to open the Go To Bookmarks dialog. The Go To Bookmarks dialog has all the
bookmarks listed in the order they were created in the current composition. Double-clicking on any
entry in the dialog will update the Node Editor to that view and close the Go To Bookmarks dialog.
If you have a long list of bookmarks, you can use the search field at the bottom of the dialog to enter
the name of the bookmark you want to find.
TIP: If you don’t know which node a particular icon corresponds to, just hover your pointer
over any toolbar button and a tooltip will display the full name of that tool.
To replace a node in the Node Editor with a node from the toolbar:
1 Drag a button from the toolbar so that it’s directly over the node in the Node Editor that you want
replaced. When the node underneath is highlighted, drop the node.
TIP: When you replace one node with another, any settings that are identical between the
two nodes are copied into the new node. For example, replacing a Transform node with a
Merge will copy the existing center and angle values from the Transform to the Merge.
TIP: Whenever you use the Select Tool window, the text you entered is remembered the next
time you open it, so if you want to add another node of the same kind—for example, if you
want to add two Blur nodes in a row—you can just press Shift-Spacebar and then press
Return to add the second Blur node.
The Effects Library appears at the upper-left corner of the Fusion window, and consists of two panels.
A category list at the left shows all categories of nodes and presets that are available, and a list at the
right shows the full contents of each selected category.
By default, the category list shows two primary sets of effects, Tools and Templates, with disclosure
controls to the left that hierarchically show all subcategories within each category. The top two
categories are:
– Tools: Tools consist of all the effects nodes that you use to build compositions, organized by
categories such as 3D, Blur, Filter, Mask, Particles, and so on. If you have third-party OFX plug-ins
on your workstation, those appear here as well.
– Templates: When using the Fusion page in DaVinci Resolve, templates consist of presets, macros,
and utilities that have been created to get you started quickly. For example, Backgrounds consists
of a variety of customizable generators that have been created using a combination of Fusion
tools. Lens flares presents a wide variety of multi-element lens flares that you can add to any
composition. Particles has a selection of pre-made particle systems that you can customize for
your own use. Shaders has a variety of materials that you can use as texture maps for 3D text and
geometry that you create in Fusion. And there are many, many other categories’ worth of useful
presets and macros that you can learn from and use in your own projects.
To replace a node in the Node Editor with a node from the Effects Library:
1 Drag a node from the browser of the Effects Library so it’s directly over the node in the Node
Editor that you want replaced. When that node is highlighted, drop it.
2 Click OK in the dialog to confirm the replacement.
Other times, such as when adding an item from the “How to” category, dragging a single item from the
Effects Library results in a whole node tree being added to the Node Editor. Fortunately, all nodes of
the incoming node tree are automatically selected when you do this, so it’s easy to drag the entire
node tree to another location in the Node Editor where there’s more room. When this happens, the
nodes of the incoming effect are exposed so you can reconnect and reconfigure them as necessary to
integrate the effect with the rest of your composition.
Adding a LightWrap effect from the “How to” bin of the Templates category of the Effects Library.
TIP: When you replace one node with another, any settings that are identical between the
two nodes are copied into the new node. For example, replacing a Transform node with a
Merge will copy the existing center and angle values from the Transform to the Merge.
Deleting Nodes
To delete one or more selected nodes, press Delete (macOS) or Backspace (Windows), or right-click
one or more selected nodes and choose Delete from the contextual menu. The node is removed from
the Node Editor, and whichever nodes are connected to its primary input and output are now
connected together. Nodes connected to other inputs (such as mask inputs) become disconnected.
Before deleting a node from a node tree (top), and after upstream and
downstream nodes have automatically reconnected (bottom).
Disconnected Nodes
It’s perfectly fine to have disconnected nodes, or even entire disconnected branches of a node tree, in
the Node Editor alongside the rest of a composition. All disconnected nodes are simply ignored while
being saved for possible future use. This can be useful when you’re saving nodes that you’ve
customized but later decided you don’t need. It’s also useful for saving branches of trees that you’ve
since exported to be self-contained media that’s re-imported to take the place of the original effect,
but you want to save the original nodes just in case you need to redo your work.
Selecting Nodes
Selecting nodes is one of the most fundamental things you can do to move nodes or target them for
different operations. There are a variety of methods you can use.
The active node is highlighted orange, while other selected nodes are highlighted white.
To set the active node when there are multiple selected nodes:
– Option-click one of the selected nodes in the Node Editor to make that one the active node.
– Open the Inspector (if necessary), and click a node’s header bar to make it the active node.
As seen in the screenshot above, you’ll want to load the upstream MediaIn or Loader node into a
viewer while the Polygon node is selected for editing in order to see the full image you’re rotoscoping
while keeping the Polygon node’s spline visible.
For complex compositions, you may need to open additional viewers. For example, one viewer may be
used to display the end result of the final comp, while another viewer displays the source, a third
viewer displays a mask, and a fourth viewer might be a broadcast monitor connected via a Blackmagic
DeckLink card or other display hardware. When you have more than two viewers, additional View
indicators are added and each one is assigned a consecutive number between 3 and 9.
The more viewers you add, the more you may need help remembering which viewer is represented by
which View indicator. Positioning the pointer over the View indicator in question will display a tooltip
with the name of the viewer it represents.
Clearing Viewers
Whenever you load a node into a viewer, you prompt that node, all upstream nodes, and other related
nodes to be rendered. If you load nodes into both viewers, this is doubly true. If you want to prevent
your computer from processing views that aren’t currently necessary, you can clear each viewer.
Create/Play Preview
You can right-click a node and choose an option from the Create/Play Preview On submenu of the
contextual menu to render and play a preview of any node’s output on one of the available viewers.
The Render Settings dialog is displayed, and after accepting the settings, the tool will be rendered and
the resulting frames stored in RAM for fast playback in that viewer.
TIP: Hold the Shift key when selecting the viewer from the menu to bypass the Render dialog
and to start creating the preview immediately using the default settings or the last settings
used to create a preview.
Node Basics
Each node displays small colored knots around the edges. One or more arrows represent inputs, and
the square represents the tool’s processed output, of which there is always only one. Outputs are white
if they’re connected properly, gray if they’re disconnected, or red to let you know that something’s
wrong and the node cannot process properly.
Each node takes as its input the output of the node before it. By connecting a MediaIn node’s output
to a Blur node, you move image data from the MediaIn node to the Blur node, which processes the
image before the Blur node’s output is in turn passed to the next node in the tree.
If you drop a connection on top of a node that already has the background input connected, then the
second most important connection will be attached, which for multi-input nodes is the foreground
input, and for other single-input nodes may be the Effects Mask input.
Some multi-input nodes are capable of adding inputs to accommodate many connections, such as
the Merge3D node. These nodes simply add another input whenever you drop a connection
onto them.
Before (left) and after (right) dragging a connection from a Mask node and dropping it on top of a MatteControl node.
TIP: Rather than remembering the different knot types, press the right mouse button, hold
Option, and drag from the output of a node to the center of another tool. When you release
the mouse, a tooltip will appear allowing you to select the knot you want to connect to.
As you can see above, connecting the Defocus node first, followed by the TV node, means that while
the initial image is softened, the TV effect is sharp. However, if you reverse the order of these two
nodes, then the TV effect distorts the image, but the Defocus node now blurs the overall result, so that
the TV effect is just as soft as the image it’s applied to. The explicit order of operations you apply
makes a big difference.
As you can see, the node tree that comprises each composition is a schematic of operations with
tremendous flexibility. Additionally, the node tree structure facilitates compositing by giving you the
ability to direct each node’s output into separate branches, which can be independently processed
and later recombined in many different ways, to create increasingly complex composites while
eliminating the need to precompose, nest, or otherwise compound layers together, which would impair
the legibility of your composition.
In the following example, several graphics layers are individually transformed and combined with a
series of Merge nodes. The result of the last Merge node is then transformed, allowing you to move
the entire collection of previous layers around at once. Because each of these operations is clearly
represented via the node tree, it’s easy to see everything that’s happening, and why.
TIP: To help you stay organized, there are Select > Upstream/Downstream commands in the
Node Editor contextual menu for selecting all upstream or downstream nodes to move them,
group them, or perform other organizational tasks.
By clicking and/or dragging these two halves, it’s possible to quickly disconnect, reconnect, and
overwrite node connections, which is essential to rearranging your node tree quickly and efficiently.
Hovering the pointer over a node highlights the color of all connections,
telling you what kinds of inputs are connected.
Additionally, positioning the pointer over a connection causes a tooltip to appear that displays the
output and input that connection is attached to.
Branching
A node’s input can only have one connection attached to it. However, a tool’s output can be
connected to inputs on as many nodes as you require. Splitting a node’s output to inputs on multiple
nodes is called branching. There are innumerable reasons why you might want to branch a node’s
output. A simple example is to process an image in several different ways before recombining these
results later on in the node tree.
Two MediaIn nodes and a DeltaKeyer node attached to a Merge node, creating a composite.
Additionally, if you drag two or more nodes from an OS window into the Node Editor at the same time,
Merge nodes will be automatically created to connect them all, making this a fast way to initially
build a composite.
If you like, you can change how connections are drawn by enabling orthogonal connections, which
automatically draws lines with right angles to avoid having connections overlap nodes.
Functionally, there’s no difference to your composition; this only affects how your node tree appears.
Routers are tiny nodes with a single input and an output, but with no parameters except for a
comments field (available in the Inspector), which you can use to add notes about what’s
happening in that part of the composition.
You can also branch a router’s output to multiple nodes, which makes routers especially handy for
keeping node trees neat in situations where you want to branch the output of a node in one part of
your node tree to nodes that are all the way on the opposite end of that same node tree.
Before swapping node inputs (left), and after swapping node inputs (right),
the connections don’t move but the colors change.
Inputs can move freely around the node, so swapping two inputs doesn’t move the connection lines;
instead, the inputs change color to indicate you’ve reversed the background (orange) and foreground
(green) connections.
After you’ve extracted a node, you can re-insert it into another connection somewhere else. You can
only insert one node at a time.
To insert a disconnected node in the Node Editor between two compatible nodes:
1 Hold down the Shift key and drag a disconnected node directly over a connection between two
other nodes.
2 Once the connection highlights, drop the node, and then release the Shift key. That node is now
attached to the nodes coming before and after it.
TIP: If you hold down the Shift key, you can extract a node and re-insert it somewhere else
with a single drag.
TIP: When you paste a MediaIn, Loader, or Generator node so it will be inserted after a
selected node in the node tree, a Merge tool is automatically created and used to composite
the pasted node by connecting it to the foreground input. While this can save you a few
steps, some artists may prefer to perform these sorts of merges manually, so this can be
changed using the Default Preferences panel in the Global preferences.
If you then paste into a new text editing document, you get the node’s settings as a text-based script.
At this point, you have the option of editing the text (if you know what you’re doing), emailing it to
colleagues, or storing it in a digital notepad of some sort for future use. To use this script in Fusion
again, you need only copy it and paste it back into the Node Editor.
TIP: This is a very easy way to pass specific node settings back and forth between artists
who may not be in the same room, city, or country.
Instancing Nodes
Normally, when you use copy and paste to create a duplicate of a node, the new node is completely
independent from the original node, so that changes made to one aren’t rippled to the other. However,
there are times when two nodes must have identical settings at all times. For example, when you’re
making identical color corrections to two or more images, you don’t want to constantly have to adjust
one color correction node and then manually adjust the other to match. It’s a hassle, and you risk
forgetting to keep them in sync if you’re working in a hurry.
While there are ways to publish controls in one node and connect them to matching controls in
another node, this becomes prohibitively complex and time consuming for nodes in which you’re
making adjustments to several controls. In these cases, creating “instanced” nodes is a real time-saver,
as well as an obvious visual cue in your node tree as to what’s going on.
However you paste an instance, the name of that instanced node takes the form “Instance_
NameOfNode.” If you paste multiple instances, each instance is numbered, as in “Instance_
NameOfNode_01.”
A green link line shows an instanced Blur node’s relationship to the original Blur node it was copied from.
When a node tree contains instanced nodes, a green line shows the link between the original node
and its instances. You have the option to hide these green link lines to reduce visual clutter in the
Node Editor.
To toggle the visibility of green instance link lines in the Node Editor:
1 Right-click anywhere in the background of the Node Editor.
2 Choose Options > Show Instance Links from the contextual menu.
If you’ve been using an instance of a node and you later discover you need to use it to apply separate
adjustments, you can “de-instance” the node.
NOTE: If you’ve de-instanced a node and you cannot undo the operation because you’ve
restarted DaVinci Resolve, you can only recreate an instance by copying the original and
pasting an instance again.
Moving Nodes
Selecting one or more nodes and dragging them moves them to a new location, which is one of the
simplest ways of organizing a node tree, by grouping nodes spatially according to the role they play in
the overall composition.
Keep in mind that the location of nodes in the Node Editor is purely aesthetic, and does nothing to
impact the output of a composition. Node tree organization is purely for your own peace of mind, as
well as that of your collaborators.
TIP: Once you’ve arranged the nodes in a composition in some rational way, you can use the
Sticky Note and Underlay tools to add information about what’s going on and to visually
associate collections of nodes more definitively. These tools are covered later in this section.
TIP: You can set “Arrange to Grid” or “Arrange to Connected” as the default for new
compositions by choosing Fusion > Fusion Settings in DaVinci Resolve or File > Preferences
in Fusion Studio, and turning the Fusion > Node Editor > Arrange To Grid or Arrange to
Connected checkboxes on.
Renaming Nodes
Each node that’s created is automatically assigned a name (based on its function) and a number
(based on how many of that type of node have been created already). For example, the first Blur node
added to a composition will be called Blur1, the second will be Blur2, and so on. Although initially
helpful, larger compositions may benefit from important nodes having more descriptive names to make
it easier to identify what they’re actually doing, or to make it easier to reference those nodes in
expressions.
To rename a node:
1 Do one of the following:
– Right-click a node and choose Rename from the contextual menu.
– Select a node and press F2.
2 When the Rename dialog appears, type a new name, and then click OK or press Return.
NOTE: If multiple nodes are selected, multiple dialogs will appear asking for a name for each tool.
Since Fusion can be scripted and use expressions, the names of nodes must adhere to a scriptable
syntax. Only use alphanumeric characters (no special characters), and do not use any spaces.
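As a quick illustration (not Fusion’s own validation code), a name that follows this guidance can be checked with a simple pattern. The leading-letter and underscore rules below are assumptions, based on names like “Instance_Blur1” used elsewhere in this chapter:

```python
import re

# Illustrative check: accept letters, digits, and underscores,
# starting with a letter. The underscore and leading-letter rules
# are assumptions, not documented Fusion requirements.
VALID_NODE_NAME = re.compile(r"[A-Za-z][A-Za-z0-9_]*$")

def is_scriptable_name(name: str) -> bool:
    return VALID_NODE_NAME.match(name) is not None

print(is_scriptable_name("BG_Blur1"))    # True
print(is_scriptable_name("My Blur"))     # False: contains a space
print(is_scriptable_name("Blur-final"))  # False: special character
```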
Sticky Notes are yellow boxes in which you can type whatever text you want. They can be resized,
moved, and collapsed when they’re not being edited, but once created they remain attached to the
background of the Node Editor where you placed them until you either move them or delete them.
Underlay Boxes can be named to identify the purpose of that collection of nodes, and they can be
colored to be distinct from other Underlay Boxes or to adhere to some sort of color code for your
compositions.
– To delete an Underlay Box and all nodes within: Select an Underlay Box and press the Delete
key to delete both the Underlay Box and all nodes found inside it. If you don’t also want to delete
the nodes, first drag the nodes out of the box.
– To delete an Underlay Box but keep all nodes within: Option-click the Underlay Box to select it
and not the nodes, and then press the Delete key. The nodes within remain where they were.
Node Thumbnails
Once a source or an effect has been added to the Node Editor, it’s represented by a node. By default,
nodes are rectangular and thin, making it easier to fit reasonably complicated grades within a relatively
small area. However, if you like, you can also display node thumbnails.
Nodes can be displayed as a small rectangle or as a larger square. The rectangular form displays the
node’s name in the center, while the square form shows either the tool’s icon or a thumbnail of the
image it is outputting.
TIP: Even if you’re not displaying node thumbnails, you can quickly obtain detailed
information about a node and the data it’s processing by hovering your pointer over it in the
Node Editor and viewing the tooltip bar below.
NOTE: If Show Thumbnails is enabled, nodes may not update until the playhead is moved in
the Time Ruler.
When you’ve manually enabled thumbnails for different nodes, they’ll remain visible whether or not
those nodes are selected.
Finding Nodes
Modern visual effects require detailed work that often results in compositions with hundreds of nodes.
For such large node trees, finding things visually would have you panning around the Node Editor for a
long, long time. Happily, you can quickly locate nodes in the Node Editor using the Find dialog.
The Find window closes. If either the Find Next, Find Previous, or Find All operations are successful,
the found node or nodes are selected. If not, a dialog appears letting you know that the string could
not be found.
TIP: Finding all the nodes of a particular type can be very useful if you want, for example, to
disable all Resize nodes. Find All will select all the nodes based on the search term, and you
can temporarily disable them by pressing the shortcut for Bypass, Command-P.
Character Sets
Any characters typed between two brackets [ ] will be searched for. Here are some examples of
character set searches that work in Fusion.
[a-d]
Finds: Every lowercase letter from a to d, and will find nodes containing a, b, c, or d
[Tt]
Finds: Either an uppercase or lowercase t, and will find nodes containing T or t
[5-7]
Finds: Every numeral from five to seven, and will find nodes numbered with 5, 6, or 7
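These bracket expressions behave like standard regular-expression character classes. A minimal Python sketch (illustrative only; the node names below are made up) shows what each class matches:

```python
import re

nodes = ["Blur1", "blur2", "Text5", "text7", "Merge3"]

# [a-d] matches any one lowercase letter from a to d
print([n for n in nodes if re.search(r"[a-d]", n)])   # ['blur2']

# [Tt] matches an uppercase or lowercase t
print([n for n in nodes if re.search(r"[Tt]", n)])    # ['Text5', 'text7']

# [5-7] matches any one numeral from 5 to 7
print([n for n in nodes if re.search(r"[5-7]", n)])   # ['Text5', 'text7']
```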
TIP: You can also save six different settings for a node in the Node Editor using the Version
buttons at the top of the Inspector. For more information, see Chapter 69, “Editing Parameters
in the Inspector” in the DaVinci Resolve Reference Manual or Chapter 8 in the Fusion
Reference Manual.
If you browse this directory, the settings for each node are saved using a name taking the form
INTERNALNAME_PUBLICNAME.setting, where INTERNALNAME is the internal name of the Fusion
tool, and PUBLICNAME is the name of the node that’s derived from the internal Fusion tool. For
example, the default setting for a Blur node would be called Blur_Blur.setting. This naming convention
is partly to ensure that third-party plug-in nodes don’t overwrite the defaults for built-in Fusion nodes
that happen to have the same name.
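The convention can be sketched as a simple template (illustrative; the “XYZBlur” internal name below is hypothetical):

```python
def default_setting_filename(internal_name: str, public_name: str) -> str:
    """Build a default-setting filename of the form
    INTERNALNAME_PUBLICNAME.setting, as described above."""
    return f"{internal_name}_{public_name}.setting"

# The built-in Blur node's default:
print(default_setting_filename("Blur", "Blur"))      # Blur_Blur.setting

# A hypothetical third-party blur whose public name is also "Blur"
# still gets a distinct file, so it can't clobber the built-in default:
print(default_setting_filename("XYZBlur", "Blur"))   # XYZBlur_Blur.setting
```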
Resetting Defaults
Even if you’ve created new default settings for new nodes, you can always reset individual parameters
to the original default setting. In addition, it’s easy to restore the original default settings for new nodes
you create.
NOTE: When you use the Settings > Reset Default command, the default .setting file is
deleted. If you want to save a node’s settings as alternate settings, you should use the
Settings > Save As command.
TIP: If you drop a setting directly onto a connection line, the new node will be inserted onto
that connection.
Toggling any one of these node modes displays a badge within that node indicating its state.
If you wait a few moments, a more elaborate presentation of the same information appears
within a floating tooltip in the Inspector. This tooltip gives you additional information about the
Domain (Image and DoD) and the data range used by that clip.
Node Groups,
Macros, and
Fusion Templates
This chapter reveals how to use groups, macros, and templates in Fusion so working
with complex effects becomes more organized, more efficient, and easier.
Contents
Groups 148
Creating Groups 148
Deleting Groups 149
Expanding and Collapsing Groups 149
Panning and Scaling within Open Group Windows 149
Ungrouping Nodes 149
Saving and Reusing Groups 149
Macros 150
Creating Macros 150
Using Macros 152
Re-Editing Macros 152
Other Macro Examples 152
Creating Fusion Templates 152
Getting Started with a Fusion Title Template 152
Saving a Title Macro 153
Using Your New Title Template 156
Getting Started with a Fusion Transition Template 157
Creating a Fusion Transition Template 158
Groups
When you work on complex visual effects, node trees can become sprawling and unwieldy, so
grouping tools together can help you better organize all the nodes and connections. Groups are
containers in your node tree that can hold multiple nodes, similar to the way a folder on your Desktop
holds multiple files. There is no limit to the number of nodes that can be contained within a group, and
you can even create subgroups within a group.
Creating Groups
Creating a group is as simple as selecting the nodes you want to group together and using the
Group command.
To create a group:
1 Select the nodes you want grouped together.
2 Right-click one of the selected nodes and choose Group from the contextual menu (Command-G).
Several nodes selected in preparation for making a group (left), and the resulting group (right).
The selected nodes are collapsed into a group, which is displayed as a single node in the Node Editor.
The Group node can have inputs and outputs, depending on the connections of the nodes within the
group. The Group node only displays inputs for nodes that are already connected to nodes outside
the group. Unconnected inputs inside the group will not have an Input knot displayed on the
Group node.
When you open a group, a floating window shows the nodes within that group. This floating window is
its own Node Editor that can be resized, zoomed, and panned independently of the main Node Editor.
Within the group window, you can select and adjust any node you want to, and even add, insert, and
delete nodes while it is open. When you’re ready to collapse the group again, click the minimize button
at the top left corner of the floating window, or use the keyboard shortcut (Command-E).
Ungrouping Nodes
If you decide you no longer need a particular group, or you simply find it easier to have constant
access to all the nodes in the group at once, you can decompose or “ungroup” the group. This
eliminates the group itself without deleting the nodes within it, keeping the contents in the Node Editor.
In Fusion Studio, you can also save and reuse groups from the Bins window:
– To save a group: Drag the group from the Node Editor into the opened Bin window.
A dialog will appear asking you to name the group setting file and choose the location where it
should be saved on disk. The .setting file will be saved in the specified location and placed in the
bins for easy access in the future.
Macros
Some effects aren’t built with one tool, but with an entire series of operations, sometimes in complex
branches with interconnected parameter controls. Fusion provides many individual effects nodes for
you to work with but gives users the ability to repackage them in different combinations as
self‑contained “bundles” that are either macros or groups. These “bundles” have several advantages:
– They reduce visual clutter in your node tree.
– They ensure proper user interaction by allowing you to restrict which controls from each node of
the macro are available to the user.
– They improve productivity by allowing artists to quickly leverage solutions to common compositing
challenges and creative adjustments that have already been built and saved.
Macros and groups are functionally similar, but they differ slightly in how they’re created and presented
to the user. Groups can be thought of as a quick way of organizing a composition by reducing the
visual complexity of a node tree. Macros, on the other hand, take longer to create because of how
customizable they are, but they’re easier to reuse in other comps.
Creating Macros
While macros let you save complex functions for future use in very customized ways, they’re actually
pretty easy to create.
TIP: If you want to control the order in which each node’s controls will appear in the
macro you’re creating, Command-click each node in the order in which you want it
to appear.
2 Right-click one of the selected nodes and choose Macro > Create Macro from the
contextual menu.
The macro editor with a Blur node and Color Corrector node.
3 First, enter a name for the macro in the field at the top of the Macro Editor. This name should
be short but descriptive of the macro’s purpose. No spaces are allowed, and you should
avoid special characters.
4 Next, open the disclosure control to the left of each node that has controls you want to expose
to the user and click the checkbox to the right of each node output, node input, and node control
that you want to expose.
The controls you check will be exposed to users in the order in which they appear in this list, so
you can see how controlling the order in which you select nodes in Step 1, before you start editing
your macro, is useful. Additionally, the inputs and outputs that were connected in your node tree
are already checked, so if you like these becoming the inputs and outputs of the macro you’re
creating, that part is done for you.
For each control’s checkbox that you turn on, a series of fields to the left of that control’s row lets
you edit the default value of that control as well as the minimum and maximum values that control
will initially allow.
5 When you’re finished choosing controls, click Close.
6 A dialog prompts you to save the macro. Click Yes.
7 A Save Macro As dialog appears in which you can re-edit the Macro Name (if necessary), and
choose a location for your macro.
To have a macro appear in the Fusion page Effects Library Tools > Macros category, save it in the
following locations:
– On macOS: Macintosh HD/Users/username/Library/Application Support/Blackmagic Design/
DaVinci Resolve/Fusion/Macros/
– On Windows: C:\Users\username\AppData\Roaming\Blackmagic Design\DaVinci Resolve\
Support\Fusion\Macros
– On Linux: /home/username/.local/share/DaVinciResolve/Fusion/Macros
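If you are placing macros from a script, the per-user locations above can be resolved with a small helper like this (a sketch, assuming the documented DaVinci Resolve paths; Fusion Studio uses its own directories):

```python
import os
import platform

def fusion_macros_dir() -> str:
    """Return the per-user DaVinci Resolve Fusion Macros directory
    for the current platform, using the paths documented above."""
    home = os.path.expanduser("~")
    system = platform.system()
    if system == "Darwin":
        return os.path.join(home, "Library", "Application Support",
                            "Blackmagic Design", "DaVinci Resolve",
                            "Fusion", "Macros")
    if system == "Windows":
        return os.path.join(home, "AppData", "Roaming",
                            "Blackmagic Design", "DaVinci Resolve",
                            "Support", "Fusion", "Macros")
    # Linux and other Unix-likes
    return os.path.join(home, ".local", "share", "DaVinciResolve",
                        "Fusion", "Macros")

print(fusion_macros_dir())
```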
Using Macros
Macros can be added to a node tree using the Add Tool > Macros or Replace Tool > Macros submenus
of the Node Editor contextual menu.
Re-Editing Macros
To re-edit an existing macro, just right-click anywhere within the Node Editor and choose the macro
you want to edit from the Macro submenu of the same contextual menu. The Macro Editor appears,
and you can make your changes and save the result.
TIP: If you want to control the order in which node controls will be displayed later on, you can
Command-click each node you want to include in the macro, one by one, in the order in
which you want controls from those nodes to appear. This is an extra step, but it keeps things
better organized later on.
The Macro Editor window appears, filled to the brim with a hierarchical list of every parameter in the
composition you’ve just selected.
The Macro Editor populated with the parameters of all the nodes you selected.
Closing the top node’s parameters reveals a simple list of all the nodes we’ve selected. The Macro
Editor is designed to let you choose which parameters you want to expose as custom editable controls
for that macro. Whichever controls you choose will appear in the Inspector whenever you select that
macro, or the node or clip that macro will become.
So all we have to do now is to turn on the checkboxes of all the parameters we’d like to be able to
customize. For this example, we’ll check the Text3D node’s Styled Text checkbox, the Cloth node’s
Diffuse Color, Green, and Blue checkboxes, and the SpotLight node’s Z Rotation checkbox, so that
only the middle word of the template is editable, but we can also change its color and tilt its lighting
(making a “swing-on” effect possible).
Selecting the checkboxes of parameters we’d like to edit when using this as a template.
Once we’ve turned on all the parameters we’d like to use in the eventual template, we click the Close
button, and a Save Macro As dialog appears.
To have the Title template appear in the Effects Library > Titles category of DaVinci Resolve, save the
macro in the following locations:
– On macOS: Macintosh HD/Users/username/Library/Application Support/Blackmagic Design/
DaVinci Resolve/Fusion/Templates/Edit/Titles
– On Windows: C:\Users\username\AppData\Roaming\Blackmagic Design\DaVinci Resolve\
Support\Fusion\Templates\Edit\Titles
– On Linux: /home/username/.local/share/DaVinciResolve/Fusion/Templates/Edit/Titles
Custom titles appear in the Fusion Titles section of the Effects Library.
Editing this template into the Timeline and opening the Inspector, we can see the parameters we
enabled for editing, and we can use these to customize the template for our own purposes.
The Fusion page opens, displaying the node tree used to create the Fusion transition.
The MediaIn 1 node represents the outgoing clip in the Edit page Timeline. The MediaIn 2 clip
represents the incoming clip. You can modify or completely change the Cross Dissolve effect to
create your own custom transition using any of Fusion’s nodes.
The Fusion Cross Dissolve node tree replaced with Transforms and a Merge node.
TIP: To modify the duration of the Fusion transition from the Edit page Timeline, you must
apply the Resolve Parameter Modifier to any animated parameter. In place of keyframing the
transition, you create the transition using the Scale and Offset parameters of the Resolve
parameter modifier.
TIP: Since the transition template must include the MediaIn and MediaOut nodes, the final
steps for saving a transition template must be performed in DaVinci Resolve’s Fusion page
and cannot be performed in Fusion Studio.
Having made this selection, right-click one of the selected nodes and choose Macro > Create Macro
from the contextual menu.
The Macro Editor displaying the parameters of all the nodes you selected.
The Macro Editor window appears, displaying a hierarchical list of every parameter in the composition
you’ve just selected. The order of nodes is based on the order they were selected in the Node Editor
prior to creating the macro.
The Macro Editor is designed to let you choose which parameters you want to display as custom
controls in the Edit page Inspector when the transition is applied.
For transitions, you can choose not to display any controls in the Inspector, allowing only duration
adjustments in the Timeline. However, you can choose a simplified set of parameters for customization
by enabling the checkboxes next to any parameter name.
Applying this transition to a cut in the Timeline and opening the Inspector shows the parameters you
enabled for editing, if any.
The Fusion page opens, displaying the node tree that is used to create the Fusion Generator.
An empty Fusion page with a single MediaOut node opens, ready for you to create a
Fusion Generator.
The Fusion Generator is a solid image generated from any number of tools combined to create a static
or animated background. You can choose to combine gradient colors, masks, paint strokes, or
particles in 2D or 3D to create the background generator you want.
The Macro Editor displaying the parameters of all the nodes you selected.
The Macro Editor window appears, displaying a hierarchical list of every parameter in the composition
you’ve just selected. The order of nodes is based on the order they were selected in the Node Editor
prior to creating the macro.
The Macro Editor is designed to let you choose which parameters you want to display as custom
controls in the Edit page Inspector when the Generator is applied. You can choose a simplified set of
parameters for customization by enabling the checkboxes next to any parameter name.
Once you enable all the parameters you want to use in the eventual template, click the Close button,
and a Save Macro As dialog appears. Here, you can enter the name of the Generator, as it should
appear in the Edit page Effects Library.
To have the Generator template appear in the Effects Library > Fusion Generators category of
DaVinci Resolve, save the macro in the following locations:
– On macOS: Macintosh HD/Users/username/Library/Application Support/Blackmagic Design/
DaVinci Resolve/Fusion/Templates/Edit/Generators
– On Windows: C:\Users\username\AppData\Roaming\Blackmagic Design\DaVinci Resolve\
Support\Fusion\Templates\Edit\Generators
– On Linux: /home/username/.local/share/DaVinciResolve/Fusion/Templates/Edit/Generators
Applying this Generator to the Timeline and opening the Inspector shows the parameters you enabled
for editing, if any.
To see the effect in the Edit page Effects Library, you’ll need to quit DaVinci Resolve and
relaunch the application.
When you restart DaVinci Resolve, the icon you created will be embedded in the template thumbnail
across all the Effects Libraries in the program.
A Custom Icon added to a fisheye template, before (above) and after (below)
IMPORTANT
The Fusion Template Bundle contains all the templates in one file. It does not uncompress
them into separate template files again. Therefore if you delete the .drfx file, all associated
templates inside that bundle will be removed as well.
Using Viewers
This chapter covers working with viewers in Fusion, including using onscreen controls
and toolbars, creating groups and subviews, managing viewer Lookup Tables (LUTs),
working with the 3D viewer, and setting up viewer preferences and options.
Contents
Viewer Overview 169
Single vs. Dual Viewers 169
Floating Viewers in Fusion Studio 170
Video Output 170
Clean Feed 170
Loading Nodes into Viewers 170
Clearing Viewers 171
Position and Layout 171
The Viewer Divider 171
Zooming and Panning into Viewers 172
Flipbook Previews 172
Creating Flipbook Previews 172
Playing Flipbook Previews 173
Removing Flipbook Previews 174
Flipbook Preview Render Settings 174
Onscreen Controls 175
Showing and Hiding Onscreen Controls 175
Making Fine Adjustments to Onscreen Controls 176
Toolbars 176
Viewer Toolbar 176
Node Toolbars 176
A/B Buffers 177
Flipping between Buffers 177
Split Wipes between Buffers 177
Moving the Wipe Divider 178
Additionally, you can expose “subviews” including color inspectors, magnifiers, waveforms,
histograms, and vectorscopes to help you analyze the image as you work.
Video Output
When using DaVinci Resolve or Fusion Studio, if Blackmagic video hardware is present in the
computer, you can select a node to preview directly on that display. While video output can’t be
used for manipulating onscreen controls such as center crosshairs or spline control points, it’s
extremely valuable for evaluating your composition via the output format, and for determining image
accuracy using a properly calibrated display.
The video hardware is configured from the DaVinci Resolve and Fusion Studio preferences.
Clean Feed
When using DaVinci Resolve with dual computer monitors, a full-screen viewer can be displayed on
the secondary monitor from the Fusion page. This displays a third view indicator button under each
node to control what is shown on the second display. To activate this monitor, make sure you do not
have Dual Screen enabled under the Workspace menu and then select Workspace > Video Clean
Feed and select your second computer display from the submenu.
When a node is being viewed, a View Indicator button appears at the bottom left. This is the same
control that appears when you hover the pointer over a node. Not only does this control let you know
which nodes are loaded into which viewer, but it also exposes small round buttons for changing
which viewer the node appears in.
Clearing Viewers
To clear an image from a viewer, click in the viewer to make it active; a light purple outline is displayed
around the active panel. With the viewer active, press the Tilde (~) key. This key is usually found to the
left of the 1 key on U.S. keyboards. The fastest way to remove all images from all viewers is to make
sure none of the viewers is the active panel, and then press the Tilde key.
Flipbook Previews
As you build increasingly complex compositions, and you find yourself needing to preview specific
branches of your node tree to get a sense of how various details you’re working on are looking, you
may find it useful to create targeted RAM previews at various levels of quality right in the viewer by
creating a RAM Flipbook. RAM Flipbook Previews are preview renders that exist entirely within RAM
and allow you to render a node’s output at differing levels of quality for quick processing in order to
watch a real-time preview.
3 When you’ve chosen the settings you want to use, click Start Render.
The current frame range of the Time Ruler is rendered using the settings you’ve selected, and the
result is viewable in the viewer you selected or dragged into.
Once you’ve created a Flipbook Preview within a particular viewer, right-clicking that viewer presents
Flipbook-specific commands and options to Play, Loop, or Ping-Pong the Flipbook, to open it Full
Screen, to Show Frame Numbers, and to eliminate it.
TIP: If you want to create a Flipbook Preview and bypass the Render Settings dialog by just
using either the default setting or the settings that were chosen last, hold down Shift-Option
while you drag a node into the viewer. The Settings dialog will not appear, and rendering the
preview will start right away.
To scrub through a Flipbook frame-by-frame using the keyboard, do one of the following:
– Press the Left or Right Arrow keys to move to the previous or next frame.
– Hold Shift and press the Left or Right Arrow keys to jump back or forward 10 frames.
– Press Command-Left Arrow to jump to the first frame.
– Press Command-Right Arrow to jump to the last frame.
Settings
The Settings section of the Preview Render dialog includes three buttons that determine the overall
quality and appearance of your Flipbook Preview. These buttons also have a significant impact on
render times.
– HiQ: When enabled, this setting renders the preview in full image quality. If you need to see
what the final output of a node would look like, then you would enable the HiQ setting. If you are
producing a rough preview to test animation, you can save yourself time by disabling this setting.
– MB: The MB in this setting stands for Motion Blur. When enabled, this setting renders with motion
blur applied if any node is set to produce motion blur. If you are generating a rough preview and
you aren’t concerned with the motion blur for animated elements, then you can save yourself time
by disabling this setting.
– Some: When Some is enabled, only the nodes specifically needed to produce the image of the
node you’re previewing are rendered.
Size
Since RAM Flipbook Previews use RAM, it’s helpful to know how many frames you can render into
RAM before you run out of memory. The Flipbook Preview dialog calculates the currently available
memory and displays how many frames will fit into RAM. If you have a small amount of RAM in your
computer and you cannot render the entire range of frames you want, you can choose to lower the
resolution to a setting that delivers the best quality/duration ratio for your preview.
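The estimate works out to free memory divided by the size of one frame. A rough sketch, assuming (hypothetically) uncompressed RGBA frames at 16 bits per channel; Fusion’s actual memory use depends on its internal formats and overhead:

```python
def flipbook_frames_that_fit(free_ram_bytes, width, height,
                             channels=4, bytes_per_channel=2):
    """Rough estimate of how many uncompressed frames fit in RAM.

    Assumes RGBA at 16 bits per channel by default; treat the result
    as an approximation only, not Fusion's exact calculation.
    """
    frame_bytes = width * height * channels * bytes_per_channel
    return free_ram_bytes // frame_bytes

# e.g., 8 GB free, 1920x1080 frames: about 517 frames fit
print(flipbook_frames_that_fit(8 * 1024**3, 1920, 1080))
```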
Network
Network rendering is only available in Fusion Studio. For more information on network rendering,
see Chapter 65, “Rendering Using Saver Nodes” in the DaVinci Resolve Reference Manual or
Chapter 4 in the Fusion Reference Manual.
Shoot On
Sometimes you may not want to render every single frame, but instead every second, third, or fourth
frame to save render time and get faster feedback. You can use the Step parameter to determine the
interval at which frames are rendered.
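For example, with a step of 3 over a 0 to 12 range, only every third frame is rendered. A minimal sketch:

```python
def frames_to_render(start, end, step=1):
    """List the frames actually rendered for a range with a given
    Step value, e.g. step=3 renders every third frame."""
    return list(range(start, end + 1, step))

print(frames_to_render(0, 12, 3))   # renders frames 0, 3, 6, 9, 12
```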
Frame Range
This field defaults to the current Render Range In/Out set in the Time Ruler to determine the start and
end frames for rendering. You can modify the range to render more or fewer frames.
Configurations
Once you’ve created a useful preview configuration, you can save it for later use by clicking the Add
button, giving it a name, and clicking OK.
Onscreen Controls
When it comes to adjusting images, the Control Panel provides very precise numerical values, but
sometimes visually positioning an element using onscreen controls can get you where you want to go
with less tweaking. The viewers show onscreen controls for manipulating the parameters of the
currently selected node. Common onscreen controls include crosshairs, angle indicators, polylines,
and paint strokes. Each of these controls can be manipulated directly in the viewer using the mouse
or keyboard.
The controls shown in viewers are determined by which nodes are selected, not by the node
displayed in the viewer. For example, a downstream blur is easily viewed while manipulating the
controls for a selected polygon mask or merge. If multiple nodes are selected, the controls for every
selected node are shown simultaneously.
For some nodes, such as the Polygon node, onscreen controls can be disabled on a per-node basis.
Toolbars
There are two toolbars in the viewer: a viewer toolbar, which always appears at the top of each viewer
and gives you control over what that viewer shows, and an optional node toolbar that appears
underneath that gives you contextual controls based on the node you’ve selected in the Node Editor.
Viewer Toolbar
A viewer toolbar runs across the top of each viewer, providing access to many of the most commonly
used viewer-related settings, as well as an indication of the status of many of the most important
settings. Most of the menus and buttons found on this toolbar are described in detail throughout
this chapter.
Node Toolbars
In addition to the viewer toolbar, a node toolbar is displayed underneath, at the top of the viewer
display area, whenever you select a node that exposes special nodes. Examples of nodes that expose
a toolbar include the text, masks, paths, paint strokes, and the 3D environment.
TIP: Each buffer can be set to different display settings—for example, showing different
channels or different viewing LUTs, either applied to different nodes or applied to two
buffered versions of the same node.
The wipe divider can be adjusted for comparing different areas of the A and B images
Even when you wipe, you can choose different display channels, view LUTs, or other display options
for each buffer individually by clicking on the half of the wipe you want to alter, and then choosing the
options you want that buffer to use. This allows easy comparison of different channels, LUTs, or other
viewer settings while wiping the same image, or different images.
Subviews
A subview is a “mini” viewer that appears within the main viewer. A subview is usually used to show
different information about the image.
To enable the currently selected subview in the Subview menu of a viewer, do one of the following:
– Click the Subview button in the View toolbar.
– Choose Views > Subview > Enabled from the contextual menu.
– Click a viewer, and press the V key.
The Subview drop-down menu and contextual menu show all the available subview types. Once you
choose an option from the list, that view will be displayed in the subview, and the Subview button will
show and hide it as you wish.
To swap the contents of the subview with the main view, do one of the following:
– Press Shift-V.
– Right-click in a viewer and choose Views > SubView > Swap from the contextual menu.
2D Viewer
The 2D Viewer is the default type for showing images. When used as a subview, a different node than
the one used in the main viewer can be displayed by dragging the node into the subview.
This is the only subview type that is not just a different view of the same node in the main viewer.
3D Image Viewer
The 3D Image Viewer is available when viewing a node from the 3D category.
Histogram
The Histogram Viewer is an analysis tool that can be used to identify problems with the contrast and
dynamic range in an image. The graph shows the frequency distribution of colors in the image,
including out-of-range colors in floating-point images. The horizontal axis shows the colors from
shadows to highlights. The vertical axis shows the number of pixels in the image that occur at
each level.
The Histogram Viewer will also display gradient information. You can use the From Image and Perturb
modifiers to output gradients. If you need to see the gradient represented in a histogram, drag the
modifier’s title bar into the viewer.
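Conceptually, the histogram graph is just a count of pixels at each level. A minimal sketch using made-up 8-bit luminance values:

```python
from collections import Counter

# Hypothetical 8-bit luminance samples from a tiny image
pixels = [0, 0, 12, 12, 12, 128, 200, 255, 255]

histogram = Counter(pixels)          # maps level -> number of pixels
for level in sorted(histogram):
    # shadows print first (left of the graph), highlights last (right)
    print(level, histogram[level])
```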
3D Histogram
The more advanced 3D Histogram Viewer shows the color distribution in an image within a 3D cube.
One advantage to a 3D Histogram is that it can accurately represent the out-of-range colors commonly
found in floating-point and high-dynamic-range images. It can also be used to look at vector images
like position, normal, velocity, and so on.
Vectorscope
The Vectorscope Viewer duplicates the behavior of a specific type of video test equipment, displaying
a circular graph that helps to visualize the intensity of chrominance signals.
Navigator
The Navigator can only be used in a subview. It provides a small overview of the entire image, with a
rectangle that indicates the portion of the image that is actually visible in the main viewer. This is useful
when zooming in on an image in the main view.
Magnifier
The Magnifier can be used only in a subview. It shows a zoomed-in version of the pixels under the
cursor in the main viewer.
Color Inspector
The Color Inspector can only be used in a subview. The Color Inspector shows information about the
color channels of the pixel under the cursor. It will show all channels present, even the auxiliary
channels such as Z buffer, XYZ normals, and UV mapping channels.
Metadata
The content of this subview is based entirely on the amount of metadata in your image. Most Loaders
will give the color space and file path for the image. Much more information can be displayed if it exists
in the image.
TIP: These rotation controls can be used with the 3D Histogram subview as well.
The Camera3D’s controls will inherit the viewer’s position and angle values.
TIP: The Copy PoV To command uses the object’s own coordinate space; any transformations
performed downstream by another node are not taken into account.
POV Labels
As you switch the POV of the viewer, you can keep track of which POV is currently displayed via a text
label at the bottom-left corner of the viewer. Right-clicking directly on this label, or on the axis control
above it, acts as a shortcut to the Camera submenu, allowing you to easily choose another viewpoint.
When you’re ready to add your own lighting to a scene, you can connect light nodes in various ways to
a Merge 3D node for the scene you’re working on. Once you connect a light to a Merge 3D node, you
need to switch the 3D Viewer over to showing the new, proper lighting.
A 3D scene using default lights (top), and the same scene with lighting turned on (bottom)
TIP: Attempting to load a Light node into a viewer all by itself will result in an empty
scene, with nothing illuminated. To see the effects of lights, you must view the Merge 3D
node the light is connected to.
Similar to lights, the default 3D Viewer has shadows turned off. To see shadows cast from the
lighting you’ve created, you must turn them on.
NOTE: The shadows shown in the 3D Viewer are always hard edged. Soft shadows are
available for output to the rest of your composition in the software renderer of the
Renderer3D node.
Transparency in 3D Viewers
Image planes and 3D objects are obscured by other objects in a scene depending on the X, Y, and Z
position coordinates of each object in 3D space. The default method used to determine which
polygons are hidden and which are shown based on these coordinates is called Z-buffering.
Z-buffering is extremely fast but not always accurate when dealing with multiple transparent layers in a
scene. Fortunately, there is another option for more complex 3D scenes with transparency: Sorted.
The Sorted method can be significantly slower in some scenes but will provide more accurate results
no matter how many layers of transparency happen to be in a scene.
The default behavior in the viewer is to use Z-buffering, but if your scene requires the Sorted method,
you can easily change this.
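The difference between the two methods can be sketched with a simplified single-pixel model. This is an illustration of the general technique, not Fusion's renderer; all names here are invented for the example.

```python
# Illustrative sketch: why Z-buffering can get transparency wrong
# while back-to-front sorting gets it right. Each fragment is
# (depth, color_rgb, alpha); a smaller depth value means nearer.

def z_buffer(fragments, background):
    # Keep only the nearest fragment's color; transparent layers
    # behind it are discarded, which is the source of the inaccuracy.
    depth, color, a = min(fragments, key=lambda f: f[0])
    return tuple(a * c + (1 - a) * b for c, b in zip(color, background))

def sorted_blend(fragments, background):
    # Sort far-to-near and alpha-blend every layer (the "Sorted" idea).
    out = background
    for depth, color, a in sorted(fragments, key=lambda f: -f[0]):
        out = tuple(a * c + (1 - a) * o for c, o in zip(color, out))
    return out

layers = [(1.0, (1, 0, 0), 0.5), (2.0, (0, 0, 1), 0.5)]  # near red, far blue
print(z_buffer(layers, (0, 0, 0)))      # the far blue layer is lost
print(sorted_blend(layers, (0, 0, 0)))  # both layers contribute
```

Sorting every transparent layer costs extra work per pixel, which is why the Sorted method can be significantly slower than Z-buffering in heavy scenes.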
Grid
The 3D Viewer displays a grid that’s used to provide a plane of reference in the 3D scene. By default,
the grid is 24 x 24 units in size, centered on the origin at (0,0,0), and subdivided into large squares of 2
units with small squares of 0.25 units each. These defaults can be altered in the 3D View panel of the
Fusion Settings window, available from the Fusion menu.
The default grid of the 3D Viewer with its origin at x = 0, y = 0 and z = 0
Vertex Normals
Normals indicate what direction each vertex of 3D geometry is facing, and they are used when
calculating lighting and texturing on an object. When viewing any kind of 3D geometry, including an
image plane or a full FBX mesh, you can display the normals for each object in a scene.
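The math behind a normal is a simple cross product. As a hedged sketch (not Fusion code), the unit normal of a triangular face can be computed like this:

```python
# Illustrative sketch: a face normal is the normalized cross product
# of two edge vectors of the triangle.

def face_normal(p0, p1, p2):
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# A triangle lying in the XY plane faces straight down +Z:
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```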
Quad View
3D compositing often requires you to view the scene from different points of view to better control
transformations in three dimensions. While you can switch the 3D Viewer to different points of view,
doing so frequently can become cumbersome. Happily, you can instead enable a Quad view, which
divides the viewer into four panes. These panes can then display four different angles of the scene
at one time.
While there are four panes in the Quad view, they all show the same scene. When assigning views
within a Quad view, you can choose between displaying Front, Left, Top, Bottom, and Perspective
orthographic views, or you can choose the view through any camera or spotlight that’s present in
the scene.
The DoD is shown as two XY coordinates indicating the corners of an axis-aligned bounding box (in pixels)
For the most part, the DoD is calculated automatically and without the need for manual intervention.
For example, all the nodes in the Generator category automatically generate the correct DoD. For
nodes like Fast Noise, Mandelbrot, and Background, this is usually the full dimensions of the image. In
the case of Text+ and virtually all of the Mask nodes, the DoD will often be much smaller or larger.
The OpenEXR format is capable of storing the data window of the image, and Fusion will apply this as
the DoD when loading such an image through a Loader node and will write out the DoD through the
Saver node.
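As an illustrative sketch (not Fusion's internal representation), a DoD can be modeled as two corner coordinates, and clipping one region against another reduces to a min/max comparison:

```python
# Sketch: a region as two XY corners of an axis-aligned bounding box,
# and the intersection of two regions (e.g. clipping a node's DoD
# against a Region of Interest). Names and values are illustrative.

def intersect(a, b):
    """Each region is ((x1, y1), (x2, y2)) in pixels."""
    (ax1, ay1), (ax2, ay2) = a
    (bx1, by1), (bx2, by2) = b
    x1, y1 = max(ax1, bx1), max(ay1, by1)
    x2, y2 = min(ax2, bx2), min(ay2, by2)
    if x1 >= x2 or y1 >= y2:
        return None  # the regions do not overlap
    return ((x1, y1), (x2, y2))

full_frame = ((0, 0), (1920, 1080))
text_dod = ((700, 400), (1220, 680))   # a hypothetical Text+ DoD
print(intersect(full_frame, text_dod))  # ((700, 400), (1220, 680))
```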
To reset the RoI to the full width and height of the current image, do one of the following:
– Choose Reset from the viewer menu next to the RoI button.
– Right-click anywhere within the viewer and choose Region > Reset Region from the contextual
menu or from the toolbar button menu.
– Disable the RoI control, which will also reset it.
TIP: Right-clicking in a viewer and choosing Options > Show Controls for showing onscreen
controls will override the RoI, forcing renders of pixels for the entire image.
Image LUTs
Image LUTs can be applied to each viewer. In fact, you can even apply separate Image LUTs for the A
and B buffers of a single viewer. These LUTs can only be applied to 2D images and not to 3D scenes.
Image LUTs are routinely used to get from one scene-referred color space to another. For example, if
you’re working with log-encoded media but want to see how the image will look in the final color
space, you can choose a LUT to make the image transform as a preview.
Buffer LUTs
The Buffer LUT is applied to the viewers regardless of contents, including 3D scenes, 3D materials,
and subview types. Only one Buffer LUT can be applied. If a 2D image is being displayed with an
Image LUT applied, then the Buffer LUT is applied to the result of the image LUT. Buffer LUTs are
typically used to simulate another output color space that’s unique to the display you’re using—for
instance, making a DCI-P3 projector show the image as it would look on an sRGB monitor.
When dealing with nonlinear files from many of today’s digital cinema cameras, a modern workflow
would be to convert everything to linear at the beginning of the node tree, then create your composite,
and then apply an Image LUT or Buffer LUT that matches the color space you want it to be in for either
grading in the Color page or for final output.
However, in more elaborate production pipelines, you may need to apply multiple LUTs consecutively.
Macro LUTs
Any macro node can also be used as a viewer LUT simply by saving the macro’s .setting file to the
correct Fusion directory.
For this to work, the macro must have one image input and one image output. Any controls exposed
on the macro will be available when the Edit option is selected for the LUT. For more information about
creating macros, see Chapter 67, “Node Groups, Macros, and Fusion Templates” in the
DaVinci Resolve Reference Manual or Chapter 6 in the Fusion Reference Manual.
LUT Presets
All LUTs available to DaVinci Resolve are also accessible to the Fusion page, which includes custom
LUTs you’ve installed, as well as preset LUTs that come installed with DaVinci Resolve, such as the
highly useful VFX IO category that includes a wide variety of miscellaneous to Linear and Linear to
miscellaneous transforms. All of these LUTs appear by category in the viewer LUT menu.
Fuse LUTs
Fuses are scriptable plug-ins that are installed with the application or that you create in Fusion. A fuse
named CT_ViewLUTPlugin can be applied as a LUT to a viewer. You can also script fuses that use
graphics hardware shaders embedded into the LUT for real-time processing. Since fuse LUTs require
shader-capable graphics hardware, they cannot be applied in software. For more information about
Fuses, see the Fusion Scripting Guide located on the Blackmagic Design website.
Buffer LUTs are often useful for applying monitor corrections, which do not usually change
between projects.
Order of processing
For either 2D or 3D, the result may be drawn to an offscreen buffer where a Buffer LUT can be applied,
along with dithering, a full view checker underlay, and any stereo processing. The final result is then
drawn to the viewer and any onscreen controls are drawn on top.
A complete stacked LUT configuration can be saved to and loaded from a .viewlut file, as
described below.
LUT Settings
The most straightforward way to save a LUT you have created using the Fusion View LUT Editor is to
use the LUT > Save menu found in the viewer contextual menu. The settings are saved as an ASCII file
with the extension .viewlut in the LUTs folder. Any files with this extension found in that folder will
appear in the Image LUT menus for ease of loading. You can also load the settings that are not found
in the menu by choosing LUT > Load from the viewer’s contextual menu.
The Import LUT option will load LUT files back into the Curve Editor, or alternatively, if the file has been
saved in Fusion’s LUTs folder, it will appear in the LUT drop-down menu list.
TIP: This is one way to move LUTs between viewers or to and from the Color Curves node or
any other LUT Editor in Fusion.
LUT Files
Any supported LUT files in the LUTs folder can be used by choosing them either from the LUT drop-
down menu or the viewer’s contextual menu. This includes 1D and 3D LUTs such as Fusion’s .lut, .alut
and .alut3 formats, as well as .cube, .shlut, .look, .3dl, and .itx formats. This is a convenient way to
access standard format LUT files for different projects.
This allows almost any combination of nodes to be used as a viewer LUT. This is the most flexible
approach but is also potentially the slowest. The LUT nodes must be rendered solely on the CPU,
whereas other methods are GPU-accelerated.
Viewer Settings
It is often preferable to switch between entirely different viewer configurations while working. For
example, while keying, the image may be in the main viewer, and the alpha channel may be in a
subview. Viewer settings toward the end of a project may consist of the histogram, vectorscope, and
waveform, as well as the image in a view set to Quad view.
Fusion provides the ability to quickly load and save viewer settings to help reduce the amount of effort
required to change from one configuration to another.
Checker Underlay
The Checker Underlay shows a checkerboard beneath transparent pixels to make it easier to identify
transparent areas. This is the default option for 2D viewers. Disabling this option replaces the
checkerboard with black.
Pixel Grid
Enabling this option will show a light black grid that outlines the exact boundaries of pixels in the
image when the image is scaled past a certain threshold. The default is Off.
Smooth Resize
The Smooth Resize option uses a smoother bilinear interpolated resizing method when zooming into
an image in the viewer. When Smooth Resize is disabled, scaling uses the nearest neighbor method
and shows noticeable aliasing artifacts but is more useful for seeing the actual pixels of the viewed
image when you zoom all the way down to a pixel level since there is no interpolation. This option is
enabled by default and can be toggled by clicking on the SmR button in the viewer toolbar.
Gain/Gamma
Exposes or hides a simple pair of Gain and Gamma sliders that let you adjust the viewed image.
Especially useful for “gamma slamming” a composite to see how well it holds up with a variety of
gamma settings. Defaults to no change.
360º View
Sets the Fusion page viewer to properly display spherical imagery in a variety of formats, selectable
from this submenu. Disable toggles 360 viewing on or off, while Auto, LatLong, Vert Cross, Horiz
Cross, Vert Strip, and Horiz Strip let you properly display different formats of 360º video.
Alpha Overlay
When you enable the alpha overlay, the viewer will show the alpha channel overlaid on top of the color
channels. This can be helpful when trying to see where one image stops and another begins in a
composite. This option is disabled by default.
Overlay Color
When you turn the alpha overlay on, white is shown by default for the area the alpha covers.
There are times when white does not show clearly enough, depending on the colors in the image.
You can change the color by choosing a color from the list of Overlay Color options.
Follow Active
Enabling the Follow Active option will cause the viewer to always display the currently active node in
the Node Editor. This option is disabled by default, so you can view a different node than what you
control in the Control Panel.
Show Controls
When onscreen controls are not necessary or are getting in the way of evaluating the image, you can
temporarily hide them using the Show Controls option. This option is toggled using Command-K.
Show Labels
The Show Labels option lets you toggle the display of the text that sometimes accompanies onscreen
controls in the viewer without disabling the functions that are showing those overlays, and without
hiding the onscreen controls themselves.
Editing Parameters
in the Inspector
The Inspector is where you adjust the parameters of each node to do what needs to
be done. This chapter covers the various node parameters and methods for working
with the available controls.
Contents
Overview of the Inspector 206
The Tools and Modifiers Panels 207
Customizing the Inspector 207
Inspector Height 207
Inspector Display Preferences 208
Opening Nodes in the Inspector 208
Pinning Multiple Nodes in the Inspector 209
Hiding Inspector Controls 210
Using the Inspector Header 210
Selecting and Viewing Nodes in the Inspector 211
Using Header Controls 211
Versioning Nodes 212
Parameter Tabs 212
The Settings Tab 212
Inspector Controls Explained 216
Fusion Slider Controls 216
Thumbwheel 217
Range Controls 217
Checkboxes 217
Inspector Height
A small arrow button at the far right of the UI toolbar lets you toggle the Inspector between full-height
and half-height views, depending on how much room you need for editing parameters.
In maximized height mode, the Inspector takes up the entire right side of the UI, letting you see
every control that a node has available, or creating enough room to see the parameters of two or three
pinned nodes all at once. In half-height mode, the top of the Inspector is aligned with the tops of the
viewers, expanding the horizontal space that’s available for the Node Editor.
– Auto Control Open: When enabled (the default), whichever node is active automatically opens its
controls in the Inspector. When disabled, selecting an active node opens that node’s Inspector
header in the Inspector, but the parameters remain hidden unless you click the Inspector header.
– Auto Control Hide: When enabled (the default), only selected nodes are visible in the Inspector,
and all deselected nodes are automatically removed from the Inspector to reduce clutter. When
disabled, parameters from selected nodes remain in the Inspector, even when those nodes are
deselected, so that the Inspector accumulates the parameters of every node you select over time.
– Auto Control Close Tools: When enabled (the default), only the parameters for the active node
can be exposed. When disabled, you can open the parameters of multiple nodes in the Inspector
if you want.
– Auto Controls for Selected: When enabled (the default), selecting multiple nodes opens multiple
control headers for those nodes in the Inspector. When disabled, only the active node appears in
the Inspector; multi-selected nodes highlighted in white do not appear.
When you select a single node so that it’s highlighted orange in the Node Editor, all of its parameters
appear in the Inspector. If you select multiple nodes at once, Inspector headers appear for each
selected node (highlighted in white in the Node Editor), but the parameters for the active node
(highlighted in orange) are exposed for editing.
Only one node’s parameters can be edited at a time, so clicking another node’s Inspector header
opens that node’s parameters and closes the parameters of the previous node you were working on.
This also makes the newly opened node the active node, highlighting it orange in the Node Editor.
While the Pin button is on, that node’s parameters remain open in the Inspector. If you select another
node in the Node Editor, that node’s parameters appear beneath any pinned nodes.
You can have as many pinned nodes in the Inspector as you like, but the more you have, the more
likely you’ll need to scroll up or down in the Inspector to get to all the parameters you want to edit.
To remove a pinned node from the Inspector, just turn off its Pin button in the Inspector header.
An orange underline indicates the currently selected version, which is the version that’s currently
being used by your composition. To clear a version you don’t want to use any more, right-click that
version number and choose Clear from the contextual menu.
Parameter Tabs
Underneath the Inspector header is a series of panel tabs, displayed as thematic icons. Clicking one of
these icons opens a separate tab of parameters, which are usually grouped by function. Simple nodes,
such as the Blur node, consist of two tabs where the first contains all of the parameters relating to
blurring the image, and the second is the Settings tab.
More complicated nodes have more tabs containing more groups of parameters. For example, the
Delta Keyer has seven tabs, separating the Key, Pre-Matte, Matte, Fringe, Tuning, and Mask
parameters, along with the obligatory Settings tab. These tabs keep the Delta Keyer from being a giant scrolling list
of settings and make it easy to keep track of which part of the keying process you’re finessing
as you work.
The following controls are common to most nodes, although some are node-specific. For example,
Motion Blur settings have no purpose in a Color Space node.
Blend
The Blend control is found in all nodes, except the Loader, MediaIn, and Generator nodes. It is used to
blend between the node’s unaltered image input and the node’s final processed output. When the
blend value is 0.0, the outgoing image is identical to the incoming image. Ordinarily, this will cause the
node to skip processing entirely, copying the input straight to the output. The default value for this
control is 1.0, meaning the node will output the fully processed image.
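Conceptually, Blend is a linear interpolation between the node's input and its processed result. A minimal sketch, not Fusion's implementation:

```python
# Illustrative sketch of the Blend control as a per-channel
# linear interpolation between the incoming and processed pixels.

def blend(incoming, processed, amount):
    """amount 0.0 = unaltered input, 1.0 = fully processed output."""
    return tuple((1 - amount) * i + amount * p
                 for i, p in zip(incoming, processed))

pixel_in = (0.2, 0.4, 0.6)
pixel_fx = (0.8, 0.8, 0.8)
print(blend(pixel_in, pixel_fx, 0.0))  # identical to the input
print(blend(pixel_in, pixel_fx, 0.5))  # a halfway mix
```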
In these cases, the Common Control channel checkboxes are instanced to the channel boxes found
elsewhere in the node. Blur, Brightness/Contrast, Erode/Dilate, and Filter are examples of nodes that
all have RGBY checkboxes in the main Controls tab of the Inspector, in addition to the Settings tab.
TIP: The Apply Mask Inverted checkbox option operates only on effects masks, not on
garbage masks.
Pick Controls
The Pick Controls are only displayed once the Use Object or Use Material checkbox is enabled. These
controls select which ID is used to create a mask from the Object or Material channels saved in the
image. You use the Pick button to grab IDs from the image in the viewer, the same way you use the
Color Picker to select a color. The image or sequence must have been rendered from a 3D software
package with those channels included.
Correct Edges
The Correct Edges checkbox is only displayed once the Use Object or Use Material checkbox is
enabled. When the Correct Edges checkbox is enabled, the Coverage and Background Color
channels are used to separate and improve the effect around the edge of the object. When disabled
(or no Coverage or Background Color channels are available), aliasing may occur on the edge
of the mask.
Motion Blur
For nodes that are capable of introducing motion, such as Transform nodes, Warp nodes, and so on,
the Motion Blur checkbox toggles the rendering of motion blur on or off for that node. When this
checkbox is enabled, the node’s predicted motion is used to produce the blur caused by a virtual
camera shutter. When the control is disabled, no motion blur is created.
When Motion Blur is disabled, no additional controls are displayed. However, turning on Motion Blur
reveals four additional sliders with which you can customize the look of the motion blur you’re adding
to that node.
Quality
Quality determines the number of samples used to create the blur. The default quality setting of 2 will
create two samples on either side of an object’s actual motion. Larger values produce smoother
results but will increase the render time.
Shutter Angle
Shutter Angle controls the angle of the virtual shutter used to produce the Motion Blur effect. Larger
angles create more blur but increase the render times. A value of 360 is the equivalent of having the
shutter open for one whole frame exposure. Higher values are possible and can be used to create
interesting effects. The default value for this slider is 100.
Center Bias
Center Bias modifies the position of the center of the motion blur. Adjusting the value allows for the
creation of trail-type effects.
Sample Spread
Adjusting Sample Spread modifies the weight given to each sample. This affects the brightness of the
samples set with the Quality slider.
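How these settings might interact can be sketched as follows. The formula and function names are assumptions for illustration only, not Fusion's actual sampling code.

```python
# Illustrative sketch: a 360-degree shutter spans one whole frame of
# exposure, Quality sets the number of samples on either side of the
# actual motion, and Center Bias shifts the sample window to create
# trail-type effects.

def sample_times(frame, quality=2, shutter_angle=100.0, center_bias=0.0):
    exposure = shutter_angle / 360.0          # fraction of a frame
    n = 2 * quality                           # samples on both sides
    times = []
    for i in range(n + 1):
        t = (i / n - 0.5) + center_bias       # -0.5..+0.5, biased
        times.append(frame + t * exposure)
    return times

print(sample_times(10, quality=2, shutter_angle=360.0))
# with a full 360-degree shutter, samples span frames 9.5 to 10.5
```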
Comments
A Comments field is found on every node and contains a single text field that is used to add comments
and notes to that node. To enter text, simply click within the field to place a cursor, and begin typing.
When a note is added to a node, the comments icon appears in the Control Header and can be seen
in a node’s tooltip when the cursor is placed over the node in the Node Editor. The contents of the
Comments tab can be animated over time, if required.
Additional controls appear under this tab if the node is a Loader. For more information, see Chapter
104, “Generator Nodes,” in the DaVinci Resolve Reference Manual or Chapter 43 in the Fusion
Reference Manual.
Once you click directly on a slider handle, you can make changes to its value using the Left and Right
Arrow keys. The Command and Shift keys can again be used to modify the value in larger or smaller
increments.
While slider controls use a minimum and maximum value range, entering a value in the edit box
outside that range will often expand the range of the slider to accommodate the new value. For
example, it is possible to enter 500 in a Blur Size control, even though the Blur Size sliders default
maximum value is 100. The slider will automatically adjust its maximum displayed value to allow entry
of these larger values.
If the slider has been altered from its default value, a small circular indicator will appear below the
gutter. Clicking on this circle will reset the slider to its default.
You can use the arrowheads at either end of the control to fine tune your adjustments. Once the
thumbwheel has been selected either by dragging or using the arrow keys, you can use the Left and
Right Arrows on your keyboard to further adjust the values. As with the slider control, the Command
and Shift keys can be used to increase or decrease the change in value in smaller or larger
increments.
If the thumbwheel has been altered from its default value, a small circular indicator will appear
above the thumbwheel. Clicking on this circle will reset the thumbwheel to its default.
Range Controls
The Range controls are actually two separate controls, one for setting the Low Range value and one
for the High Range value. To adjust the values, drag the handles on either end of the Range bar. To
slide the high and low values of the range simultaneously, drag from the center of the Range bar. You
can also expand or contract the range symmetrically by holding Command and dragging either end of
the Range bar. You find Range controls on parameters that require a high and low threshold, like the
Matte Control, Chroma Keyer, and Ultra Keyer nodes.
TIP: You can enter floating-point values in the Range controls by typing the values in using
the Low and High numeric entry boxes.
Checkboxes
Checkboxes are controls that have either an On or Off value. Clicking on the checkbox control will
toggle the state between selected and not selected. Checkboxes can be animated, with a value of 0
for Off and a value of 1.0 or greater for On.
Drop-down menu selections can be animated, with a value of 0 representing the first item in the list, 1
representing the second, and so forth.
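As a sketch of the numeric mappings described above (illustrative only, not Fusion's API):

```python
# Illustrative sketch: an animated numeric value mapped to a checkbox
# state (0 = Off, 1.0 or greater = On) or to a drop-down menu index
# (0 = first item, 1 = second, and so forth).

def checkbox_state(value):
    return value >= 1.0

def menu_index(value, items):
    return items[min(int(value), len(items) - 1)]

print(checkbox_state(0.0), checkbox_state(1.5))           # False True
print(menu_index(1.0, ["Linear", "Reflect", "Square"]))   # Reflect
```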
Button Arrays
Button arrays are groups of buttons that allow you to select from a range of options. They are almost
identical in function to drop-down menu controls, except that in the case of a button array it is possible
to see all of the available options at a glance. Often button arrays use icons to make the options more
immediately comprehensible.
The Color panel is extremely flexible and has four different techniques for selecting and
displaying colors.
TIP: Color can be represented by 0–1, 0–255, or 0–65000 by setting the range you want in
the Preferences > General panel.
Each operating system has a slightly different layout, but the general idea is the same. You can choose
a color from the swatches provided—the color wheel on macOS, or the color palette on Windows.
However you choose your color, you must click OK for the selection to be applied.
Gradients
The Gradient Control bar is used to create a gradual blend between colors. The Gradient bar displays
a preview of the colors used from start to end. By default, there are two triangular color stops: one on
the left that determines the start color, and one on the right that determines the end color.
Gradient Type
The Gradient Type button array is used to select the form used to draw the gradient. Linear draws the
gradient along a straight line from the starting color stop to the ending color stop.
Linear gradient
Reflect draws the gradient by mirroring the linear gradient on either side of the starting point.
Reflect gradient
Square draws the gradient by using a square pattern when the starting point is at the center of
the image.
Square gradient
Cross draws the gradient using a cross pattern when the starting point is at the center of the image.
Cross gradient
Radial draws the gradient in a circular pattern when the starting point is at the center of the image.
Radial gradient
Angle draws the gradient in a counter-clockwise sweep when the starting point is at the center of
the image.
Angle gradient
Interpolation Space
The Gradient Interpolation Method pop-up menu lets you select what color space is used to calculate
the colors between color stops.
Offset
When you adjust the Offset control, the position of the gradient is moved relative to the start and end
markers. This control is most useful when used in conjunction with the repeat and ping-pong modes
described below.
Once/Repeat/Ping-Pong
These three buttons are used to set the behavior of the gradient when the Offset control scrolls the
gradient past its start and end positions. Once, the default, holds the start and end colors constant
when the offset moves past them. Repeat loops back to the start color when the offset goes beyond
the end color. Ping-Pong repeats the color pattern in reverse.
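The three behaviors can be sketched as simple mappings from an offset position to a 0–1 location along the gradient (illustrative only, not Fusion's implementation):

```python
# Illustrative sketch of the three offset behaviors.

def once(t):
    return min(max(t, 0.0), 1.0)      # clamp: colors held past the ends

def repeat(t):
    return t % 1.0                    # loop back to the start color

def ping_pong(t):
    t = t % 2.0                       # repeat the pattern in reverse
    return 2.0 - t if t > 1.0 else t

for f in (once, repeat, ping_pong):
    print(f.__name__, f(1.25))
# once 1.0, repeat 0.25, ping_pong 0.75
```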
Modifiers
Modifiers are expressions, calculations, trackers, paths, and other mathematical components that you
attach to a parameter to extend its functionality. When a modifier is attached to a parameter, its
controls will appear separately in the Inspector Modifiers tab.
To attach a modifier:
1 Right-click over the parameter to which you want to attach a modifier.
2 Make a selection from the Modifier submenu in the contextual menu.
Orange Keyframe buttons in the Inspector show there’s a keyframe at that frame
Once you’ve keyframed one or more parameters, the node containing the parameters you
keyframed displays a Keyframe badge, to show that node has been animated.
Once you’ve started keyframing node parameters, you can edit their timing in the Keyframes Editor
and/or Spline Editor. For more information about keyframing in Fusion, see Chapter 60, “Animating in
Fusion’s Keyframe Editor” in the DaVinci Resolve Reference Manual or Chapter 9 in the Fusion
Reference Manual.
TIP: If you change the default spline type from Bézier, the contextual menu will display the
name of the current spline type.
TIP: Disabling the Auto Control Close Tools option in the General preferences, and then selecting
two nodes in the Node Editor, will allow you to pick whip between parameters of different nodes.
Contextual Menus
There are two types of contextual menus you can invoke within the Inspector.
In the Input attributes, you can select an existing control or create a new one, name it, define the type,
and assign it to a tab. In the Type attributes, you define the input controls, the defaults and ranges, and
whether it has an onscreen preview control. The Input Ctrl attributes box contains settings specific to
the selected node control, and the View Ctrl attributes box contains settings for the preview
control, if any.
We could use the Center input control, along with its preview control, to set an angle and distance
from directly within the viewer using expressions.
1 Right-click the label for the Length parameter, choose Expression from the contextual menu, and
then paste the following expression into the Expression field that appears:
-sqrt(((Center.X-.5)*(Input.XScale))^2+((Center.Y-.5)*(Input.YScale)*(Input.Height/Input.Width))^2)
2 Next, right-click the label for the Angle parameter, choose Expression from the contextual menu,
and then paste the following expression into the Expression field that appears:
atan2(Center.Y-.5 , .5-Center.X) * 180 / pi
Once that’s done, we can use the UserControls to hide the Type control.
To make a new MultiButton, run the UserControl script, and add a new control ID, TypeNew. You
can set the Name to be Type, as the Names do not need to be unique, just the IDs. Set the Type to
Number, the Page to Controls, and the Input Ctrl to MultiButtonControl. In the Input Ctrl attributes,
we can enter the names of our buttons. Let’s do Linear and Centered. Type them in and hit Add for
each. Press OK, and we have our new buttons with the unneeded options removed. To make this
new control affect the original Type, add a SimpleExpression to the Type:
iif(TypeNew==0, 0, 2).
Once that’s done, we can use the UserControls to hide the original Type control.
Animating in Fusion’s
Keyframes Editor
This chapter covers how you can keyframe effects in the Inspector, and how you can
edit clips, effects, and keyframes in the Keyframes Editor.
Contents
Keyframing in the Inspector 229
Removing Animation in the Inspector 230
Attaching a Parameter to an Existing Animation Curve 230
Keyframes Editor Overview 230
Keyframes Editor Tracks 231
The Timeline Header 231
The Playhead 232
Spreadsheet 232
Scaling and Panning the Timeline 233
Working with Segments in the Timeline 233
Moving Segments in the Timeline 233
Trimming Segments 234
Holding the First or Last Frame 234
Working with Keyframes in the Timeline 234
Drag and Drop Keyframe Editing 235
Keyframe Editing Using the Time Editor 235
The Keyframe Spreadsheet 235
Duplicating Spline Keyframes 236
Time Stretching Keyframes 236
Showing Keyframe Values 236
Timeline Filters 236
Selected Filtering 238
Once you’ve started keyframing node parameters, you can edit their timing in the Keyframes
Editor and/or Spline Editor.
The Playhead
As elsewhere in Fusion, the playhead is a red vertical bar that runs through the Timeline view to
indicate the position of the current frame or time. The Keyframes Editor playhead is locked to the
viewer playhead, so the image you’re viewing is in sync.
You must click on the playhead directly to drag it, even within the Timeline ruler (clicking and dragging
anywhere else in the Timeline ruler scales the Timeline). Additionally, you can jump the playhead to a
new location by holding down the Command-Option keys and clicking in the track area (not the
Timeline ruler).
Spreadsheet
If you turn on the Spreadsheet and then click on the name of a layer in the keyframe track, the numeric
time position and value (or values if it’s a multi-dimensional parameter) of each keyframe appear as
entries in the cells of the Spreadsheet. Each column represents one keyframe, while each row
represents a single aspect of each keyframe.
For example, if you’re animating a blur, then the Key Frame row shows the frame each keyframe is
positioned at, and the Blur1BlurSize row shows the blur size at each keyframe. If you change the Key
Frame value of any keyframe, you’ll move that keyframe to a new frame of the Timeline.
TIP: Selecting a node’s name from the Timeline header also selects the node’s tile in the
Node Editor, with its controls displayed in the Inspector.
TIP: Shortening the duration of effects nodes can optimize processing. Imagine a Loader or
MediaIn node that represents a clip that’s 100 frames long and is connected to a Defocus
node that’s animated from frames 80–100. There is little to no point in processing the defocus
node between frames 0–79, so trimming the defocus segment to start at frame 80 in the
Timeline will effectively prevent it from rendering and consuming either memory or processor
time until needed.
The Drip1 segment has its keyframe tracks exposed, while the Text1 segment has
its keyframe tracks collapsed so they’re displayed within the segment.
To change the position of a keyframe using the toolbar, do one of the following:
– Select a keyframe, and then enter a new frame number in the Time Edit box.
– Choose T Offset from the Time Editor drop-down, select one or more keyframes, and enter a
frame offset.
– Choose T Scale from the Time Editor drop-down, select one or more keyframes,
and enter a scale factor relative to the playhead position.
The Time button can switch to Time Offset or Time Scale for moving keyframes.
Timeline Filters
When a composition grows to include hundreds of nodes, locating specific node layers can quickly
become difficult. Timeline filters can be created and applied to sift out nodes that are not necessary to
the current operation. The Global Timeline preferences include a number of pre-made filters that you
can enable, or you can create new ones as needed.
1 From the Keyframes Editor Option menu, choose Create/Edit Filters to open the Timeline panel
of the Fusion Settings window.
2 Click the New button, enter a name for your new filter setting, and click OK. The filter you created
is now selected in the Filter pop-up menu at the top.
3 Use the “Settings for filters” list to turn on the checkboxes of nodes you want to be seen and turn
off the checkboxes of nodes you want to filter out. Each category of node can be turned on and
off, or you can open up a category’s disclosure control to turn individual nodes on and off. Clicking
Invert All reverses the checked state of every node category.
4 When you’re finished creating filters, click the Save button to hide the Fusion Settings window.
Filters that you’ve created in the Timeline panel of the Fusion Settings window appear in the
Keyframes Editor Option menu.
Selected Filtering
Choosing “Show only selected tools” from the Keyframes Editor Option menu filters out all segments
except for layers corresponding to selected nodes. Choose the option again to turn the filtering off.
TIP: When “Show only selected tools” is enabled, you can continue to select nodes in the
Node Editor to update what’s displayed in the Keyframes Editor.
If you begin numbering nodes in the track header and change your mind or decide on a different
order, you can choose Restart to begin numbering again or choose Cancel to keep the current order.
– All Tools: Forces all tools currently in the Node Editor to be displayed in the Keyframes Editor.
– Hierarchy: Sorts with the most background layers at the top of the header, through to the most
foreground layers at the bottom, following the connections of the nodes in the Node Editor.
– Reverse: The opposite of Hierarchy, working backward from the last node in the Node Editor
toward the most background source node.
– Names: Sorts by the alphabetical order of the nodes, starting at the top with the beginning
of the alphabet.
– Start: Orders layers based on their starting point in the composition. Nodes that start earlier
in the Global project time are listed at the top of the header, while nodes that start later are
at the bottom.
– Animated: Restricts the Timeline to showing animated layers only. This is an excellent mode to
use when adjusting the timing of animations on several nodes at once.
Markers
Markers help identify important frames in a project that might affect how you keyframe animation.
They may indicate the frame where a dragon breathes fire at a protagonist, the moment that someone
passes through a portal, or any other important frame in a composition that you need to keep track of.
Markers added to the Timeline in the Cut, Edit, Fairlight, or Color page will appear in the Keyframes
Editor and Spline Editor of the Fusion page. They can also be added from the Keyframes Editor or the
Spline Editor while working in Fusion Studio or the Fusion page. Markers in Fusion appear as a small
handle with a line extending vertically through the graph view when selected.
The most important attribute of a marker is its position. For it to add value, a marker must be placed on
the frame you intended it to be on. Hovering the cursor over a marker displays a tooltip with its current
frame position. If it is on the wrong frame, you can drag it along the Time Ruler to reposition it.
Markers added to the Time Ruler are editable in the Fusion page, and the changes appear back in the
other DaVinci Resolve pages. Time Ruler markers can be added, moved, deleted, renamed, and given
descriptive notes from within Fusion’s Keyframes or Spline Editor.
NOTE: Markers attached to clips in the Edit page Timeline are visible on MediaIn nodes in
Fusion’s Keyframes Editor but not editable. They are not visible in the Spline Editor.
Jumping to Markers
Double-clicking a marker jumps the playhead to that marker’s position.
Renaming Markers
By default, a marker uses the frame number in its name, but you can give it a more descriptive name to
go along with the frame number, making it easier to identify. To rename a marker in Fusion, right-click
over the marker and choose Rename Guide from the contextual menu. Enter a name in the dialog
and click OK.
There is a pair of checkboxes beside the names of each marker. One is for the Spline Editor, and one
is for the Keyframes Editor. By default, markers are shown in both the Spline Editor and Keyframes
Editor, but you can deselect the appropriate checkbox to hide the markers in that view.
Deleting Markers
You can delete a marker by dragging it up beyond the Time Ruler and releasing the mouse. You can
also use the marker’s contextual menu to choose Delete Marker.
Autosnap
To help with precisely positioning keyframes and the start and end of segments as you drag in the
Timeline, you can have them snap to a field, a frame, or to markers. The Autosnap option is accessed
through the Keyframes Editor’s contextual menu. There are two submenu options for autosnapping.
One option controls the snapping behavior when you drag keyframes, control points, or the starting
and ending edges of segments. The other option controls the snapping behavior of markers.
Autosnap Points
When you drag keyframes or the edges of segments, often you want them to fall on a specific frame.
Autosnap restricts the placement of keyframes and segment edges to frame boundaries by default,
but you have other options found in the contextual menu. To configure autosnapping on keyframes
and segment edges, right-click anywhere within the Keyframes Editor and choose Options > Autosnap
Points from the contextual menu. This will display the Autosnap Points submenu with options for the
snapping behavior. The options are:
– None: None allows free positioning of keyframes and segment edges with subframe accuracy.
– Frame: Frame forces keyframes and segment edges to snap to the nearest frame.
– Field: Field forces keyframes and segment edges to snap to the nearest field,
which is 0.5 of a frame.
– Guides: When enabled, the keyframes and segment edges snap to markers.
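The first three granularities amount to rounding the dragged time to the nearest whole or half frame. A minimal Python sketch of this behavior (an illustrative model, not Fusion’s internal code):

```python
def autosnap(time, mode):
    # "None" keeps subframe accuracy; "Frame" rounds to the nearest
    # whole frame; "Field" rounds to the nearest 0.5 of a frame.
    if mode == "Frame":
        return round(time)
    if mode == "Field":
        return round(time * 2) / 2
    return time  # "None": free positioning
```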
Autosnap Markers
When you click to create a new marker, the default behavior is that it will snap to the closest frame. If
you reposition the marker, it also snaps to the nearest frame as you drag. This behavior can be
changed in the Keyframes Editor’s contextual menu by choosing from the Options > Autosnap Markers
submenu. The options are:
– None: Markers can be placed anywhere with subframe accuracy.
– Frame: Frame forces all markers to snap to the nearest frame.
– Field: Field forces all markers to snap to the nearest field.
To reveal the Spreadsheet Editor, click on the Spreadsheet button in the toolbar. The Spreadsheet will
split the Work Area panel and appear below the Keyframes Editor’s interface.
TIP: Entering a frame number using a decimal point (e.g., 10.25 or 15.75) allows you to set
keyframes on a subframe level to create more natural animations.
Inserting Keyframes
You can also add new keyframes to an animation by clicking in an empty keyframe cell and entering
the desired time for the new keyframe. Using the cell under the new keyframe, you can enter a value
for the parameter.
TIP: You can use the Tab and Shift-Tab key shortcuts to move the selection right or left in the
Spreadsheet Editor.
Line Size
The Line Size option controls the height of each Timeline segment individually. It is often useful to
increase the height of a Timeline bar, especially when editing or manipulating complex splines.
Keyframes displayed as bars (left), and keyframes displayed as Point Values (right).
Waveforms are displayed in the Keyframes Editor for all MediaIn nodes
TIP: Right-clicking a track in the Keyframes Editor and choosing All Line Size > Minimum/
Small/Medium/Large/Huge changes all the tracks and audio waveforms in the
Keyframes Editor.
Animating in Fusion’s
Spline Editor
This chapter covers how you can keyframe effects and control animations in
Fusion’s Spline Editor.
Contents
Spline Editor Overview 246
Spline Editor Interface 247
The Graph, Header, and Toolbar 247
Renaming Splines 248
Changing Spline Colors 249
Navigating Around the Spline Editor 249
Markers 250
Autosnap 252
Creating Animation Splines 253
Animating with Different Spline Types 253
Working with Keyframes and Splines 255
Adding Keyframes 255
Locked and Unlocked Control Points 256
Selecting, Moving, and Deleting Keyframes 257
Showing Key Markers 258
Copying and Pasting Keyframes 258
Time and Value Editors 260
Modifying Spline Handles 261
Reducing Points 262
Graph
The graph is the largest area of the interface. It is here that you see and edit the animation splines.
There are two axes in the graph. The horizontal axis represents time, and the vertical axis represents
the spline’s value. A thin bar, called the playhead, runs vertically through the graph to represent the
current time as it does in the Timeline Editor. You can drag the playhead to change the current time,
updating the frame displayed in the viewers.
Playhead
The playhead is the thin red vertical bar that runs vertically through the Spline Editor graph and
represents the current time of the comp. You can drag the playhead to change the current time.
Status Bar
The status bar in the lower-right corner of the Fusion window continuously displays the pointer’s
position along the time and value axes as you move it over the graph.
Contextual Menus
There are two contextual menus accessible from the Spline Editor. The Spline contextual menu is
displayed by right-clicking over the graph, while the Guide contextual menu is displayed by right-
clicking on the Time Ruler above the graph.
Right-click on the horizontal axis Time Ruler for the Guide menu
Renaming Splines
The name of a spline in the header is based on the parameter it animates. You can change the name
of a spline by right-clicking on it in the header and choosing Rename Spline from the contextual menu.
The Zoom Height and Zoom Width sliders, Fit button, and Zoom to
Rectangle button can be used to navigate around the graph
Markers
Markers help identify important frames in a project. They may indicate a frame where a ray gun shoots
a beam in the scene, the moment that someone passes through a portal in the image, or any other
important event in the composite.
Markers added to the Timeline in the Cut, Edit, Fairlight, or Color page will appear in the Keyframes
Editor and Spline Editor of the Fusion page. They can also be added from the Keyframes Editor or the
Spline Editor while working in Fusion Studio or the Fusion page. Markers appear along the top of the
horizontal axis Spline Editor’s Time Ruler. They are displayed as small blue shapes, and when selected,
a line extends from each guide down vertically through the graph.
NOTE: Markers attached to clips in the Cut, Edit, Color, or Fairlight pages Timeline are not
visible in Fusion’s Spline Editor.
Unselected markers appear as blue shapes along the top, while selected
markers display a vertical line running through the graph
To create a marker:
– Right-click in the horizontal axis Time Ruler and choose Add Guide.
If markers currently exist in the comp, they are automatically displayed in the Marker List, regardless of
whether they were added in the Keyframes Editor or the Spline Editor or any other page in
DaVinci Resolve. You can also add markers directly from the Marker List, which can be helpful if you
have multiple markers you need to add, and you know the rough timing.
Autosnap
To assist in precisely positioning keyframe control points along the horizontal (time) axis, you can
enable the Spline Editor’s Autosnap function. Right-clicking over a spline and choosing Options >
Autosnap provides a submenu with four options.
– None: Allows free, sub-frame positioning of the keyframes.
– Frame: Keyframes snap to the nearest frame.
– Fields: Keyframes snap to the nearest field.
– Guides: Keyframes snap to the nearest marker.
To create a spline:
– Right-click on the parameter to be animated in the Inspector, and choose Animate from the
contextual menu.
Selecting Animate from the contextual menu connects the parameter to the default spline type. This is
usually a Bézier Spline unless you change the default spline in the Defaults panel of the Fusion
Preferences.
– Bézier Spline: Bézier splines are the default curve type. Three points for each keyframe on the
spline determine the smoothness of the curve. The first point is the actual keyframe, representing
the value at a given time. The other two points represent handles that determine how smoothly
the curve for the segments leading in and out of the keyframe are drawn. Bézier is the most used
spline type because Bézier splines allow you to create combinations of curves and straight lines.
Bézier spline
– Modify with > B-Spline: B-splines use a single point to determine the smoothness of the curve.
Instead of using handles, a single control point determines the value as well as the smoothness
of the curve. Holding down the W key while dragging left or right on the control point adjusts the
tension of the curve.
B-spline
– Modify with > Cubic Spline: Cubic splines are similar to Bézier splines, in that the spline
passes through the control point. However, Cubic splines do not display handles and always
make the smoothest possible curve. In this way, they are similar to B-splines. This spline type
is almost never used.
– Modify with > Natural Cubic Spline: Natural Cubic splines are similar to Cubic splines, except that
they change in a more localized area. Changing one control point does not affect other tangents
beyond the next or previous control points.
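The way a keyframe’s two handles shape the segments around it follows the standard cubic Bézier formula. The sketch below is an illustrative model of that math, not Fusion’s internal implementation:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    # Evaluate a cubic Bézier segment at parameter t in [0, 1].
    # p0 and p3 are the keyframe values; p1 and p2 are the handle
    # values that shape how the curve leaves p0 and approaches p3.
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3
```

With the handles placed a third of the way along a straight line between the keyframes, the segment is exactly linear; pulling a handle away from that line bends the curve toward it.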
Adding Keyframes
Once you create one keyframe, additional keyframes are automatically added to a spline whenever
you move the playhead and change the value of that spline’s parameter. For example, if you change
the strength of an animated glow at frame 15, a keyframe with the new value occurs on frame 15.
In the Spline Editor, control points can also be added directly to a spline by clicking on the spline
where you want to add the new keyframe.
The displacement spline represents the relative position along a motion path
Displacement paths are composed of locked and unlocked points. Whether a point is locked is
determined by how you added it to the polyline. Locked points on the spline have an associated point
in the viewer’s motion path; unlocked points do not have a corresponding point in the viewer’s motion
path. Each has a distinct behavior, as described below.
TIP: You can convert displacement splines to X and Y coordinates by right-clicking over the
motion path in the viewer and choosing Path#: Polyline > Convert to X/Y Path.
Unlocked Points
Unlocked points are created by clicking directly on the spline in the Spline Editor. These points give
additional control over the acceleration along the motion path without adjusting the path itself.
Conversely, you can add unlocked points in the viewer to control the shape of the motion path without
changing the timing.
You can change an unlocked point into a locked point, and vice versa, by selecting the point(s),
right-clicking, and choosing Lock Point from the contextual menu.
For more information on motion paths and locked keyframes, see Chapters 60 and 62 in the
DaVinci Resolve manual or Chapters 9 and 11 in the Fusion Studio manual.
Moving Keyframes
You can freely move keyframes with the mouse, keyboard, or the edit point controls. Keyframes can
even pass over existing points as you move them. For instance, if a keyframe exists on frame 5 and
frame 10, the keyframe at frame 5 can be repositioned to frame 15.
The key markers show keyframes in the horizontal axis using the same color as the splines
You can copy a single point’s value from a group of selected points. Since this process does not
deselect the selected set, you can continue picking out values as needed without having to
reselect points.
Keyframes can also be pasted with an offset, allowing you to duplicate a spline shape but increase the
values or shift the timing using an offset to X or Y.
TIP: You cannot copy and paste between different spline types. For instance, you cannot
copy from a Bézier spline and paste into a B-spline.
Time Editor
The Time Editor is used to modify the current time of the selected keyframe. You can change the Time
mode to enter a specific frame number, an offset from the current frame, or spread the keyframes
based on the distance (scale) from the playhead. You can select one of the three modes from the Time
mode drop-down menu.
Time
The number field shows the current frame number of the selected control point. Entering a new frame
number into the field moves the selected control point to the specified frame. If no keyframes are
selected or if multiple keyframes are selected, the field is empty, and you cannot enter a time.
Time Offset
Selecting T Offset from the drop-down menu changes the mode of the number field to Time Offset. In
this mode, the number field offsets the selected keyframes positively or negatively in time. For
example, entering an offset of 2 moves a selected keyframe from frame 10 to frame 12. If multiple
keyframes are selected, all of them move two frames forward from their current positions.
Time Scale
Selecting T Scale from the drop-down menu changes the mode of the number field to Time Scale. In
this mode, the selected keyframes’ positions are scaled based on the position of the playhead. For
example, if a keyframe is on frame 10 and the playhead is on frame 5, entering a scale of 2 moves the
keyframe 10 frames forward from the playhead’s position, to frame 15. Keyframes on the left side of the
playhead would be scaled using negative values.
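Both modes reduce to simple arithmetic: an offset is added to each keyframe’s frame number, while a scale multiplies each keyframe’s distance from the playhead. A Python sketch of this math (the function names are illustrative, not Fusion API):

```python
def offset_keyframe_time(key_frame, offset):
    # Time Offset shifts the keyframe by a positive or negative
    # number of frames: an offset of 2 moves frame 10 to frame 12.
    return key_frame + offset

def scale_keyframe_time(key_frame, playhead, scale):
    # Time Scale repositions a keyframe proportionally to its
    # distance from the playhead: a keyframe at frame 10 with the
    # playhead at frame 5 and a scale of 2 lands on frame 15.
    return playhead + (key_frame - playhead) * scale
```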
Value Editor
The Value Editor is used to modify the selected keyframe’s parameter value using one of three Value
modes. You can change the Value mode to enter a specific value for a parameter, an offset from the
value, or to spread the values. The mode is chosen from the Value mode drop-down menu.
Value
The number field shows the value of the currently selected keyframes. Entering a new number into the
field changes the value of the selected keyframe. If more than one keyframe is selected, the displayed
value is an average of the keyframes, but entering a new value will cause all keyframes to adopt
that value.
Value Offset
Choosing Offset from the drop-down menu sets the Value Editor to the Offset mode. In this mode, the
values of the selected keyframes are offset positively or negatively. For example, entering a value of
-2 changes a value from 10 to 8. If multiple keyframes are selected, all the keyframes have their
values modified by -2.
Value Scale
Choosing Scale from the drop-down menu sets the Value Editor to the Scale mode. Entering a new
value causes the selected keyframes’ values to be scaled or multiplied by the specified amount. For
example, entering a value of 0.5 changes a keyframe’s value from 10 to 5.
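The two value modes mirror the time modes: Offset adds the entered amount to every selected keyframe’s value, and Scale multiplies it. A minimal illustrative sketch (not Fusion’s API):

```python
def apply_value_offset(values, offset):
    # Value Offset adds the entered amount to every selected
    # keyframe: an offset of -2 turns a value of 10 into 8.
    return [v + offset for v in values]

def apply_value_scale(values, scale):
    # Value Scale multiplies every selected keyframe's value:
    # a scale of 0.5 turns a value of 10 into 5.
    return [v * scale for v in values]
```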
Dragging on a keyframe’s handles adjusts the slope of the segments passing through the spline. By
default, the two control handles on a control point are locked together so that if one moves, the one
on the other side moves with it. This maintains a constant tension through the keyframe. There are
situations, however, when it is desirable to modify these control handles separately for a more
pronounced curve or effect.
Reducing Points
When there are too many control points too close together on a spline, you can choose Reduce Points
to decrease their number, making it easier to modify the remaining points. The overall shape of the
spline is maintained as closely as possible while eliminating redundant points from the path.
Set the slider to as low a value as possible while the reduced spline still closely resembles the
shape of your original spline.
TIP: When the value is 100, no points will be removed from the spline. Use smaller values to
eliminate more points.
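Fusion does not document which reduction algorithm it uses, but the general idea can be sketched with a common curve-simplification scheme (Ramer–Douglas–Peucker): keep the endpoints, and drop interior points while the simplified curve stays within a tolerance of the original. This is only a model, not Fusion’s implementation:

```python
import math

def reduce_points(points, tolerance):
    # points: list of (time, value) pairs. Endpoints are always
    # kept; interior points are dropped while the curve stays
    # within `tolerance` of the original shape.
    if len(points) < 3:
        return points
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0) or 1.0
    # Perpendicular distance of each interior point from the chord.
    dists = [abs((y1 - y0) * (x - x0) - (x1 - x0) * (y - y0)) / length
             for x, y in points[1:-1]]
    worst = max(range(len(dists)), key=dists.__getitem__)
    if dists[worst] <= tolerance:
        return [points[0], points[-1]]  # interior points are redundant
    split = worst + 1
    left = reduce_points(points[:split + 1], tolerance)
    right = reduce_points(points[split:], tolerance)
    return left[:-1] + right
```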
To create a filter:
1 From the Options menu, choose Create/Edit Filters.
2 Click the New button to create a new filter and name the new filter in the dialog box.
3 Enable a checkbox next to the entire category or the individual tools in each category to
determine the tools included in the filter.
Enable each tool you want to keep in the Spline Editor when the filter is selected
The Invert All and Set/Reset All buttons can apply global changes to all the checkboxes, toggling the
selected states as described.
To disable a filter and show all tools in the Spline Editor again:
– Choose Show All from the Options menu.
Selection States
There are three selection options, labeled Select All Tools, Deselect All Tools, and Select One Tool,
that determine how the items in the Spline Editor header behave when a checkbox or label is selected
to activate a spline. These states are located in the Options menu in the upper-right corner of the
Spline Editor.
– Select All Tools: Choosing this option activates all splines for editing.
– Deselect All Tools: Choosing this option sets all spline checkboxes to disabled.
– Select One Tool: This option is a toggle. When Select One Tool is chosen from the menu, only
one spline in the header is active and visible at a time. Clicking on any spline’s checkbox will set
it to active, and all other splines will be cleared. When disabled, multiple splines can be active
in the header.
Interpolation
Keyframes are specific frames in an animation where control points are set to exact values on a given
parameter. Interpolation is the method used to fill in the unknown values between two keyframes.
Fusion automatically interpolates between two keyframes. However, you may want to modify the
interpolation to achieve a specific style of animation. The Spline Editor includes several interpolation
methods you can choose from using the toolbar.
Smooth
A smoothed segment provides a gentle keyframe transition in and out of the keyframe by slightly
extending the direction handles on the curve. This slows down the animation as you pass through the
keyframe. To smooth the selected keyframe(s), press Shift-S or click the toolbar’s Smooth button.
TIP: Invert is used only for non-animated LUT splines, which are currently only available in the
LUT Editor window.
Step Out causes the value of the selected keyframe to hold right up to the next keyframe.
Reversing Splines
Reverse inverts the horizontal direction of a segment of an animation spline. To apply reverse, choose
a group of points in a spline and click the Reverse button, or right-click and choose Reverse from the
contextual menu, or press the V key. The group of points is immediately mirrored horizontally in the
graph. Points surrounding the reversed selection may also be affected.
Looping Splines
It is often useful to repeat an animated section, either infinitely or for a specified number of times, such
as is required to create a strobing light or a spinning wheel. Fusion offers a variety of ways to repeat a
selected segment.
Set Loop
To repeat or loop a selected spline segment, select the keyframes to be looped. Select Set Loop from
the contextual menu or click on the Set Loop button in the toolbar. The selected section of the spline
repeats forward in time until the end of the global range, or until another keyframe ends the
repeating segment.
Relative Loop
The Relative Loop mode repeats the segment like the Loop, but each repetition adds upon the last
point of the previous loop so that the values increase steadily over time.
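In other words, every repetition is shifted by the segment’s net change in value. The sketch below models this with linear interpolation between keyframes (illustrative only, not Fusion’s internal code):

```python
def sample(keys, t):
    # Linear interpolation between (frame, value) keyframes; real
    # splines would interpolate with Bézier or B-spline curves.
    for (ta, va), (tb, vb) in zip(keys, keys[1:]):
        if ta <= t <= tb:
            return va + (vb - va) * (t - ta) / (tb - ta)
    return keys[-1][1]

def relative_loop(keys, t):
    # Past the last keyframe the segment repeats, but each pass adds
    # the segment's net change onto the previous one, so the values
    # climb steadily over time.
    (t0, v0), (t1, v1) = keys[0], keys[-1]
    if t <= t1:
        return sample(keys, t)
    period, delta = t1 - t0, v1 - v0
    cycles, local = divmod(t - t0, period)
    return sample(keys, t0 + local) + cycles * delta
```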
Looping Backward
You can choose Set Pre-Loop by right-clicking in the graph area and choosing it from the contextual
menu. This option contains the same options for looping as the Loop option buttons in the toolbar,
except that the selected segment is repeated backward in time rather than forward.
Gradient Extrapolation
You can choose Gradient Extrapolation by right-clicking in the graph area and choosing it from the
contextual menu. This option continues the trajectory of the last two keyframes.
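Continuing the trajectory of the last two keyframes is linear extrapolation: the slope between them is extended past the final keyframe. A minimal sketch of that math (an illustrative model, not Fusion’s code):

```python
def gradient_extrapolate(keys, t):
    # keys: sorted (frame, value) pairs. Extend the slope defined by
    # the last two keyframes beyond the end of the animation.
    (t0, v0), (t1, v1) = keys[-2], keys[-1]
    slope = (v1 - v0) / (t1 - t0)
    return v1 + (t - t1) * slope
```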
Time Stretching
Time Stretching allows for a selected group of keyframes to be proportionally stretched or squashed.
This allows you to change the duration of the animation while keeping the relative distance between
each keyframe. To enable spline stretching, select the group of keyframes that you want to time
stretch, and then choose Modes > Time Stretching from the graph’s contextual menu or click the Time
Stretch button in the toolbar.
TIP: If no keyframes are selected when you enable Time Stretch, drag a rectangle to set the
boundaries of the Time Stretch.
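Proportional stretching keeps each keyframe at the same relative position within the selection while the selection’s overall duration changes. A Python sketch of that remapping (illustrative only, not Fusion’s API):

```python
def time_stretch(frames, new_start, new_end):
    # Remap keyframe times from their current range onto a new range,
    # preserving the relative spacing between keyframes.
    old_start, old_end = frames[0], frames[-1]
    span = old_end - old_start
    new_span = new_end - new_start
    return [new_start + (f - old_start) / span * new_span for f in frames]
```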
Shape Box
The Shape Box transform mode is similar to Time Stretching; however, it can adjust the vertical scaling
of keyframe values as well as time.
TIP: If no points are selected, or if you want to select a new group of keyframes, you can drag
out a new rectangle at any time.
Ease In/Out
For a more precise way to adjust the length of Bézier direction handles attached to selected
keyframes, you can use the Spline Ease dialog. To show the dialog, select a keyframe in the graph,
and then choose Edit > Ease In/Out from the graph’s contextual menu or press T on the keyboard.
The Ease In/Out controls appear above the graph area. You can drag over the number fields to adjust
the length of the direction handles or enter a value in the fields.
Clicking the Lock In/Out button will collapse the two sliders into one, so any adjustments apply to both
direction handles.
To export a spline:
1 Select the active spline in the Spline Editor.
2 Right-click on the spline in the graph area to display the contextual menu.
3 Choose from three format options in the submenu.
4 Enter a name and location in the file browser dialog, and then click Save.
Exporting a spline gives you three options. You can export the Samples, Key Points, or All Points.
Samples adds a control point at every frame to create an accurate representation of the spline.
Key Points replicates the control point positions and values on the spline using linear interpolation.
All Points exports the spline as you see it in the Spline Editor, using the same position, value, and
interpolation.
To import a spline:
1 Add an animation spline for the parameter.
2 In the Spline Editor, right-click on the animation spline and select Import Spline from the
contextual menu.
3 In the File Browser dialog, select the spline curve .spl file, and then click Open.
Importing a new curve will replace any existing animation on the selected spline.
Animating with
Motion Paths
Layers and 3D objects can move along a designated spline shape to create motion
path animations. This chapter discusses how you can create, edit, and use motion
paths in Fusion.
Contents
Animating Using Motion Paths 275
Types of Motion Paths 276
Polyline Path 276
Path Modifier 279
Controlling Speed and Orientation along a Path 280
XY Path 282
Types of Control Points 284
Locked Points 284
Unlocked Points 287
Locking and Unlocking Points 288
Tips for Manipulating Motion Paths 288
Compound Motion Paths Using Path Centers 288
Copying and Pasting Motion Paths 289
Removing Motion Paths 289
Recording Motion Paths 290
Importing and Exporting Polylines 290
Native Format 290
The following nodes have parameters that can be animated using path modifiers to move an image
around the composition. These include, but are not limited to:
– Transform: Center X/Y can be animated to move an image around.
– DVE: Center X/Y can be animated to move an image around.
– Merge: Center X/Y can be animated to move the Foreground connected image around.
– Paint: Stroke Controls > Center X/Y can be animated to move a stroke around.
– Camera 3D node: Translation X/Y/Z
– Shape 3D node: Translation X/Y/Z
The following nodes have parameters that can be animated using paths to alter the direction of a
visual effect. These include, but are not limited to:
– Directional Blur: Center X/Y can be animated to change the direction of the blur.
– Hot Spot: Primary Center X/Y can be animated to move the hot spot around.
– Rays: Center X/Y can be animated to change the angle at which rays are emitted.
– Polygon/BSpline/Ellipse/Rectangle/Triangle mask: Center X/Y can be animated to
move the mask.
– Corner Positioner: Top Left/Top Right/Bottom Left/Bottom Right X/Y can be animated to move
each corner of the corner-pinned effect.
– Vortex: Center X/Y can be animated to move the center of the warping effect.
NOTE: It’s not possible to add a motion path to a one-dimensional value, such as blur
strength or merge angle. However, you can use the Spline Editor to edit these kinds of values
in a visual way.
Polyline Path
Polyline paths are the easiest motion paths to work with. You can use the spline shape in the viewer to
control the shape of the path, while a single Displacement curve in the Spline Editor is used to control
the acceleration along the path. The most obvious way to create a Polyline motion path is by
keyframing the Center X/Y parameter of a Transform node in the Inspector.
To create a Polyline motion path using the Center X/Y parameter in the Inspector:
1 Position the playhead on the frame where the motion will begin.
2 In the Inspector, click the gray Keyframe button to the right of the Center X and Y parameters.
This action applies the path modifier in the Modifiers tab in the Inspector.
3 Adjust the Center X and Y for the first keyframe position.
4 Position the playhead on the frame where the motion should change or stop.
5 In the Inspector, change the Center X and Y parameters to set a keyframe for the new location
automatically.
6 In the viewer, modify and refine the motion path by selecting a control point and using any of the
spline controls in the viewer toolbar.
7 Open the Spline Editor and adjust the Displacement spline to control the speed of the object
along the path.
Keyframing Center X/Y is not the only way to apply the path modifier. An alternative is to apply the path modifier directly to the Center X/Y parameter, using its contextual menu either in the Inspector or on the coordinate control in the viewer.
The object now has a path modifier applied, so without setting a keyframe you can drag the object
to begin creating a motion path in the viewer.
4 Move the playhead to a new frame.
5 Drag the onscreen coordinate control or adjust the Offset or Center values in the Inspector. A
keyframe is automatically created on the motion path, and a polyline is drawn from the original
keyframe to the new one.
6 The position of the center control is interpolated between the two keyframes. You can continue
adding points by moving the playhead and adjusting the object’s position until the entire motion
path you need is created. For motion paths, there’s no need to close the spline path; you can
leave it open.
7 Upon completion, set the polyline to Insert and Modify mode by selecting a point on the path
and pressing Command-I or clicking the Insert and Modify button on the toolbar. Don’t worry too
much about the overall shape of the motion path at this point. The shape can be refined further by
adding additional points to the polyline and modifying the existing points.
2 When done drawing the shape, click the Insert and Modify button in the viewer toolbar to leave
the mask shape as an open spline.
3 At this point you can select any of the control points along the spline and press Shift-S to make
them smooth or Shift-L to make them linear.
All mask polylines have animation enabled by default, but that is usually not desirable for a motion
path. You will need to remove this keyframe animation if you are using a mask shape.
4 At the bottom of the Inspector, right-click on the “Right-click here for shape animation” label and
choose Remove Polygon1Polyline.
5 Right-click at the bottom of the Inspector again and select Publish to give other nodes access to
this spline shape. (For a paint stroke, you will need to make the Stroke editable first by clicking the
Make Editable button in the Stroke Controls.)
10 At the bottom of the Modifiers tab, right-click on “Right-click here for shape animation” and choose
Connect To > Polygon1Polyline.
11 To quickly see where your object has gone, drag the Displacement slider back and forth.
12 You may want to use the Size parameter to adjust the size of the overall path.
The Displacement slider is meant to be keyframed for animating the object along the path.
Path Modifier
In terms of functionality, it makes no difference which method you use to generate the path modifier.
All the above methods are just different ways to get to the same point. Whichever way you decide to
add the path modifier, the Modifiers tab contains controls for the path.
The Displacement curve of a Poly path represents the acceleration of an object on a path.
Smaller values place the object closer to the beginning of the path, while larger values place it increasingly closer to the end of the path.
For instance, let’s say you have a bumblebee that bobs up and down as it moves across the screen. To have the bee accelerate as it moves up and down but slow down as it reaches its peaks and valleys, you use the Displacement curve.
The curved shape path does not define how fast the bee moves. The speed of the bee at any point
along the path is a function of the Displacement parameter. You can control the Displacement
parameter either in the Modifiers tab or in the Spline Editor.
After the initial animation is set, you can use the Displacement curve in the Spline Editor to adjust
the timing.
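The mapping from displacement to position can be sketched in Python. This is a hypothetical illustration, not Fusion's implementation: a displacement of 0.0 lands at the start of the path and 1.0 at the end, measured along the path's length.

```python
import math

def point_at_displacement(points, d):
    """Return the (x, y) position at normalized displacement d (0.0-1.0)
    along a piecewise-linear path given as a list of (x, y) points."""
    segments = list(zip(points, points[1:]))
    lengths = [math.dist(a, b) for a, b in segments]
    target = max(0.0, min(1.0, d)) * sum(lengths)
    for (a, b), length in zip(segments, lengths):
        if target <= length and length > 0:
            t = target / length  # fraction along this segment
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        target -= length
    return points[-1]

# An L-shaped path of total length 2.0: right 1 unit, then up 1 unit.
path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]

print(point_at_displacement(path, 0.25))  # (0.5, 0.0)
print(point_at_displacement(path, 1.0))   # (1.0, 1.0)
```

Because displacement is measured along the path's length rather than per keyframe, reshaping the spline in the viewer changes where each displacement value lands without changing the timing.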
TIP: Holding down the Option key while clicking on the spline path in the viewer will add a
new point to the spline path without adding a Displacement keyframe in the Spline Editor.
This allows you to refine the shape of the path without changing the speed along the path.
The Transform’s angle parameter connected to the path modifier’s Heading parameter
XY Path
Unlike a Polyline path, the XY path modifier uses separate splines in the Spline Editor to calculate
position along the X-axis and along the Y-axis.
At first glance, XY paths work like Polyline paths. To create the path once the modifier is applied,
position the playhead and drag the onscreen control where you want it. Position the playhead again
and move the onscreen control to its new position. The difference is that the control points are only
there for spatial positioning. There is no Displacement parameter for controlling temporal positioning.
Instead of dragging in the viewer, you can use the controls in the Modifiers tab to create a motion path,
while using the object’s original Inspector controls as an offset to this motion path. You can use the
XYZ parameters to position the object, the Center X/Y parameters to position the entire path, the Size
and Angle to scale and rotate the path, and the Heading Offset control to adjust the orientation.
TIP: XY path and Poly path can be converted between each other from the contextual menu.
This gives you the ability to change methods to suit your current needs without having to
redo animation.
The advantage of the XY path modifier is that you can explicitly set an XY coordinate at a specific time
for more control.
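The difference between the two path types can be sketched in Python (a hypothetical illustration, with linear interpolation standing in for Fusion's splines): an XY path stores two independent time curves, so the position at a frame is simply (x(frame), y(frame)), with no Displacement curve involved.

```python
def sample(keyframes, frame):
    """Linearly interpolate a {frame: value} curve at an arbitrary frame."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return keyframes[f0] + t * (keyframes[f1] - keyframes[f0])

# Two independent splines: X and Y are keyed on different frames.
x_curve = {0: 0.1, 40: 0.9}            # X spline
y_curve = {0: 0.5, 20: 0.8, 40: 0.5}   # Y spline

# Position at frame 20 is just (x(20), y(20)).
print(round(sample(x_curve, 20), 6), round(sample(y_curve, 20), 6))  # 0.5 0.8
```

Note that each axis can be keyed and edited independently, which is exactly what makes XY paths convenient for setting explicit coordinates at explicit times.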
Locked Points
Locked points are the motion path equivalents of keyframes. They are created by changing the
playhead position and moving the animated control. These points indicate that the animated control
must be in the specified position at the specified frame.
The locked points are displayed as larger-sized hollow squares in the viewer. Each locked key has an
associated point on the path’s Displacement curve in the Spline Editor.
Deleting a locked point from the motion path will change the overall timing of the motion.
Moving the playhead and repositioning the bee adds a second locked point
At a value of 0.0, the control will be located at the beginning of the path. When the value of the
Displacement spline is 1.0, the control is located at the end of the path.
8 Select the keyframe at frame 45 in the Displacement spline and drag it to frame 50.
The motion path is now 50 frames long, without making any changes to the motion path’s shape.
If you try to change this point’s value from 1.0 to 0.75, it cannot be done: because the point is the last in the animation, its value in the Displacement spline must remain 1.0.
9 Position the playhead on frame 100 and move the bee center to the upper-left corner of
the screen.
Moving locked points changes the duration of a motion path without changing its shape
This will create an additional locked point and set a new ending for the path.
Unlocked points added to the motion path are not displayed on the Displacement spline
You can add unlocked points to the Displacement spline as well. Additional unlocked points in the
Spline Editor can be used to make the object’s motion pause briefly.
Knowing the difference between locked and unlocked points gives you independent control over the
spatial and temporal aspects of motion paths.
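The effect of an unlocked point that flattens the Displacement spline can be sketched in Python (hypothetical; linear interpolation stands in for Fusion's splines). A flat segment holds the displacement constant, so the object pauses on the path while time advances.

```python
def sample(keys, frame):
    """Linear interpolation over a list of (frame, value) keys, sorted by frame."""
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            return v0 + (frame - f0) / (f1 - f0) * (v1 - v0)

# Displacement: reach the midpoint of the path by frame 20, hold until
# frame 40 (the flat segment an unlocked point can create), then finish.
disp = [(0, 0.0), (20, 0.5), (40, 0.5), (60, 1.0)]

print(sample(disp, 30))  # 0.5  -- the object sits still halfway along the path
print(sample(disp, 50))  # 0.75 -- moving again toward the end
```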
When pasting a path, the old motion path will be overwritten with the one from the clipboard.
Native Format
To save a polyline shape in Fusion’s native ASCII format, right-click on the header of the Mask node in
the Inspector and select Settings > Save As from the contextual menu. Provide a name and path for
the saved file and select OK to write a file with the .setting extension. This file will save the shape of a
mask or path, as well as any animation splines applied to its points or controls.
To load the saved setting back into Fusion, you first create a new polyline of the same type, and then
select Settings > Load from the mask’s context menu or drag the .setting file directly into the
Node Editor.
If you want to move a polyline from one composition to another, you can also copy the
node to the clipboard, open your second composition, and paste it from the clipboard into the new
composition.
Using Modifiers,
Expressions, and
Custom Controls
Some of the most powerful aspects of Fusion are the different ways it allows you to
go beyond the standard tools delivered with the application. This chapter provides an
introduction to a variety of advanced features, including Modifiers, Expressions, and
Scripting, which can help you extend the functionality and better integrate Fusion into
your studio.
Contents
The Contextual Menu for Parameters in the Inspector 292
Using Modifiers 292
Adding the Right Modifier for the Job 292
Adding Modifiers to Individual Parameters 292
Combining Modifiers and Keyframes 293
Publishing a Parameter 293
Connecting Multiple Parameters to One Modifier 294
Adding and Inserting Multiple Modifiers 294
Performing Calculations in Parameter Fields 295
Using SimpleExpressions 295
Pick Whipping to Create an Expression 298
Removing SimpleExpressions 298
Customizing User Controls 298
FusionScript 302
Using Modifiers
Parameters can be controlled with modifiers, which are extensions to a node’s toolset. Many modifiers
can automatically create animation that would be difficult to achieve manually. Modifiers can be as
simple as keyframe animation or linking the parameters to other nodes, or modifiers can be complex
expressions, procedural functions, external data, third-party plug-ins, or fuses.
A modifier’s controls are displayed in the Modifiers tab of the Inspector. When a selected node has a
modifier applied, the Modifiers tab will become highlighted as an indication. The tab remains grayed
out if no modifier is applied.
Modifiers appear with header bars and header controls just like the tools for nodes. A modifier’s title
bar can also be dragged into a viewer to see its output.
Publishing a Parameter
The Publish modifier makes the value of a parameter available, so that other parameters can connect
to it. This allows you to simultaneously use one slider to adjust other parameters on the same or
different nodes. For instance, publishing a motion path allows you to connect multiple objects to the
same path.
For more information on all modifiers available in Fusion, see Chapter 122, “Modifiers,” in the DaVinci Resolve Reference Manual or Chapter 61 in the Fusion Reference Manual.
Using SimpleExpressions
SimpleExpressions are a special type of script placed alongside the parameter they control. They are useful for setting up simple calculations, building unidirectional parameter connections, or a combination of both. You add a SimpleExpression by entering an equals sign directly in the number field of the parameter and then pressing Return.
Inside the SimpleExpression text box, you can enter one-line scripts in Lua with some Fusion-specific
shorthand. Some examples of Simple Expressions and their syntax include:
– Merge1:GetValue("Blend", time-5): Returns the value from another input, but sampled at a different frame, in this case five frames before the current one.
– Point(Text1.Center.X, Text1.Center.Y-.1): Unlike the previous examples, this returns a Point, not a Number. Point inputs use two members, X and Y. In this example, the Point returned is 1/10 of the image height below Text1’s Center. This can be useful for making unidirectional parameter links, like offsetting one Text from another.
– Text("Colorspace: "..(Merge1.Background.Metadata.ColorSpaceID)): The string inside the quotes is concatenated with the metadata string, perhaps returning: Colorspace: sRGB
– "\n from the comp "..ToUNC(comp.Filename): To get a new line in the Text, \n is used. Various attributes from the comp can be accessed with the comp variable, like the filename, expressed as a UNC path.
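The time-offset behavior of GetValue in the first example can be illustrated with a hypothetical Python sketch, using a simple keyframe dictionary with linear interpolation in place of Fusion's actual animation evaluation:

```python
def get_value(curve, frame):
    """Hypothetical stand-in for node:GetValue(input, time): sample a
    {frame: value} curve at the requested frame with linear interpolation."""
    frames = sorted(curve)
    frame = max(frames[0], min(frames[-1], frame))  # clamp to keyed range
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return curve[f0] + t * (curve[f1] - curve[f0])
    return curve[frames[0]]

blend = {0: 0.0, 50: 1.0}  # Blend animated from 0.0 to 1.0 over 50 frames
time = 30

# Equivalent of Merge1:GetValue("Blend", time - 5): the value 5 frames ago.
print(get_value(blend, time - 5))  # 0.5
```

Sampling an input at `time - 5` like this is a common way to build trailing or delayed animation from an existing parameter.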
TIP: When working with long SimpleExpressions, it may be helpful to drag the Inspector panel
out to make it wider or to copy/paste from a text editor or the Console.
After setting an expression that generates animation, you can open the Spline Editor to view the
values plotted out over time. This is a good way to check how your SimpleExpression evaluates
over time.
A sine wave in the Spline Editor, generated by the expression used for Text1: Size
For more information about writing Simple Expressions, see the Fusion Studio Scripting Guide, and the
official Lua documentation.
SimpleExpressions can also be created and edited within the Spline Editor. Right-click on the
parameter in the Spline Editor and select Set Expression from the contextual menu. The
SimpleExpression will be plotted in the Spline Editor, allowing you to see the result over time.
Removing SimpleExpressions
To remove a SimpleExpression, right-click the name of the parameter, and choose Remove Expression
from the contextual menu.
You use the page list to assign the new control to one of the tabs in the Inspector. There are also
settings to determine the defaults and ranges, and whether it has an onscreen preview control. The
Input Ctrl box contains settings specific to the selected Type, and the View Ctrl attributes box contains
a list of onscreen preview controls to be displayed, if any.
All changes made using the Edit Controls dialog are stored in the current tool instance, so they can be copied/pasted to other nodes in the comp. However, to keep these changes for other comps, you must save the node settings and add them to the Bins in Fusion Studio or to your favorites.
As an example, we’ll customize the controls for a DirectionalBlur:
Let’s say we wanted a more interactive way of controlling a linear blur in the viewer, rather than using
the Length and Angle sliders in the Inspector. Using a SimpleExpression, we’ll control the length and
angle parameters with the Center parameter’s onscreen control in the viewer. The SimpleExpression
would look something like this:
For Angle:
atan2(.5-Center.Y , .5-Center.X) * 180 / pi
This admittedly somewhat advanced function does the job fine. Dragging the onscreen control adjusts
the angle and length for the directional blur. However, now the names of the parameters are
confusing. The Center parameter doesn’t function as the center anymore. It is the direction and length
of the blur. It should be named “Blur Vector” instead. You no longer need to edit the Length and Angle
controls, so they should be hidden away, and since this is only for a linear blur, we don’t need the Type
menu to include Radial or Zoom. We only need to choose between Linear and Centered. These
changes can easily be made in the Edit Controls dialog.
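The arithmetic in the Angle expression above can be sanity-checked outside Fusion. The following Python sketch mirrors the math (the expression itself is Lua inside Fusion): it computes the angle from the image center (0.5, 0.5) toward the dragged Center control.

```python
import math

def blur_angle(center_x, center_y):
    """Mirror of the SimpleExpression for Angle:
    atan2(.5 - Center.Y, .5 - Center.X) * 180 / pi"""
    return math.atan2(0.5 - center_y, 0.5 - center_x) * 180 / math.pi

# Control dragged straight right of the image center:
print(round(blur_angle(0.75, 0.50), 6))  # 180.0
# Control at (0.25, 0.25), below and left of center:
print(round(blur_angle(0.25, 0.25), 6))  # 45.0
```

A companion Length expression is not shown in this excerpt; plausibly it would measure the distance from (0.5, 0.5) to the control (a Pythagorean distance), but that is an assumption rather than text from the manual.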
Finally, to remove Radial and Zoom options from the Type menu:
1 In the Edit Control dialog, select the Type from the ID list.
2 Select Controls from the Page list.
3 Select Radial from the Items list and click Del to remove it.
4 Select Zoom from the Items list and click Del to remove it.
5 Click OK.
To make this new checkbox affect the original Type menu, you’ll need to add a SimpleExpression to
the Type:
iif(TypeNew==0, 0, 2)
The “iif” function is a conditional (inline if) expression: it evaluates the condition and returns the first value if it is true, or the second value if it is false.
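As a sketch of the same logic (hypothetical Python; Fusion evaluates the SimpleExpression in its own Lua-based expression language), the checkbox value maps onto the original menu indices, here assumed to be 0 for Linear and 2 for Centered:

```python
def iif(condition, if_true, if_false):
    """Python stand-in for Fusion's iif(cond, a, b) conditional."""
    return if_true if condition else if_false

# TypeNew is the custom checkbox: 0 = unchecked, 1 = checked.
# Assumed original Type menu indices: 0 = Linear, 2 = Centered.
for type_new in (0, 1):
    print(iif(type_new == 0, 0, 2))  # prints 0, then 2
```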
Other script types are available as well, such as Startup Scripts, Scriptlibs, Bin Scripts, Event Suites, Hotkey Scripts, Intool Scripts, and SimpleExpressions. Fusion Studio also allows external and command-line scripting, as well as network rendering Job and Render node scripting.
FusionScript also forms the basis for Fuses and ViewShaders, which are special scripting-based
plug-ins for tools and viewers that can be used in both Fusion and Fusion Studio.
For more information about scripting, see the Fusion Scripting Documentation, accessible from the
Documentation submenu of the Help menu.
Bins
This chapter covers the bin system in Fusion Studio. Bins allow for the storage and
organization of clips, compositions, tool settings, and macros, similar to the Media
Pool and Effects Library in DaVinci Resolve. It includes a built-in Studio Player for
creating a playlist of multiple shots and their versions. Bins can be used in a server
configuration for organizing shots and collaborating with other team members across
the studio.
Contents
Bins Overview 304
Bins Interface 304
Viewing and Sorting Bins 305
Organizing Bins 306
Adding and Using Content 308
File Type Details 309
Using Content from Bins 309
Jog and Shuttle 310
Stamp Files 310
Using the Studio Player 311
Playing a Single Clip 311
Creating a Reel 312
Connecting Bins Over a Network 319
Adding a Remote Bin Entry 320
Accessing Remote Bins 321
Permissions 321
Studio Player and Bin Server 321
Bins Interface
The Bins window is separated into two panels. The sidebar on the left is a list of the bins, while the
Content panel on the right displays the selected bin’s contents.
The sidebar organizes content into bins, or folders, using a hierarchical list view. These folders can be
organized however they suit your workflow, but standard folders are provided for Clips, Compositions,
Favorites, Projects, Reels, Settings, Templates, and Tools. The Tools category is a duplicate of all the
tools found in the Effects library. The Tools bin is a parent folder, and parent folders contain subfolders
that hold the content. For instance, Blurs is a subfolder of the Tools parent folder. Parent folders can be
identified by the disclosure arrow to the left of their name.
When you select a folder from the sidebar, the contents of the folder are displayed in the Contents
panel as thumbnail icons.
Each bin in the sidebar can be set to List view or Icon view independently of each other. So while you
may have some bins you want to see as a list, others may be easier to view as icons.
Organizing Bins
Once you begin adding your own categories and content, you can have hundreds of items that need
to be organized. To keep all of your elements accessible, you’ll want to use some basic organization,
just like keeping files and documents organized on your computer.
You can also click the New Folder icon in the toolbar.
When you drag a folder onto another folder in the sidebar, you create a hierarchical subfolder.
Dragging it to the Library parent folder at the top of the sidebar will add it to the top level of the
Bins window.
TIP: You cannot undo removing a folder from the Bins window.
TIP: Unsupported files like PDFs can also be saved in the bin, and the application that
supports it will launch when you double-click the file. It’s a useful location for scripts
and notes.
If you have an operating system file browser window open, you can drag files directly into a bin as
well. When adding an item to the bins, the files are not copied. A link is created between the content
and the bins, but the files remain in their original location.
Tool Settings
If you want to add a node with custom settings to a bin, first save the node as a setting by right-clicking
over it in the Node Editor and choosing Settings > Save As. Once the setting is saved, you can add it
to a bin by dragging it from a File Browser into the Bins window.
Media
Dragging media into the Node Editor from a bin window creates a new Loader that points to the media
on disk. Still files or photos are automatically set to loop.
Compositions
To add a composition, you must right-click and choose Open. Dragging a comp item onto an open
composition will have no effect. When a composition is added, it is opened in a new window. It is not
added to the existing composition.
Shuttle mode begins playing the clip forward or backward once you press the right mouse button and
drag either left or right. The clip continues to play until the mouse button is released or you reach the
end of the clip.
Stamp Files
Stamp files are low-resolution, local proxies of clips, which are used for playback of clips stored on a
network server, or for very large clips.
The Status bar at the top of the Bins window shows the progress of the stamp creation. Since the
stamp creation is a background process, you can queue other stamps and can continue working with
the composition.
Once you have the clip open in the Studio Player, you can click the Play button in the toolbar at the
bottom of the window.
To close the current clip in the Studio Player and return to the bins:
– Click the three-dot Options menu in the lower-left corner and choose Close.
Creating a Reel
A reel is a playlist or clip list that is viewed either as storyboard thumbnails or a timeline. In the bin, you
create a new reel item to hold and play back multiple clips. The thumbnail for a reel appears with a
multi-image border around it, making it easier to identify in the Bin window.
Once created and named, the reel appears in the current bin.
Double-clicking the reel opens the Studio Player interface along the bottom of the Bin window. An
empty Studio Player appears in the top half of the window.
The toolbar across the bottom of the interface has various controls for setting a loop, showing and
adjusting color, playback transport controls, collaboration sync, guide overlays, and frame number and
playback speed in fps.
The toolbar along the bottom of the Studio Player includes controls to customize playback
Toolbar buttons
– Set Loop In/Out: Sets the start and end frame for playing a section of the Timeline in a loop.
– Shot: Sets the loop for the entire clip.
– Reset: Disables the loop mode.
– M: Shows Metadata of the image.
– RGB and alpha: Toggles between displaying the color and alpha of the clip.
– Brightness Gamma: Adjusts the brightness and gamma of the viewer and is applied to all clips.
Individual clip color controls can also be applied using another menu.
– Video: Outputs the image to Blackmagic Design DeckLink and UltraStudio devices.
– Transport controls: Used to play forward, backward, fast forward, and fast backward, as well as go
to the start and end of a clip.
– Sync: A three-way toggle allowing the Studio Player to be controlled or to control another player
over a network. The Off setting disables this functionality.
– Guide buttons: These three buttons control the visibility of three customizable guide settings.
Creating Versions
Alternatively, you can add a version to an existing clip by dragging the new item on top.
Version Menu
You can choose which version to view by right-clicking over the clip in the storyboard and selecting it
from the Version > Select menu.
The Version menu also includes options to move or rearrange the order of the clip versions as well as
remove a version, thereby deleting it from the stack.
Shot Menu
The per-clip Shot menu includes functions to Rename the shot, Remove the shot, Trim the clip’s In and
Out points, add notes, adjust the color, and add an audio soundtrack.
When notes are added, they are stamped with the time, date, and name; the name is taken from the bin login name and the computer name.
Selecting Color from the Shot menu allows you to make tonal adjustments per clip using Color
Decision List (CDL) style controls.
Options Menu
The three-dot Options menu in the lower left of the interface displays the menu that can be used to
switch between viewer and the bin in the top half of the window. It is also used to clear the memory
used in playback by selecting Purge Cache.
Selecting Reel > Notes opens the Notes dialog to add annotations text to the entire reel project.
The Reel > Export option saves the reel to disk as an ASCII readable format so it can be used
elsewhere or archived.
The View menu is used when you want to switch between the reel storyboard layout and a
Timeline layout.
Guides
You can assign customizable guide overlays to three Guide buttons along the bottom of the Studio
Player. Fusion includes four guides to choose from, but you can add your own using the XML Guide
format and style information provided at the end of this chapter. You assign a customizable guide to
one of the three Guide buttons by right-clicking over a button and selecting a guide from the list. To
display the guide, click the assigned button.
Elements =
{
	HLine { Y1 = "10%", Pattern = 0xF0F0 },
	HLine { Y1 = "90%", Pattern = 0xF0F0 },
	HLine { Y1 = "95%" },
	HLine { Y1 = "5%" },
	VLine { X1 = "10%", Pattern = 0xF0F0 },
	VLine { X1 = "90%", Pattern = 0xF0F0 },
	VLine { X1 = "95%" },
	VLine { X1 = "5%" },
	HLine { Y1 = "50%", Pattern = 0xF0F0, Color = { R = 1.0, G = 0.75, B = 0.05, A = 1.0 } },
	VLine { X1 = "50%", Pattern = 0xF0F0, Color = { R = 1.0, G = 0.75, B = 0.05, A = 1.0 } },
},
}
– HLine: Draws a horizontal line and requires a Y-value, which is measured from the top of the
screen. The Y-value can be given either in percent (%) or in absolute pixels (px).
– VLine: Draws a vertical line and requires an X-value, which is measured from the left of the screen. The X-value can be given either in percent (%) or in absolute pixels (px).
– Pattern: The Pattern value is made up of four hex values and determines the visual appearance of
the line.
Examples of such patterns include:
FFFF draws a solid line: ________________
EEEE draws a dashed line: -------------------
ECEC draws a dash-dot line: -.-.-.-.-.-.-.-.-.-.-.
ECCC draws a dash-dot-dot line: -..-..-..-..-..-..-..
AAAA draws a dotted line: ………………
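The pattern's 16 bits act as a repeating on/off mask along the line. A hypothetical Python sketch that renders each bit as a dash or a space shows why FFFF reads as solid and F0F0 as dashed:

```python
def render_pattern(pattern, repeats=1):
    """Render a 16-bit line pattern as '-' (bit set) and ' ' (bit clear),
    most significant bit first."""
    bits = format(pattern & 0xFFFF, "016b") * repeats
    return "".join("-" if b == "1" else " " for b in bits)

print(repr(render_pattern(0xFFFF)))  # '----------------'  solid
print(repr(render_pattern(0xF0F0)))  # '----    ----    '  dashed
print(repr(render_pattern(0xAAAA)))  # '- - - - - - - - '  dotted
```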
– Color: The Color value is composed of four groups of two hex values each. The first three groups define the RGB colors; the last group defines the transparency. For instance, the hex value for pure red would be #FF000000, and pure lime green would be #00FF0000.
– Rectangle: Draws a rectangle, which can be empty or filled, and supports the same pattern and
color settings described above.
It requires two X-values and two Y-values to define its extent, for example: Rectangle { Pattern = 0xF0F0, X1 = "10%", Y1 = "10%", X2 = "90%", Y2 = "90%" }.
– FillMode: Applies to rectangles only and defines whether the inside or the outside of the
rectangle should be filled with a color. Leave this value out to have just a bounding rectangle
without any fill.
FillMode = ("None" | "Inside" | "Outside")
– FillColor: Applies to rectangles only and defines the color of the filled area specified by FillMode.
FillColor = "FF000020"
This panel shows a list of the available bin servers, with buttons below for entries to be added to or
deleted from the list.
Then add a User name and Password if one is needed to access the server.
The Library field lets you name the bins. So if you want to create a bin for individual projects, you would name it in the Library field, and each project would get its own bin.
The Application field allows larger studios to specify some other program to serve out the
bin requests.
Permissions
Unlike other directories on a server, your access to bins on a network is stored in the bin document.
The bins themselves contain all the users and passwords in plain text, making it easy for someone to
administer the bins.
When the Sync function is On, the transport controls can be set to control the playback or follow the
master controller.
Fusion Connect
This chapter goes into detail on how to use the Fusion Connect AVX2 plug-in
with an Avid Media Composer editing system. The Fusion Connect AVX plug-in
is only available with Fusion Studio.
Contents
Fusion Connect Overview 323
System Requirements 323
The Effects Palette 323
The Layer Input Dialog 324
Applying Fusion Connect to a Transition Point 324
Export Clips 324
Edit Effect 325
Browse for Location 325
Auto Render in Fusion 326
Red on Missing Frames 326
Compress Exported Frames 326
Edit Effect Also Launches Fusion 326
Versioning 326
Create New Version 326
Version 326
About RAW Images 326
About Color Depth 327
Manual vs. Auto-Render 327
Fusion/Avid Project Relationship 329
Rendering with Fusion 329
Directory Structure of Fusion Connect Media 330
Advanced Project Paths 331
Configuring Paths on macOS 331
Configuring Paths on Windows 332
Fields and Variables 332
Environment Variables 333
System Requirements
Fusion Connect has the following requirements:
– Supported Avid products: Media Composer 8.x
– Supported product: Fusion Studio 8.1 or later
– Installation: Two files will be installed in your Media Composer:
– Fusion Connect.avx
– BlackmagicFusionConnect.lua
Once the layer count is selected, Fusion Connect will be applied to the Timeline.
– Select the layer count equal to the number of video track layers you want to ingest into Fusion.
– Filler can be used as a layer.
– Fusion Connect will allow a maximum of eight layers.
You can use the Avid dialog boxes or smart tools to adjust the length and offset of
the transition to Start, Center, End, or Custom.
Export Clips
By pressing the Export Clips button in the Effects Editor, Fusion Connect exports all the associated
clips as image sequences, to provide access to them in Fusion. Any previously exported existing
images are overwritten, ensuring that all media needed by Fusion is accessible. Performing an export
is desired if you want to use Fusion on a different computer than the one with Media Composer
installed.
When using Fusion on the same computer as Media Composer, there is no need to export the clips
explicitly by checking the Export Clips checkbox. Without this option enabled, Fusion Connect saves
TIP: Set your Timeline Video Quality button to Full Quality (green) and 10-bit color bit depth. If
the Timeline resolution is set to Draft Quality (green/yellow) or Best Performance (yellow),
Fusion receives subsampled, lower-resolution images.
Edit Effect
After exporting the clips, the Edit Effect button performs three subsequent functions:
– Creates a Fusion composition, with Loaders, Savers, and Merges (for layers), or a Dissolve
(for transitions). This function is only performed the first time a comp is created when the
Fusion Connect AVX2 plug-in is applied.
– Launches Fusion (if installed on the machine), if it is not already launched.
– Opens the Fusion comp associated with created effects.
The path settings field in the Effects Editor updates to show the current location. If you apply Fusion
Connect to another clip in the Timeline, the last location is remembered.
Versioning
Creating visual effects is almost always an iterative process. You’ll often need to create revisions after
your first pass at the effect. Built into Fusion Connect is a versioning feature that lets you create
multiple revisions of an effect and switch between them from within Media Composer.
Version
This slider selects which version of the comp is used in the Media Composer Timeline. It can be used
to interactively switch from one version to the other in order to compare the results.
– Manual render: Creates a comp file when first clicked, but will not overwrite this file when clicked again. Attempts to launch Fusion and load the comp. If Fusion is not installed locally, the comp can be accessed manually from a different machine via the network.
– Auto-render: Creates Fusion RAW files as Export Clip would do. Also creates a comp file when first clicked, but will not overwrite this file when clicked again. Launches Fusion and loads the comp.
7 Render the clip in Avid. This is an optional step, but recommended. In auto-render mode, both the Fusion comp and the Avid clip are rendered simultaneously: if a full-size rendered frame is not found, the full size/depth source frames are exported automatically for that time, and Fusion is then instructed to start rendering from that point. The resulting frame is loaded and returned to Media Composer, and the process is repeated for each frame thereafter.
In all three node tree layouts outlined above, there will also be a Saver node. The Saver node is
automatically set to render to the directory that is connected back to the Media Composer Timeline
with the correct format. If for some reason the file format or file path is changed in the Saver node,
the Fusion Connect process will not render correctly.
TIP: Due to the design of the AVX2 standard, hidden embedded handles are not supported.
To add handles, prior to exporting to Fusion, increase the length of your clip in the Media
Composer Timeline to include the handle length.
Fusion Node Editor representations of a single clip segment effect in the Media Composer
Timeline (left), a multi-layered composite (center), and the transition (right)
TIP: If segments in the Avid Timeline have different durations, move the longest clip to the top
layer and apply Fusion Connect to that clip. This will ensure all clips are brought into Fusion
without being trimmed to the shortest clip.
Fusion Connect AVX uses frame rate and resolution from the Media Composer Timeline.
The exported files are organized into the following folder structure (the source directories are created by Avid; the Render directories are created by Fusion):

Avid ProjectName/
   Avid SequenceName/
      Bob_v01.comp
      Charlie_v01.comp
      Bob/
      Charlie/ (directory named after the exported clip)
         Charlie_0000.raw ... (image sequence named after the exported clip)
      Dave/ (optional directory named after the exported clip; an optional second source clip for Charlie_v01.comp)
         Dave_0000.raw ...
      xyz_name/ (optional directory named after the exported clip)
         xyz_0000.raw ...
      Render_v01/ (directory named ‘Render_’ plus the version number of the Fusion comp)
         Charlie_0000.raw ... (rendered image sequence named after the Avid clip)
      Render_v02/
If you apply the effect to a transition, the naming behavior might be somewhat different.
Fusion Connect AVX creates folder structures in the OS to save media and
Fusion compositions. Those names are reflected in the Timeline.
You will notice that the Fusion Connect icon is a green dot (real-time) effect. If your hardware is fast
enough, the results that populate the plug-in will play in real time. However, it’s recommended that you
render the green dot effect, which will force an MXF precompute to be created to guarantee real-
time playback.
Default paths can be configured using variables, similar to the process on Windows, but for added convenience you can enter any desired path defaults directly into the fields in the dialog, without needing to use environment variables.
Fusion Connect can define the user variables directly in the Fusion Connect plug-in. Click the
Configure Path Defaults button to launch the path defaults dialog editor. In the Options section of the
Fusion Connect AVX2 plug-in, click the triangle to reveal the path details.
For example, the CONNECT_DRIVE variable, with the value $DRIVE, sets the drive or folder for all Connect projects.
User Variables
Click on the link that says “Edit environment variables for your account.”
System Variables
Click on the link that says “Edit the system environment variables.”
User Variables
For user-specific operations, place the environment variable in ~/.bash_profile
TIP: System variables control the environment throughout the operating system, no matter
which user is logged in.
User variables always overrule system variables; if a specific setting is duplicated in both, the user variable wins.
System Variables
For system-wide operations, place the environment variable in /etc/profile
TIP: If you type directly in Fusion Connect’s Path Editor, you do not have to type the variable,
just the value. You also can make modifications without having to restart the
Media Composer! The only caveat is that in order to remove a variable, you must exit
Media Composer and clear the environment variable in the Windows interface or
macOS Terminal and restart the Media Composer.
Values and their descriptions:
– $DRIVE: This will force the directory to the drive where the Avid media is stored.
– $PROJECT: This will force a directory based on the Avid project name for which the media was digitized/imported or AMA linked.
– $SEQUENCE: This will force a directory based on the Avid sequence name for which the media was digitized/imported or AMA linked.
Here is an example of how a variable can be set up to support project and sequence names within
your directory.
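How these tokens combine into a directory path can be sketched in Python. This is a hypothetical illustration only; the function name and expansion logic are not part of Fusion, and the drive, project, and sequence values would come from the Avid session.

```python
# Hypothetical sketch of how the $DRIVE, $PROJECT, and $SEQUENCE
# tokens could expand into a Connect directory path. The token names
# come from the list above; the expansion code is illustrative only.
def expand_connect_path(template, drive, project, sequence):
    return (template
            .replace("$DRIVE", drive)
            .replace("$PROJECT", project)
            .replace("$SEQUENCE", sequence))

# A path default that nests project and sequence names under the drive:
template = r"$DRIVE\$PROJECT\$SEQUENCE"
print(expand_connect_path(template, "X:", "MyProject", "MySequence"))
# X:\MyProject\MySequence
```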
Preferences
This chapter covers the various options that are available from the
Fusion Preferences Window.
Contents
Preferences Overview 336
Categories of Preferences 337
Preferences In Depth 340
AVI 341
Defaults 341
Flow 343
Frame Format 345
General 346
GPU 348
Layout 350
Loader 351
Memory 353
Network 354
Path Maps 356
Preview 359
QuickTime 360
Script 361
Spline Editor 362
Splines 364
Timeline 365
Tweaks 366
User Interface 368
Video Monitoring 370
View 371
VR Headsets 372
Preferences Overview
The Preferences window provides a wide variety of optional settings you can use to configure
Fusion’s behavior to better suit your working environment. The Preferences window can be opened
from a menu at the top of the interface.
In DaVinci Resolve, to open the Fusion Preferences window on macOS, Windows, or Linux:
– Switch to the Fusion page and choose Fusion > Fusion Settings.
In Fusion Studio, to open the Fusion Preferences window, do one of the following:
– On macOS, choose Fusion Studio > Preferences.
– On Windows, choose File > Preferences.
– On Linux, choose File > Preferences.
Categories of Preferences
The first entry in the Preferences sidebar is assigned to the Global preferences. Clicking the Global
and Default Settings disclosure arrow reveals the following sections.
3D View
The 3D View preferences offer control over various parameters of the 3D Viewers, including grids,
default ambient light setup, and Stereoscopic views.
Defaults
The Defaults preferences are used to select default behavior for a variety of options, such as
animation, global range, timecode display, and automatic tool merging.
Frame Format
The Frame Format preferences are used to create new frame formats as well as select the default
image height and width when adding new creator tools like Background and Text+. You also set the
frame rate for playback.
General
The General preferences contain options for the general operation, such as auto save, and gamma
settings for color controls.
Path Map
Path Map preferences are used to configure virtual file path names used by Loaders and Savers as
well as the folders used by Fusion to locate comps, macros, scripts, tool settings, disk
caches, and more.
Script
The Script preferences include a field for passwords used to execute scripts externally, programs to
use for editing scripts, and the default Python version to use.
Splines
Options for the handling and smoothing of animation splines, Tracker path defaults, onion-skinning,
roto assist, and more are found in the Splines preference.
Timeline
The Timeline preferences are where you create and edit Timeline/Spline filters and set default options
for the Keyframes Editor.
Tweaks
The Tweaks preferences handle miscellaneous settings for modifying the behavior when loading
frames over the network and queue/network rendering.
User Interface
These preferences set the appearance of the user interface window and how the Inspector is
displayed.
View
The View preferences are used to manage settings for viewers, including default control colors,
Z-depth channel viewing ranges, default LUTs, padding for fit, zoom, and more.
VR Headsets
The VR Headsets preferences allow configuration of any connected Virtual Reality headsets, including
how stereo and 3D scenes are viewed.
Import
The Import settings contain options for EDL Import that affect how flows are built using the data
from an EDL.
3D View
The 3D View preferences contain settings for various defaults in the 3D Viewers, including grids,
default ambient light setup, and Stereoscopic views.
Grid
The Grid section of the 3D View preferences configures how the grid in 3D Viewers is drawn.
– Grid Antialiasing: Some graphics hardware and drivers do not support antialiased grid
lines, causing them to sort incorrectly in the 3D Viewer. Disabling this checkbox will disable
antialiasing of the grid lines. To turn off the grid completely, right-click in a 3D Viewer and choose
3D Options > Grid.
– Size: Increasing the Size value will increase the number of grid lines drawn. The units used for the
spacing between grid lines are not defined in Fusion. A “unit” is whatever you want it to be.
– Scale: Adjusting the overall scaling factor for the grid is useful, for example, if the area of the grid
appears too small compared to the size of your geometry.
Orthographic Views
Similar to the Perspective Views, the Orthographic Views (front, top, right, and left views) section sets
the nearest and furthest point any object can get to or from the viewer before clipping occurs.
Fit to View
The Fit to View section has two value fields that manage how much empty space is left around objects
in the viewer when the F key is pressed.
– Fit Selection: Fit Selection determines the empty space when one or more objects are selected
and the F key is pressed.
– Fit All: Fit All determines the empty space when you press F with no objects selected.
Default Lights
These three settings control the default light setup in the 3D Viewer.
The default ambient light is used when lighting is turned on and you have not added a light to the
scene. The directional light moves with the camera, so if the directional light is set to “upper left,” the
light appears to come from the upper-left side of the image/camera.
AVI
The AVI preference is only available in Fusion Studio on Windows. It configures the default AVI codec
settings when you select AVI as the rendering file format in the Saver node.
– Compressor: This drop-down menu displays the AVI codecs available from your computer. Fusion
tests each codec when the application opens; therefore, some codecs may not be available if the
tests indicate that they are unsuitable for use within Fusion.
– Quality: This slider determines the amount of compression to be used by the codec. Higher values
produce clearer images but larger files. Not all codecs support the Quality setting.
– Key Frame Every X Frames: When checked, the codec creates keyframes at specified intervals.
Keyframes are not compressed in conjunction with previous frames and are, therefore, quicker to
seek within the resulting movie. Not all codecs support the keyframe setting.
– Limit Data Rate To X KB/Second: When checked, the data rates of the rendered file are limited to
the amount specified. Not all codecs support this option. Enter the data rate used to limit the AVI
in kilobytes (kB) per second, if applicable. This control does not affect the file unless the Limit Data
Rate To option is selected.
Defaults
The choices made here are used to determine Fusion’s behavior when new tools are added to the
Node Editor and when parameters are animated.
Default Animate
The Default Animate section is used to change the type of modifier attached to a parameter when the
Animate option is selected from its contextual menu. The default option is Nothing, which uses a
Bézier spline to animate numeric parameters and a path modifier for positional controls.
– Number With and Point With: Drop-down lists are used to select a different modifier for the new
default. For example, change the default type used to animate position by setting the Point With
drop-down menu to XY Path.
Choices shown in this menu come from installed modifiers that are valid for that type of parameter.
These include third-party plug-in modifiers, as well as native modifiers installed with Fusion.
Auto Tools
The Auto Tools section determines which tools are added automatically for the most common
operations of the Background tools and Merge operations.
– Background: When set to None, a standard Background tool is used; however, the drop-down
menu allows you to choose from a variety of tools including 2D and 3D tools to customize the
operation to your workflow.
– Merge: When set to None, nothing happens. When set to Merge, connecting the outputs of two
tools or dragging multiple clips on the Node Editor uses a standard Merge. Other valid options for
this are Anaglyph, Channel Booleans, and Dissolve.
– Use Merge Only When Connecting Outputs Directly: When this option is active, Merges are not
automatically added when you drag multiple clips from the Finder or Windows Explorer onto the
Flow area.
Global Range
Using the Start and End fields, you can define the Global Start and End frames used when creating
new compositions.
Flow
Many of the same options found in the Node Editor’s contextual menu, like settings for Tile Picture, the
Navigator, and Pipe Style, are found in this category.
Force
The Force section can set the default to display pictures in certain tool tiles in the Node Editor rather
than showing plain tiles. The Active checkbox sets pictures for the actively selected tool, the All
checkbox enables pictures for all tiles, and the Source and Mask checkbox enables tile pictures for
just Source and Mask tools.
When All is enabled, the picture shown will either be a thumbnail of the image rendered by the tool if
the tool has rendered, or if the Show Thumbnails option is disabled, the tool’s default icon is used.
Concatenated transforms will also show a default icon.
– Show Modes/Options: Enabling this option will display icons in the tool tile depicting various
states, like Disk Caching or Locked.
– Show Thumbnails: When this checkbox is selected, tool tiles set to show tile pictures will
display the rendered output of the tool. When the checkbox is cleared, the default icon for the tool
is used instead.
– Pipe Style: This drop-down menu selects which method is used to draw connections between
tools. The Direct method uses a straight line between tools, and Orthogonal uses horizontal and
vertical lines.
– Build Direction: When auto-building or laying out a node tree, Build Direction controls whether
tools are organized horizontally or vertically.
– Scale: The Scale menu allows you to select the default zoom level of the Node Editor when a new
composition is created.
Frame Format
Frame Format preferences allow you to select the resolution and frame rate for the nodes that
generate images, like Background, Fast Noise, and Text+. It also sets the color bit depth for final
renders, previews, and interactive updates in the viewer. The color bit depth settings only apply to
Fusion Studio; rendering in DaVinci Resolve always uses 32-bit float.
Default Format
This drop-down menu is used to select the default resolution for Generator tools from a list of presets.
This is only a default setting; these settings can be overridden using the Resolution settings in a
node’s Inspector.
Use the Edit boxes to change any of the default settings. To create a new setting, press the New
button, enter a name for it in the dialog box that appears, and then enter its parameters.
Color Depth
The three menus in the Color Depth section are used to select the color mode for processing preview
renders, interactive renders, and full (final) renders. Processing images at 8-bit is the lowest color
depth and is rarely sufficient for final work these days but is acceptable for fast previews. 16-bit color
has much higher color fidelity but uses more system resources. 16-bit and 32-bit float per channel use
even more system resources and are best for digital film and HDR rendered images.
Generally, these options are ignored by the composition unless a Loader or Creator tool’s Color Depth
control is set to Default.
General
The sections contained in the General preferences affect the behavior of the Inspector as well as
some other user interface elements.
Usability
Usability has a number of project, Node Editor, and user interface settings that can make the
application easier to work with, depending on your workflow.
– Auto Clip Browse: When this checkbox is enabled, the File Browser is automatically displayed
when a new Loader or Saver is added to the Node Editor.
– New Comp on Startup: When checked, a new, empty project is created each time Fusion Studio is
launched. This has no effect in DaVinci Resolve’s Fusion page.
– Summarize Load Errors: When loading node trees or “comps” that contain unknown tools (e.g.,
comps that have been created on other computers with plug-ins not installed on the current
machine), the missing tools are summarized in the console rather than a dialog being presented
for every missing tool.
– Save Compressed Comps: This option enables the saving of compressed node trees, rather than
ASCII-based text files. Compressed node trees take up less space on disk, although they may take
a moment longer to load. Node trees containing complex spline animation and many paint strokes
can grow into tens of megabytes when this option is disabled. However, compressed comps
cannot be edited with a text editor unless saved again as uncompressed.
– Show Video I/O Splash: This toggles whether the Splash image will be displayed over the video
display hardware. This only applies to Fusion Studio.
– Use Simplified Copy Names: This option reduces the occurrence of underscores in tool names
when copying.
– Show Render Settings: When this checkbox is selected, the Fusion Render Settings dialog will be
displayed every time a render is started in Fusion Studio. Holding Shift while starting a render will
prevent the display of the dialog for that session, using whatever settings were applied during the
last render. Disabling this option reverses this behavior.
Auto Save
The Auto Save settings only apply to Fusion Studio. To set auto backups for the Fusion page in
DaVinci Resolve, use the DaVinci Resolve Project Load and Save Preferences.
When Auto Save is enabled in Fusion Studio, comps are automatically saved to a backup file at regular
intervals defined by the Delay setting. If a backup file is found when attempting to open the comp, you
are presented with the choice of loading either the backup or the original.
If the backup comp is opened from the location set in the Path Map preference, saving the backup will
overwrite the original file. If the backup file is closed without saving, it is deleted without affecting the
original file.
– Save Before Render: When enabled, the comp is automatically saved before a preview or final
render is started.
– Delay: This preference is used to set the interval between Auto Saves. The interval is set using
mm:ss notation, so entering 10 causes an Auto Save to occur every 10 seconds, whereas entering
10:00 causes an Auto Save every 10 minutes.
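The mm:ss notation described above can be sketched as a small Python helper. This is an illustrative parser, not Fusion's actual implementation.

```python
# Illustrative parser for the Auto Save Delay mm:ss notation: a bare
# number means seconds, while mm:ss means minutes and seconds.
# This is a sketch, not Fusion's actual code.
def delay_to_seconds(text):
    if ":" not in text:
        return int(text)                     # "10"    -> 10 seconds
    minutes, seconds = text.split(":")
    return int(minutes) * 60 + int(seconds)  # "10:00" -> 600 seconds

print(delay_to_seconds("10"))     # 10
print(delay_to_seconds("10:00"))  # 600
```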
Proxy
– Update All, Selective, No Update: The Update mode button is located above the toolbar. You
can use this preference to determine the default mode for all new comps. Selective is the usual
default. It renders only the tools needed to display the images in the Display view. All will render
all tools in the composition, whereas No Update prevents all rendering.
– Standard and Auto: These sliders designate the default ratio used to create proxies when the
Proxy and Auto Proxy modes are turned on. These settings do not affect the final render quality.
Even though the images are being processed smaller than their original size, the image viewing scales
in the viewers still refer to original resolutions. Additionally, image processing performed in Proxy
Scale mode may differ slightly from full-resolution rendering.
The Proxy and Auto Proxy size ratios may be changed from within the interface itself by right-clicking
on the Prx and APrx buttons above the toolbar and selecting the desired value from the
contextual menu.
GPU
The GPU preference is only available in Fusion Studio. In DaVinci Resolve, you can configure the GPU
processing in Resolve’s Memory and GPU preferences.
In Fusion Studio, the GPU preference is used to specify the GPU acceleration method used for
processing, based on your computer platform and hardware capabilities. It is also used for enabling
caching and debugging GPU devices and tools.
Options
The GPU options include radio buttons to select whether the GPU is used when processing and, if so,
which computer framework is used for communicating with the GPU.
– GPU Tools: This preference has three settings: Auto, Disable, and Enable. When set to Disable, no
GPU acceleration is used for tools or third-party plug-ins. Fuses may still require GPU acceleration.
If Enable is selected, GPU acceleration is available for tools and plug-ins, if appropriate
drivers are installed.
– API: The API setting selects the GPU processing method to use.
– Device: The Device setting determines which GPU hardware to use in the case of multiple GPUs.
The Auto setting gives priority to GPU processing; however, if it is unavailable, Fusion uses the
platform default. Currently, both the AMD and CPU options require either the AMD Catalyst 10.10
Accelerated Parallel Processing (APP) technology Edition driver or the ATI Stream SDK 2.1 or later
to be installed. The Select setting allows you to choose the device explicitly.
Debugging
The more advanced preferences located in this section are designed for diagnostics and analyzing
GPU operations.
– Verbose Console Messages: Enabling this option causes information to be shown in the Console.
For example, Startup Logs, Compiler Warnings, and Messages.
– OpenGL Sharing: Enabling this option shares system RAM with onboard GPU RAM to create a
larger, but slower, OpenGL memory pool.
– Clear Cache Files: This option will clear already compiled GPU code and then
recompile the kernels.
Layout
There are a lot of options, but in practice, you simply organize the interface the way you prefer it on
startup and when a new composition is created, then open this Preferences panel and click the
three buttons to grab the Program Layout, the Document Layout, and the Window Settings.
Program Layout
The Program Layout is used to save the overall Fusion interface window and any open floating
windows. Each new composition you open within the larger overall Fusion interface window will adhere
to these preferences.
– Grab Program Layout: Pressing this button stores the application’s overall current position
and size.
– Run Mode: This menu is used to select the application’s default mode at startup.
You choose between a Maximized application window, a Minimized application, or a Normal
application display.
– Use the Following Position and Size: When checked, the values stored when Grab Program
Layout was selected will be used when starting Fusion Studio.
– Create Floating Views: When checked, the position and size of the floating viewers will be saved
when the Grab Program Layout button is used.
Window Settings
Rather than saving entire comp layouts, you can save position and size for individual floating windows
and panels within a comp using the Window Settings.
– Automatically Open This Window: When checked, the selected window will automatically be
opened for new flows.
– Grab Window Layout: Pressing this button stores the size and position of the selected window.
– Run Mode: Select the default run mode for the selected window. You can choose between a
Maximized window, a Minimized window, or a Normal window display.
– Use Grabbed Position and Size: When checked, the selected window will be created using the
stored position and size.
Loader
The Loader preferences are only available in Fusion Studio. Using the Loader preferences, you can set
options for the default Loader’s color depth and aspect ratio as well as define the local and network
cache settings.
Cache
The Cache preferences allow you to control how disk caching operates in Fusion. You can set how and
where the cache is generated, when the cache is removed, how the cache reacts when source files
are not available, as well as many other cache related options. This is not to be confused with RAM
cache, which is controlled in the Memory preferences.
– Disable All Local Caching: This setting disables local caching.
– Cache Files from Network DiskCaches: If a tool has disk caching enabled, and the disk cache
files are stored remotely on the network, then enabling this option will use a local copy of those
cache files, similarly to the local cache on a networked Loader.
– Enable Local Caching of Loaders: Files will be copied into the LoaderCache path set below or in
the Path Maps preferences.
– Cache Multi-Frame Files: Files like AVI or QuickTime will be copied into the LoaderCache path.
This may take some time if the file is large.
– Don’t Cache Files from Local Disks: Files that do not sit on a network drive will not be copied into
the LoaderCache path. You can disable this option if you have, for example, a fast SSD cache drive
and want to use it for local files as well to speed up file access while working interactively.
– Only Files Smaller Than xxx MB: Files larger than the value set here will not be copied into the
LoaderCache path.
– Cache Path Separator Character: When Enable Local Caching of Loaders is enabled, you can use
this setting to rebuild the path of the original files in LoaderCache.
For instance, given the default “!” character, the original path X\Project\MyShots\Shot0815\ will be
translated into X!Project!MyShots!Shot0815! in the LoaderCache path. Other separator characters
may be used, including the “\” character, which will use subdirectories in LoaderCache:
X\Project\MyShots\Shot0815\.
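The translation described above can be sketched in Python. This is a hypothetical illustration of the separator substitution, not Fusion's actual code.

```python
# Sketch of the LoaderCache path translation: every path separator in
# the original file path is replaced with the configured separator
# character, flattening the path into a single cache directory name.
def to_cache_name(original_path, separator="!"):
    return original_path.replace("\\", separator)

print(to_cache_name("X\\Project\\MyShots\\Shot0815\\"))
# X!Project!MyShots!Shot0815!
```

Note that choosing "\" as the separator leaves the path unchanged, which is why that choice preserves subdirectories inside LoaderCache.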
– If Original File Is Missing: This setting provides three options to determine the caching behavior
when the original files can’t be found. The Fail option behaves exactly as the Default Loader in
Fusion. The Loader will not process, which may cause the render to halt. The Load Cache option
loads the cache even though no original file is present. The Delete Cache option clears missing
files from the cache.
– Cache Location: For convenience, this is a copy of the LoaderCache path set in the Path Maps
preferences.
– Explore: This button opens the LoaderCache path in a macOS Finder window
or a Windows Explorer window.
– Clear All Cache Files: This button deletes all cached files present in the LoaderCache path.
Caching Limits
The Caching Limits include options for Fusion’s RAM cache operation. Here, you can determine how
much RAM is allocated to the RAM cache for playing back comps in the viewer.
– Limit Caching To: This slider is used to set the percentage of available memory used for the
interactive tool cache. Available memory refers to the amount of memory installed in the computer.
When the interactive cache reaches the limit defined in this setting, it starts to remove lower
priority frames in the cache to clear space for new frames.
– Automatically Adjust In Low Memory Situations: This checkbox will set the caching to adjust
when memory is low. The console will display any cache purges.
– Leave At Least X MBytes: This setting is used to set the hard limit for memory usage. No matter
what the setting of the Cache Limit, this setting determines the amount of physical memory
available for use by other applications. Normally, this value should not be smaller than 25 MBytes.
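How the percentage limit and the hard memory floor interact can be sketched as follows. The function name and sample values are hypothetical, for illustration only.

```python
# Sketch of the interaction between Limit Caching To (a percentage of
# installed RAM) and Leave At Least X MBytes (a hard floor of memory
# kept free for other applications). Names and values are hypothetical.
def cache_limit_mb(installed_mb, limit_percent, leave_at_least_mb):
    by_percent = installed_mb * limit_percent // 100
    hard_cap = installed_mb - leave_at_least_mb
    return min(by_percent, hard_cap)

# 60% of 32 GB of RAM, always leaving at least 1024 MB free:
print(cache_limit_mb(32768, 60, 1024))  # 19660
```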
Final Render
These settings apply to memory usage during a rendering session, either preview or final, with no
effect during an interactive session.
– Render Slider: This slider adjusts the number of frames that are rendered at the same time.
– Simultaneous Branching: When checked, more than one branch of a node tree will be
rendered at the same time. If you are running low on memory, turn this off to increase
rendering performance.
Network
The Network preferences are only available in Fusion Studio. These preferences are used to set up
and control network rendering in Fusion Studio. The majority of settings are found in the Render
Manager dialog.
General
The General section places the most commonly used options at the top. These options determine in
what capacity the system is used during network rendering.
– Make This Machine a Render Master: When enabled, Fusion will accept network render
compositions from other computers and manage the render. It does not necessarily mean that this
computer will be directly involved in the render, but it will submit the job to the render nodes listed
in the Render Manager dialog.
– Allows This Machine to Be Used as a Network Slave: When enabled, this computer can be used
as a Render node and will accept compositions for network rendering. Deselect it to prevent other
people from submitting compositions to render on this computer.
– Render on All Available Machines: Enable this checkbox to ignore groups and priorities
configured in the Render Manager. Compositions submitted from this computer for network
rendering will always be assigned to every available slave.
Email Notification
You can use the Email Notification section to set up who gets notified with status updates regarding
the render jobs and the network.
– Notify Options: These checkboxes cause emails to be sent when certain render events take
place. The available events are Queue Completion, Job Done, and Job Failure.
– Send Email to: Enter the address or addresses to which notifications should be sent. You separate
multiple addresses with a semicolon.
– Override Sender Address: Enter an email address that will be used as the sender address. If this
option is not selected, no sender address is used, which may cause some spam filters to prevent
the message from being delivered to the recipient.
Server Settings
This section covers Clustering and Network Rendering. For more information on these settings and
clustering, see Chapter 65, “Rendering Using Saver Nodes” in the DaVinci Resolve Reference Manual
or Chapter 4 in the Fusion Reference Manual.
Path Maps
For Fusion Studio, there are two main advantages of virtual path maps over actual file paths. One
is that you can easily change the path to media connected to Loaders (for example, when moving a
comp from one drive to another), without needing to make any changes in the composition. The other
advantage is when network rendering, you can bypass the different OS filename conventions.
– Enable Reverse Mapping of Paths Preferences: This checkbox is at the top of the Path Map
settings. When enabled, Fusion uses the built-in path maps for entries in the path’s settings
when applying mapping to existing filenames. The main benefit is for Fusion Studio. Enabling
this checkbox causes Loaders to automatically use paths relative to the location of the saved
composition when they are added to the Node Editor. For more information on using relative paths
for Loaders, see Chapter 105, “IO Nodes,” in the DaVinci Resolve Reference Manual or Chapter 44
in the Fusion Reference Manual.
As with other preferences in Fusion Studio, paths maps are available in both Global and Composition
preferences. Global preferences are applied to all new compositions, while Composition path maps
are only saved with the current composition. Composition path maps will override Global path maps
with the same name.
– Default Path Maps: The Defaults are user-editable path maps. They can reference the System
paths as part of their own paths. For instance, the Temp folder is defined in the System path and used
by the Default DiskCache path map to refine the nested location (Temp:DiskCache). Default path
maps can also redirect paths without using the Global System path maps. After you change a
Default, the updated setting can be selected in the Preferences window, and a Reset button at the
bottom of the Preferences window will return the modified setting to the System default.
– AutoSaves: This setting determines the Fusion Comp AutoSave document’s location, set in the
Fusion General preferences.
– Bins: Sets the location of Fusion Studio bins. Since the bins use pointers to the content, the
content is not saved with the bin. Only the metadata and pointers are saved in the bins.
– Brushes: Points Fusion to the folder that contains custom paintbrushes.
– Comps: The folder where Fusion Studio compositions are saved. On macOS or Windows, the
default location is in Users/YourUserName/Documents/Blackmagic Design/Fusion.
– Config: Stores Configuration files used by Fusion Studio during its operation.
– Defaults: Identifies the location of node default settings so they can be restored if overwritten.
– DiskCache: Sets the location for files written to disk when using the Cache to Disk feature. This
location can be overridden in the Cache to Disk window.
– Edit templates: The location where Fusion macros are saved in order to appear as templates in
the DaVinci Resolve Effects Library.
– Filters: Points to a folder containing Convolution filters like sharpen, which can be used for the
Custom Filter node.
– User Path Maps: User paths are new paths that you have defined that do not currently exist in the
Defaults settings.
– Comp refers to the folder where the current composition is saved. For instance, saving media
folders within the same folder as your Fusion Studio comp file is a way to use relative file paths
for Loaders instead of actual file paths.
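As an illustration (the project, folder, and file names here are hypothetical), a Loader whose footage sits in a folder next to the saved comp file can reference it through the Comp: path map rather than an absolute path, so the composition keeps working when the whole project folder is moved:

```
C:\Projects\Show\Shot010\Footage\plate_0001.exr    absolute path; breaks if the folder moves
Comp:Footage\plate_0001.exr                        resolved relative to the saved comp file
```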
Preview
Preview is only available in Fusion Studio. Previews in DaVinci Resolve use the Scratch Disk setting in
the Media Storage preferences.
In the Preview preferences, you configure the creation and playback options for preview renders.
QuickTime
The QuickTime preferences are only available in Fusion Studio on macOS. These settings configure
the default QuickTime codec settings when you select QuickTime as the rendering file format in the
Saver node.
– Compressor: This drop-down menu displays the QuickTime codecs available from your computer.
Fusion tests each codec when the program is started; therefore, some codecs may not be
available if the tests indicate that they are unsuitable for use within Fusion.
– Quality: This slider is used to determine the amount of compression to be used by the codec.
Higher values produce clearer images but larger files. Not all codecs support the Quality setting.
Script
The preferences for Scripting include a field for passwords used to execute scripts from the command
line and applications for use when editing scripts.
Login
There are three login options for running scripts outside of the Fusion application.
– No Login Required to Execute Script: When enabled, scripts executed from the command line, or
scripts that attempt to control remote copies of Fusion, do not need to log in to the workstation in
order to run.
– Specify Custom Login: If a username and password are assigned, Fusion will refuse to process
incoming external script commands (from FusionScript, for example), unless the Script first logs in
to the workstation. This only affects scripts that are executed from the command line, or scripts
that attempt to control remote copies of Fusion. Scripts executed from within the interface do not
need to log in regardless of this setting. For more information, see the Scripting documentation.
Options
– Script Editor: Use this preference to select an external editor for scripts. This preference is used
when selecting Scripts > Edit.
Python Version
– Two options are presented here for selecting the version of Python you plan to use
for your scripts.
Spline Editor
The Spline Editor preferences allow you to set various spline options for Autosnap behavior, handles,
markers, and more. This only affects splines displayed in the Spline Editor, not splines created in the
viewer using the polygon tool or paths.
– Autosmooth: Automatically smooths out any newly created points or key frames on the splines
selected in this section. You can choose to automatically smooth animation splines, B-Splines,
polyline matte shapes, LUTs, paths, and meshes.
– B-Spline Modifier Degree: This setting determines the degree to which the line segments
influence the resulting curvature when B-Splines are used in animation. Cubic B-Splines determine
a segment through two control points between the anchor points, and Quadratic B-Splines
determine a segment through one control point between the anchor points.
– B-Spline Polyline Degree: This setting is like the one above but applies to B-Splines
used for masks.
– Tracker Path Points Visibility: This setting determines the visibility of the control points on tracker
paths. You can show them, hide them, or show them when your cursor hovers over the path, which
is the default behavior.
– Tracker Path: The default tracker creates Bézier-style spline paths. Two other options in this
setting allow you to choose B-Spline or XY Spline paths.
– Polyline Edit Mode on Done: This setting determines the state of the Polyline tool after you
complete the drawing of a polyline. It can either be set to modify the existing control points on the
spline or modify and add new control points to the spline.
– Onion Skinning: The Onion Skinning settings determine the number of frames displayed while
rotoscoping, allowing you to preview and compare a range of frames. You can also choose
whether the preview frames come only from before the current frame, only from after it, or are
split between the two.
Filter/Filter to Use
The Filter menu populates the hierarchy area below it with the selected filter setting so you can
edit that filter. The Filter to Use menu selects the default filter setting located in the Keyframes
Editor Options menu.
Timeline Options
The Timeline Options configure which options in the Keyframes Editor are enabled by default. A series
of checkboxes correspond to buttons located in the Timeline, allowing you to determine the states of
those buttons at the time a new comp is created. For more information on the Keyframes Editor
functions, see Chapter 70, “Animating in Fusion’s Keyframes Editor” in the DaVinci Resolve Reference
Manual or Chapter 9 in the Fusion Reference Manual.
– Autosnap Points: When moving points in the Keyframes Editor, the points will snap to the fields or
to the frames, or they can be moved freely.
Tweaks
The Tweaks preferences handle a collection of settings for fine-tuning Network rendering in Fusion
Studio and graphics hardware behavior.
Network
The Network section is used to control and monitor the health of communication packets over TCP/IP
when rendering over a network in Fusion Studio.
– Maximum Missed Heartbeats: This setting determines the maximum number of times the network
is checked before terminating the communication with a Render node.
– Heartbeat Interval: This sets the time between network checks.
File I/O
The File I/O options are used to control the performance when reading frames or large media files
from both direct and networked attached storage.
– I/O Canceling: This option enables a feature of the operating system that allows queued
operations to be canceled when the function that requested them is stopped. This can improve
the responsiveness, particularly when loading large images over a network.
Enabling this option will specifically affect performance while loading and accessing formats that
perform a large amount of seeking, such as the TIFF format.
This option has not been tested with every hardware and OS configuration, so it is recommended
to enable it only after you have thoroughly tested your hardware and OS configuration using drive
loads from both local disks and network shares.
– Enable Direct Reads: Enabling this checkbox uses a more efficient method when loading a large
chunk of contiguous data into memory by reducing I/O operations. Not every operating system
supports this ability, so it may produce unexpected behavior.
– Read Ahead Buffers: This slider determines the number of 64K buffers that are used to read ahead
in a file I/O operation. More buffers make loading frames from disk more efficient, but they make
Fusion less responsive to interactive changes that require disk access.
Area Sampling
The Area Sampling options allow you to fine-tune the RAM usage on Render nodes by trading off
speed for lower RAM requirements.
– Automatic Memory Usage: This checkbox determines how area sampling uses available memory.
Area sampling is used for Merges and Transforms. When the checkbox is enabled (default), Fusion
will detect available RAM when processing the tool and determine the appropriate trade-off
between speed and memory.
If less RAM is available, Fusion will use a higher proxy level internally and take longer to render.
The quality of the image is not compromised in any way, just the amount of time it takes to render.
In node trees that deal with images larger than 4K, it may be desirable to override the automatic
scaling and fix the proxy scale manually. This can preserve RAM for future operations.
– Pre-Calc Proxy Level: Deselecting the Automatic Memory Usage checkbox enables the Pre-Calc
Proxy Level slider. Higher values will use less RAM but take much longer to render.
OpenGL
This section controls how Fusion makes use of your graphics card when compositing in 3D with the
Renderer 3D node. Most settings may be left as they are, but since OpenGL hardware varies widely in
capabilities and different driver revisions can sometimes introduce bugs, these tweaks can be useful if
you are experiencing unwanted behavior.
– Disable View LUT Shaders: OpenGL shaders can often dramatically accelerate View LUTs, but
this can occasionally involve small trade-offs in accuracy. This setting will force Fusion to process
LUTs at full accuracy using the CPU instead. Try activating this if View LUTs do not seem to be
giving the desired result.
– Image Overlay: The Image Overlay is a viewer control used with Merge and Transform tools
to display a translucent overlay of the transformed image. This can be helpful in visualizing the
transformation when it is outside the image bounds but may reduce performance when selecting
the tool if cache memory is low. There are three settings to choose from: None, Outside, and All.
– None: This setting never displays the translucent overlay or controls, which can reduce the
need for background renders, in some cases resulting in a speed up of the display.
– Outside: This will display only those areas of the control that are outside the bounds of the
image, which can reduce visual confusion.
– All: Displays all overlays of all selected tools.
– Smooth Resize: This setting can disable the viewer’s Smooth Resize behavior when displaying
floating-point images. Some older graphics cards are not capable of filtering floating-point
textures or may be very slow. If Smooth Resize does not work well with float images, try setting
this to flt16 or int.
– Auto Detect Graphics Memory (MB): Having Fusion open alongside other OpenGL programs
like 3D animation software can lead to a shortage of graphics memory. In those cases, you can
manually reduce the amount of memory Fusion is allowed to use on the card. Setting this too low
or too high may cause performance problems or data loss.
– Use 10-10-10-2 Framebuffer: If your graphics hardware and monitor support 30-bit color (Nvidia
Quadro/AMD Radeon Pro, and some Nvidia GeForce/AMD Radeon), this setting will render
viewers with 10 bits per primary accuracy, instead of 8 bits. Banding is greatly reduced when
displaying 3D renders or images deeper than 8-bit.
User Interface
The User Interface preferences set the appearance of the user interface window and how the
Inspector is displayed.
Appearance
When enabled, the Use Gray Background Interface checkbox will change the color of the background
in Fusion’s panels to a lighter, more neutral shade of gray.
Controls
This group of checkboxes manages how the controls in the Inspector are displayed.
– Auto Control Open: When disabled, only the header of the selected node is displayed in the
Inspector. You must double-click the header to display the parameters. When enabled, the
parameters are automatically displayed when the node is selected.
– Auto Control Hide: When enabled, only the parameters for the currently active tool (red outline)
will be made visible. Otherwise, all tool headers will be visible and displayed based on the Auto
Control Open setting.
– Auto Control Close Tools: When enabled, only the active (red outlined) tool in the Node Editor will
have controls displayed. Any previous active node’s tools will be closed in the Inspector. When
disabled, any number of tools may be opened to display parameters at the same time. This setting
has no effect if the Auto Control Hide checkbox is enabled.
– Auto Control Close Modifiers: When enabled, only one modifier’s parameters will be displayed for
the active node. Any additional modifiers for the active node will show only their header.
– Auto Control Advance: If the Auto Control Advance checkbox is enabled, the Tab key and
Return/Enter key will cause the keyboard focus to advance to the next edit box within the
Inspector. When disabled, Return/Enter will cause the value entered to be accepted, but the
keyboard focus will remain in the same edit box of the control. The Tab key can still be used to
advance the keyboard focus.
– Show Controls for Selected: When this option is disabled, only the active tool’s parameters are
shown in the Inspector. By default, it is enabled, showing controls for the active tool as well as all
selected tools.
Video Monitoring
This setting is only available in Fusion Studio. Control over video hardware for the Fusion Page is done
in the DaVinci Resolve preferences. The Video Monitoring preferences are used to configure the
settings of Blackmagic Design capture and playback products such as DeckLink PCIe cards and
UltraStudio I/O units.
Stereo Mode
This group of settings configures the output hardware for displaying stereo 3D content.
– Mono will output a single non-stereo eye.
– Auto will detect the method with which the stereo images are stacked.
– Use the Vstack option if the stereo images are stacked vertically, with left on top and
right at the bottom.
– Use the Hstack option if the stereo images are stacked horizontally, left and right.
The Swap eyes checkbox will swap the eyes if stereo is reversed.
View
The View preferences are used to manage settings and default controls for viewers.
Control Colors
The Control Colors setting allows you to determine the color of the active/inactive onscreen controls.
Fit Margin
The Fit Margin setting determines how much padding is left around the frame when the Fit button is
pressed or Fit is selected from the viewer’s contextual menu.
VR Headsets
The VR Headsets preferences allow configuration of any connected Virtual Reality headsets, including
how stereo and 3D scenes are viewed.
API
– Disabled: Disabled turns off and hides all usage of headsets.
– Auto: Auto will detect which headset is plugged in.
– Oculus: Oculus will set the VR output to the Oculus headset.
– OpenVR: OpenVR will support a number of VR headsets like the HTC Vive.
Stereo
Similar to normal viewer options for stereo 3D comps, these preferences control how a stereo
3D comp is displayed in a VR headset.
Mode
– Mono: Mono will output a single non-stereo eye.
– Auto: Auto will detect the method with which the stereo images are stacked.
– Vstack: Stereo images are stacked vertically, with left on top and right at the bottom.
– Hstack: Stereo images are stacked horizontally, left and right.
– Swap Eyes: Swap eyes will swap the eyes if stereo is reversed.
3D
Similar to normal viewer options for 3D comps, these preferences control how a 3D comp is displayed
in a VR headset.
Lighting
– Disabled: Lighting is off.
– Auto: Will detect if lighting is on in the view.
– On: Will force lighting on in the VR view.
Sort Method
– Z buffer sorting is the fast OpenGL method of sorting polygons.
– Quick Sort will sort the depth of polygons to get better transparency rendering.
– Full Sort will use a robust sort-and-render method to render transparency.
– Shadows can be on or off.
– Show Matte Objects will make matte objects visible or invisible in the view.
Users List
The Users List is a list of the users and their permissions. You can select one of the entries to edit their
settings using the User and Password edit boxes.
– Add: The Add button is used to add a new user to the list by entering a username and password.
– Remove: Click this button to remove the selected entry.
User
This editable field shows the username for the selected Bin Server item. If the username is unknown,
try “Guest” with no password.
Password
Use this field to enter the password for the Bin user entered in the Users list.
Permissions
The administrator can set up different permission types for users.
– Read: This will allow the user to have read-only permission for the bins.
– Create: This will allow the user to create new bins.
– Admin: This gives the user full control over the bins system.
– Modify: This allows the user to modify existing bins.
– Delete: This allows the user to remove bins.
Servers
This dialog lists the servers that are currently in the connection list. You can select one of the entries
to edit its settings.
– Add: Use this button to add a new server to the list.
– Remove: Click this button to remove the selected entry.
Server
This editable field shows the name or IP address of the server for the selected entry in the list.
User
This editable field shows the username for the selected Bin Server item.
Password
Use this field to enter the password for the server entered in the Server list.
Library
The Library field lets you name the bins. If you want to create a bin for individual projects, name it
in the Library field, and each project gets its own bin.
Application
The Application field allows larger studios to specify another program to serve out the
Bin requests.
Stamp Quality
The Stamp Quality is a percentage slider that determines the compression ratio used for Stamp
thumbnail creation. Higher values offer better quality but take up more space.
Stamp Format
This drop-down list determines whether the Stamp thumbnails will be saved as compressed or
uncompressed.
Options
– Open Bins on Startup: When Open Bins on Startup is checked, the bins will open automatically
when Fusion is launched.
– Checker Underlay: When the Checker Underlay is enabled, a checkerboard background is used
for clips with alpha channels. When disabled, a gray background matching the Bin window is used
as the clip’s background.
Flow Format
This drop-down menu provides three options that determine how the node tree is constructed for the
imported EDL file.
– Loader Per Clip: A Loader will be created for each clip in the EDL file.
– A-B Roll: A node tree with a Dissolve tool will be created automatically.
– Loader Per Transition: A Loader with a Clip list will be created, representing the imported EDL list.
Shortcuts Customization
Keyboard shortcuts can be customized in Fusion Studio. You can access the Hotkey Manager by
choosing Customize HotKeys from the View menu.
Fusion uses active windows to focus attention on areas of the interface, like the Node Editor,
the viewers, and the Inspector. When selected, a gray border line outlines that section. The
shortcuts for those sections work only if the region is active. For example, Command-F in the
viewer scales the image to fit the view area; in the Node Editor, Command-F opens the Find tool
dialog; and in the Spline Editor, it fits the splines to the window.
On the right is a hierarchy tree of each section of Fusion and a list of currently set hotkeys. By
choosing New or Edit, another dialog will appear, which will give specific control over that hotkey.
Customizing Preferences
Fusion Studio’s preferences configure Fusion’s overall application default settings and the settings
for each new composition. Although you access and set these preferences through the Preferences
window, Fusion saves them in a simple text file named Fusion.prefs.
These default preferences are located in a \Profiles\Default folder and shared by all Fusion users on
the computer. However, you may want to allow each user to have separate preferences and settings,
and this requires saving the preferences to different locations based on a user login.
To change the saved location of the preferences file requires the use of environment variables.
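For instance, a per-user launch script might point Fusion at a profile folder inside each user's home directory. The variable names below are illustrative assumptions, not confirmed by this manual — check the Fusion Studio documentation for the exact environment variables your version supports:

```shell
# Hypothetical sketch: give each login its own Fusion preferences location.
# FUSION_PROFILE_DIR and FUSION_PROFILE are placeholder names; verify them
# against your Fusion Studio version's documentation before relying on them.
export FUSION_PROFILE_DIR="$HOME/FusionProfiles"   # parent folder holding named profiles
export FUSION_PROFILE="Default"                    # profile subfolder to load at startup
```

Because the variables are read at launch, a wrapper script like this lets several users share one workstation while keeping separate Fusion.prefs files.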
Locking Preferences
If the line “Locked = true,” appears in the main table of a master file, all settings in that file are locked
and override any other preferences. Locked preferences cannot be altered by the user.
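As a sketch of what that looks like (the Global entry here is a hypothetical placeholder), a locked master preferences file carries the Locked entry in its main table:

```lua
-- Hypothetical excerpt of a master Fusion.prefs file. The Locked entry in
-- the main table freezes every setting in this file against user changes.
{
  Locked = true,
  Global = {
    -- site-wide settings enforced for all users would follow here
  },
}
```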
2D Compositing
Chapter 16
Controlling Image
Processing and
Resolution
This chapter covers the overall image-processing pipeline. It discusses color bit-depth
and how to control the output resolution in a resolution-independent environment.
Contents
Fusion’s Place in the DaVinci Resolve Image-Processing Pipeline 382
Source Media into the Fusion Page 382
Forcing Effects into the Fusion Page 382
Output from the Fusion Page to the Color Page 383
What Viewers Show in Different DaVinci Resolve Pages 383
Managing Resolution In Fusion 383
Changing the Resolution of a Clip 384
Compositing with Different-Resolution Clips 384
Sizing Between DaVinci Resolve Pages 385
Color Bit Depths 385
Understanding Integer vs. Float 385
Setting Color Depth in Fusion Studio 386
Combining Images with Different Color Depths 387
Advantages of Floating-Point Processing 388
TIP: The decoding or debayering of RAW files occurs prior to all other operations, and as
such, any RAW adjustments will be displayed correctly in the Fusion page.
This means you have access to the entire source clip in the Fusion page, but the render range is set to
match the duration of the clip in the Timeline. You also use the full resolution of the source clip, even if
the Timeline is set to a lower resolution. However, none of the Edit or Cut page Inspector adjustments
carry over into the Fusion page, with the exception of the Lens Correction adjustment.
When you make Zoom, Position, Crop, or Stabilization changes in the Edit or Cut page, they are not
visible in the Fusion page. The same applies to any Resolve FX or OpenFX third-party plug-ins. If you
add these items to a clip in the Edit or Cut page, and then you open the Fusion page, you won’t see
them taking effect. All Edit and Cut page timeline effects and Inspector adjustments, with the
exception of the Lens Correction adjustment, are computed after the Fusion page but before the Color
page. If you open the Color page, you’ll see the Edit and Cut page transforms and plug-ins applied to
that clip, effectively as an operation before the grading adjustments and effects you apply in the Color
page Node Editor.
With this in mind, the order of effects processing in the different pages of DaVinci Resolve can be
described as follows:
TIP: Retiming applied to the clip in the Edit page Timeline is also not carried over into the
Fusion page.
TIP: The output of the Fusion page is placed back into the Edit page Timeline based on
DaVinci Resolve’s Image Sizing setting. By default, DaVinci Resolve uses an image sizing
setting called Scale to Fit. This means that even if the Fusion page outputs a 4K composition,
it conforms to 1920 x 1080 if that is what the project or a particular Timeline is set to. Changing
the image sizing setting in DaVinci Resolve’s Project Settings affects how Fusion
compositions are integrated into the Edit page Timeline.
TIP: To change resolution and reposition a frame without changing the pixel resolution of a
clip, use the Transform node.
The Background node sets the output size, and the foreground image is cropped if it is larger.
Processing at 32-bit float can work with shadow areas below 0.0 and highlights above 1.0, similar to
16-bit float, but with a much greater range of precision, at the cost of much greater memory and
processing requirements.
If you aren’t sure what color depth a tool is processing at, you can position the pointer over the
node’s tile in the Node Editor, and a tooltip listing the color depth for that node will appear in the
Status bar.
TIP: When working with images that use 10-bit or 12-bit dynamic range or greater, like
Blackmagic RAW or Cinema DNG files, set the Depth menu in the Inspector to 16-bit float or
32-bit float. This preserves highlight detail as you composite.
Greater Accuracy
Using 16- or 32-bit floating-point processing prevents the loss of accuracy that can occur when using
8- or 16-bit integer processing. The main difference is that integer values cannot store fractional or
decimal values, so rounding occurs in all image processing. Floating-point processing allows decimal
or fractional values for each pixel, so it is not required to round off the values of the pixel to the closest
integer. As a result, color precision remains virtually perfect, regardless of how many operations are
applied to an image.
If you have an 8-bit pixel with a red value of 75 (dark red) and that pixel is halved using a Color
Correction tool, the pixel’s red value is now 37.5. Since you cannot store decimal or fractional values in
integers, that value is rounded off to 37. Doubling the brightness of the pixel with another Color
Correction tool should bring back the original value of 75, but because of the earlier rounding,
37 x 2 is only 74. The red value lost a full point of precision to integer rounding in a very simple example. This is a
problem that can result in visible banding over several color corrections. Similar problems arise when
merging images or transforming them. The more operations that are applied to an image, the more
color precision is lost to rounding when using 8- or 16-bit integer processing.
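The 8-bit example above can be reproduced with a few lines of arithmetic. This sketch models integer storage by truncating fractional values, mirroring the halving and doubling described in the text:

```python
# Integer pipeline: fractional values cannot be stored, so every
# operation drops the result to a whole number.
red = 75                          # dark red pixel in an 8-bit image
halved_int = int(red * 0.5)       # 37.5 is stored as 37
doubled_int = halved_int * 2      # 37 x 2 = 74, one full point lost

# Floating-point pipeline: the fractional value survives each step.
halved_float = red * 0.5          # 37.5 stored exactly
doubled_float = halved_float * 2  # back to 75.0, nothing lost

print(doubled_int, doubled_float)  # 74 75.0
```

Each additional integer operation compounds this error, which is why banding appears after several stacked color corrections.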
Use the Show Full Color Range pop-up menu to detect out-of-range images.
Enabling this display mode rescales the color values in the image so that the brightest color in the
image is remapped to a value of 1.0 (white), and the darkest is remapped to 0.0 (black).
The 3D Histogram subview can also help visualize out-of-range colors in an image. For more
information, see Chapter 68, “Using Viewers” in the DaVinci Resolve Reference Manual, or Chapter 7
in the Fusion Reference Manual.
Alternatively, you can clip the range by adding a Change Depth node and switching to 8-bit or 16-bit
integer color depths.
Managing Color
for Visual Effects
This chapter discusses LUTs, color space conversions, and the value of
compositing with linear gamma while previewing the image in the viewer using the
gamma of your choice.
Contents
Color Management 391
All Compositing Is Math 391
Introducing Color Management in Fusion 392
Converting to Linear Gamma 392
Applying LUTs to a Viewer 395
Using Resolve Color Management 397
Using ACES Color Management in Resolve 398
Using OCIO for ACES Color Management in Fusion 399
Applying OCIO LUTs in the Viewer 400
A Rec. 709 HD gamma curve (left) and a nonlinear, or log gamma, curve (right)
The problem is that these images do not look normal on any monitor. Clips recorded with a log gamma
curve typically have a low-contrast, desaturated appearance when viewed on an sRGB computer
display or Rec. 709 HD video monitor. This problem is easy to fix using a lookup table, or LUT. A LUT
is a form of gamma and color correction applied to the viewer to normalize how the image is displayed
on your screen.
A clip displayed with a nonlinear, log gamma curve (left) and corrected in the viewer using a LUT (right)
TIP: 3D rendered CGI images are often generated as EXR files with linear gamma, and
converting them is not necessary. However, you should check your specific files to make sure
they are using linear gamma.
– Gamut node: The Gamut node, found in the Color category of the Effects Library, lets you perform
linear conversions based on color space. This node converts to linear or from linear and is often
inserted after a MediaIn or Loader node or just before a MediaOut or Saver node. Depending
on where you insert the node, you either choose from the Source Space controls or the Output
Space controls.
When converting media to linear gamma, set the Source Space menu to the color space of your
source material. For instance, if your media is full 1080 HD ProRes, then choose ITU-R BT.709
(scene) for gamma of 2.4. Then, enable the Remove Gamma checkbox if it isn’t already enabled, to
use linear gamma.
When converting from linear gamma for output, you insert the Gamut node before your output
node, which is a Saver in Fusion Studio or a MediaOut node in DaVinci Resolve’s Fusion page.
Make sure the Source Space menu is set to No Change, and set the Output Space to your output
color space. For instance, if your desired output is full 1080 HD, then choose either sRGB or ITU-R
BT.709 (scene) for gamma of 2.4. Then, enable the Add Gamma checkbox if it isn’t already
enabled, to format the output of the Gamut node for your final output.
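Conceptually, Remove Gamma and Add Gamma are inverse transfer functions. The sketch below uses a pure 2.4 power law for illustration; the Gamut node's actual ITU-R BT.709 (scene) conversion may use the precise piecewise curve, so treat this as an approximation rather than the node's exact math:

```python
def remove_gamma(value, gamma=2.4):
    # Display-referred value -> linear light (what Remove Gamma does conceptually).
    return value ** gamma

def add_gamma(value, gamma=2.4):
    # Linear light -> display-referred value (what Add Gamma does conceptually).
    return value ** (1.0 / gamma)

display_mid_gray = 0.5
linear = remove_gamma(display_mid_gray)   # ~0.189: linear light is darker than it appears
restored = add_gamma(linear)              # round-trips back to 0.5
print(round(linear, 3), round(restored, 3))  # 0.189 0.5
```

The round trip is lossless, which is why you can composite in linear gamma and restore a display gamma only at output.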
– MediaIn and Loader nodes: MediaIn and Loader nodes have Source Gamma Space controls in
the Inspector that let you identify and remove the gamma curve without the need to add another
node. If your files include gamma curve metadata like RAW files, the Auto setting for the Curve
Type drop-down menu reads the metadata and uses it when removing the gamma curve. When
using intermediate files or files that do not include gamma curve metadata, you can choose either
a log gamma curve by choosing Log from the Curve Type menu or a specific color space using
the Space option from the menu. Clicking the Remove Curve checkbox then removes the gamma
curve, converting the image to linear gamma.
– FileLUT node: The FileLUT node, found in the LUT category of the Effects Library, lets you do a
conversion using any LUT you want, giving you the option to manually load LUTs in the ALUT3,
ITX, 3DL, or CUBE format to perform a gamma and gamut conversion. Although LUTs are very
commonly placed at the end of a node tree for final rendering, you’ll get more accurate gamma
and color space conversions using the Gamut and CineonLog nodes to transform your MediaIn
and Loader nodes into linear.
A clip displayed with a nonlinear, log gamma curve (left) and the clip transformed to linear gamma (right)
It would be impossible to work if you couldn’t view the image as it’s supposed to appear within the
final gamut and gamma you’ll be outputting. For this reason, each viewer has a LUT menu that lets you
enable a “preview” color space and/or gamma conversion, while the node tree is processing correctly
in linear gamma.
To preview the images in the viewer using sRGB or Rec. 709 color space:
1 Enable the LUT button above the viewer.
2 From the Viewer LUT drop-down menu, choose either a Gamut View LUT, or a LUT from the VFX
IO category that transforms linear to Rec. 709 or sRGB.
3 If you choose the Gamut View LUT, then choose Edit from the bottom of the LUT menu to
configure the LUT.
4 In the LUT Editor, set the Output Space to the target color space you want.
5 Enable the Add Gamma checkbox to apply the gamma curve based on the selected color space.
To override the input color space for differently recorded clips in the Media Pool:
1 Enable DaVinci YRGB Color Management as explained above.
2 Save and close the Settings dialog.
3 In the Media Pool, select the clip or clips you want to assign a new Input Color space.
4 Right-click one of the selected clips.
5 Choose the Input Color Space that corresponds to those clips from the contextual menu.
NOTE: When using Fusion Studio, the OpenColorIO (OCIO) framework is used for ACES color
management.
The Color Science drop-down menu in the Color Management panel of the Project Settings is used to
set up the ACES color management in DaVinci Resolve.
When ACES is enabled, IDT and ODT are used to identify input and output devices.
– ACES Version: When you’ve chosen one of the ACES color science options, this menu becomes
available to let you choose which version of ACES you want to use. As of DaVinci Resolve 16, you
can choose either ACES 1.0.3 or ACES 1.1 (the latest version).
– ACES Input Device Transform: This menu lets you choose which IDT (Input Device Transform) to
use for the dominant media format in use.
– ACES Output Device Transform: This menu lets you choose an ODT (Output Device Transform)
with which to transform the image data to match your required deliverable.
– Process Node LUTs In: This menu lets you choose how you want to process LUTs in the Color
page and does not affect the Fusion page.
For more information on ACES within DaVinci Resolve, see Chapter 8, “Data Levels, Color Management
and ACES” in the DaVinci Resolve Reference Manual.
Using OCIO for converting MediaIn or Loader nodes to linear gamma is based on the OCIO Color
Space node. Placing the OCIO Color Space node directly after a Loader (or MediaIn in
DaVinci Resolve) displays the OCIO Source and Output controls in the Inspector.
The Source menu is used to choose the color profile for your Loader or MediaIn node. The default raw
setting shows an unaltered image, essentially applying no color management to the clip. The selection
you make from the menu is based on the recording profile of your media.
The Output menu is set based on your deliverables. When working in Fusion Studio, typically the
Output selected is ACEScg, to work in a scene linear space.
By default, the same standard options are available in the View LUT. However, clicking the Browse
button allows you to load the same config file you loaded into the OCIO Color Space node. Once
loaded, all the expanded OCIO options are available. If you selected the OCIO Color Space node to
output ACEScg, you use the OCIO View LUT to go from a source setting of linear sRGB to an output
setting of sRGB or Rec. 709 in most cases.
TIP: If your monitor is calibrated differently, you will need to select a LUT that matches your
calibration.
Whether you use the OCIO Color Space LUT or a LUT for your specific monitor calibration, you can
save the viewer setup as the default.
To save the OCIO ColorSpace LUT setup as the default viewer setup:
– Right-click in the viewer, and then choose Settings > Save Defaults. Now, for every comp, the
viewer is preconfigured based on the saved defaults.
Understanding
Image Channels
This chapter seeks to demystify how Fusion handles image channels and, in the
process, show you how different nodes need to be connected to get the results you
expect. It also explains the mysteries of premultiplication, and presents a full
explanation of how Fusion is capable of using and even generating auxiliary data.
Contents
Channels in Fusion 403
Types of Channels Supported by Fusion 403
Fusion Node Connections Carry Multiple Channels 404
Node Inputs and Outputs 405
Node Colors Tell You Which Nodes Go Together 408
Using Channels in a Composition 409
Channel Limiting 410
Adding Alpha Channels 411
How Channels Propagate During Compositing 412
Rearranging or Combining Channels 413
Understanding Premultiplication 413
The Rules of Premultiplication 415
Alpha Channel Status in MediaIn and Loader Nodes 417
Controlling Premultiplication in Color Correction Nodes 417
Controlling Premultiplication With Alpha Divide and Alpha Multiply 418
Multi Channel Compositing 418
Compositing with Beauty Passes 419
Working with Auxiliary Channels 424
Channels in Fusion
Fusion introduces some innovative ways of working with the many different channels of image data
that modern compositing workflows encompass. This chapter’s introduction to color and data
channels and how they’re affected by different nodes and operations is a valuable way to begin the
process of learning to do paint, compositing, and effects in Fusion.
If you’re new to compositing, or you’re new to the Fusion workflow, you ignore this chapter at your
peril, as it provides a solid foundation to understanding how to predictably control image data as you
work in this powerful environment.
Alpha Channels
An alpha channel is an embedded fourth channel that defines different levels of transparency in an
RGB image. Alpha channels are typically embedded in RGB images that are generated from computer
graphics applications. In Fusion, white denotes solid areas, while black denotes transparent areas.
Grayscale values range from more opaque (lighter) to more transparent (darker).
If you’re working with an imported alpha channel from another application for which these conventions
are reversed, never fear. Every node capable of using an alpha channel is also capable of inverting it.
Single-Channel Masks
While similar to alpha channels, mask channels are single channel images, external to any RGB image
and typically created by Fusion within one of the available Mask nodes. Mask nodes are unique in that
they propagate single-channel image data that defines which areas of an image should be solid and
which should be transparent. However, masks can also define which parts of an image should be
affected by a particular operation, and which should not. Mask channels are designed to be
connected to specific mask inputs of nodes including Effect Mask, Garbage Mask, and Solid
Mask inputs.
TIP: You can view any of a node’s channels in isolation using the Color drop-down menu in
the viewer. Clicking the Color drop-down menu reveals a list of all channels within the
currently selected node, including red, green, blue, or auxiliary channels.
NOTE: Node trees shown in this chapter may display MediaIn nodes found in
DaVinci Resolve’s Fusion page; however, Fusion Studio Loader nodes are interchangeable
unless otherwise noted.
Running multiple channels through single connection lines makes Fusion node trees simple to read,
but it also means you need to keep track of which nodes process which channels to make sure that
you’re directing the intended image data to the correct operations.
MediaIn node connected to a Highlight node connected to MediaOut node in the Fusion page.
When connecting nodes, a node’s output carries the same channels no matter how many times the
output is “branched.” You cannot send one channel out on one branch and a different channel out on
another branch.
The MediaIn node’s output is branched but carries the same RGB channels to both inputs.
In another example, the DeltaKeyer node has a primary input (labeled “Input”) that accepts RGBA
channels, but it also has three Matte inputs. These SolidMatte, GarbageMatte, and EffectsMask inputs
on the Delta Keyer accept alpha or mask channels to modify the matte being extracted from the image
in different ways.
If you position your pointer over any node’s input or output, a tooltip appears in the Tooltip bar at the
bottom of the Fusion window, letting you know what that input or output is for, to help guide you to
using the right input for the job. If you pause for a moment longer, another tooltip appears in the Node
Editor itself.
(Left) The node input’s tooltip in the Tooltip bar, (Right) The node tooltip in the Node Editor
Side by side, dropping a connection on a node’s body to connect to that node’s primary input
Side by side, dropping a connection on a specific node input, note how the inputs
rearrange themselves afterwards to keep the node tree tidy-looking
TIP: If you hold the Option key down while you drag a connection line from one node onto
another, and you keep the Option key held down while you release the pointer’s button to
drop the connection, a menu appears that lets you choose which specific input you want to
connect to, by name.
TIP: This chapter tries to cover many of the easy-to-miss exceptions to node connection that
are important for you to know, so don’t skim too fast.
Additionally, some 2D nodes such as Fog and Depth Blur (in the Deep Pixel category) accept and use
auxiliary channels such as Z-Depth to create different perspective effects in 2D.
TIP: Two 2D nodes that specifically don’t process alpha channel data are the Color Corrector
node and the Gamut node. The Color Corrector node lets you color correct a foreground
layer to match a background layer without affecting an alpha channel. The Gamut node lets
you perform color space conversions to RGB data from one gamut to another without
affecting the alpha channel.
(Left) The Normal Z channel output by a rendered torus, (Right) The Normal Z channel after the output is
connected to a Vortex node; note how this auxiliary channel warps along with the RGB and A channels
This is appropriate because in most cases, you want to make sure that all channels are transformed,
warped, or adjusted together. You wouldn’t want to shrink the image without also shrinking the alpha
channel along with it, and the same is true for most other operations.
On the other hand, some nodes deliberately ignore specific channels when it makes sense. For
example, the Color Corrector and Gamut nodes, both of which are designed to alter RGB data
specifically, do not affect auxiliary channels. This makes them convenient for color-matching
foreground and background layers you’re compositing, without worrying that you’re altering the depth
information accompanying that layer.
TIP: If you’re doing something exotic and you actually want to operate on a channel that’s
usually unaffected by a particular node, you can always use the Channel Booleans node to
reassign the channel. When doing this to a single image, it’s important to connect that image
to the background input of the Channel Booleans node, so the alpha and auxiliary channels
are properly handled.
Channel Limiting
Most nodes have a set of Red, Green, Blue, and Alpha buttons in the Settings tab of that node’s
Inspector. These buttons let you exclude any combination of these channels from being affected by
that node.
For example, if you wanted to use the Transform node to affect only the green channel of an image,
you can turn off the Green, Blue, and Alpha buttons. As a result, the green channel is processed by
this operation, and the red, blue, and alpha channels are copied straight from the node’s input to the
node’s output, skipping that node’s processing to remain unaffected.
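The behavior described above can be sketched as a per-pixel operation. In this hypothetical Python sketch (the function name and pixel representation are illustrative, not Fusion’s API), disabled channels bypass the effect entirely and are copied straight through:

```python
def apply_with_channel_limits(pixel, effect, enabled):
    """Apply `effect` only to the channels enabled in a node's
    Settings tab; disabled channels pass from input to output
    unchanged. `pixel` is a dict of channel name -> value."""
    processed = effect(pixel)
    return {
        ch: (processed[ch] if enabled.get(ch, False) else pixel[ch])
        for ch in pixel
    }
```

For example, running a brightening effect with only the Green button enabled changes the green channel while red, blue, and alpha remain exactly as they entered the node.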
A simple node tree for keying; note that only one connection links the DeltaKeyer to the Merge node
Rotoscoping, or manually drawing a mask shape using a Polygon or other Mask node is another
technique used to create the matte channel. There are many ways to configure the node tree for this
task, but the simplest setup is just to connect a Polygon or B-Spline mask node to the Effect Mask
input of a MediaIn or Loader node.
In both cases, you can see how the node tree’s ability to carry a single channel or multiple channels of
image data over a single connection line simplifies the compositing process.
Auxiliary channels, on the other hand, are handled in a much more specific way. When you composite
two image layers using the Merge node, auxiliary channels only propagate through the image that’s
connected to the background input. The rationale for this is that in most CGI composites, the
background is most often the CG layer that contains auxiliary channels, and the foreground is a
live-action green screen plate.
Since most compositions use multiple Merge nodes, it pays to be careful about how you connect the
background and foreground inputs of each Merge node to make sure that the correct channels
flow properly.
Understanding Premultiplication
Now that you understand how to direct and recombine RGB images and alpha channels in Fusion, it’s
time to go more deeply into alpha channels to make sure you always combine RGB and alpha
channels correctly for each operation you perform in your composite. This might seem simple, but
small mistakes are easy to make and can result in unsightly artifacts. This is arguably one of the most
confusing areas of visual effects compositing, so don’t skip this section.
When alpha channel and RGB pixels are both contained within a media file, such as a 3D rendered
animation that contains RGB and transparency, or a motion graphics movie file with transparency
baked in, there are two different ways they might be combined, and it’s important to know which
is in use.
– Unpremultiplied (Straight): An RGB image unaltered by the semi-transparency information in a
fourth channel (the alpha channel).
– Premultiplied: An RGB image that has each channel multiplied by its alpha
channel before compositing.
Non-premultiplied images, sometimes called “straight” alpha channels, have RGB channels that are
unaltered (not multiplied) by the alpha channel. The result is that the RGB image has no anti-aliased
edges and no semi-transparency. It’s usually obvious where the RGB image ends and the alpha matte
begins. The image below is an example of the ragged edges seen in the RGB channels when using a
non-premultiplied alpha channel. But notice the smooth semi-transparent edges found in the alpha.
A detailed view of a non-premultiplied RGB image (left) and its alpha channel (right)
A premultiplied alpha channel means the RGB pixels are multiplied by the alpha channel. This method
guarantees that the RGB image pixels include semi-transparency where needed, like anti-aliased
edges. Most computer-generated images are premultiplied for convenience, because they’re easier to
review smoothly without actually being placed inside of a composite.
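The two conventions can be summarized in a couple of lines. This is a minimal Python sketch of the underlying math only, not Fusion-specific code:

```python
def premultiply(r, g, b, a):
    """Premultiplied alpha: RGB is scaled by alpha before compositing,
    so semi-transparent edge pixels are already darkened."""
    return (r * a, g * a, b * a, a)

def unpremultiply(r, g, b, a):
    """Recover a straight (non-premultiplied) image. Fully transparent
    pixels carry no color information, so they stay black."""
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)
```

A white pixel at 50% alpha, for instance, stores (0.5, 0.5, 0.5) in a premultiplied image but (1.0, 1.0, 1.0) in a straight one; the alpha channel is identical in both cases.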
A detailed view of a premultiplied image (left) and its alpha channel (right)
Nonpremultiplied edges in an additive Merge (left) and premultiplied edges in an additive Merge (right)
TIP: When an RGB image and a Mask node are combined using, for instance, a Matte Control
node, if the RGB image is not multiplied by the mask in the Matte control, the checkerboard
background pattern in the viewer will appear only semi-transparent when it should be fully
transparent.
A node tree with explicit Alpha Divide and Alpha Multiply nodes
TIP: The EXR format in Fusion is optimized when multiple Loaders are used to read the same
EXR file. The file is only loaded once to access all the channels.
TIP: It is wise to rename each Loader or MediaIn to represent the beauty pass it contains.
The MediaIn’s Image tab includes a Layer menu. Any pass included in a multi-part EXR image
sequence can be selected from this menu and automatically assigned to the RGBA channels.
In most cases, the menu shows the combined channel passes, meaning the individual red, green, blue,
and alpha channels cannot be selected. Because the alpha channel is not included in many beauty
passes, you sometimes need to borrow the alpha channel from a different beauty pass. For this
reason, it’s often better to use the Channels tab for mapping the individual channels of a beauty pass
to the channels of the MediaIn node.
The MediaIn node in DaVinci Resolve and the Format tab in a Loader node include the same channel
mapping functionality. The Channels tab and the Format tab include individual RGBA menus at the top
of the tab. You can use these menus to map the RGBA channels from any pass contained in the
multi-part EXR. For instance, if you want to map the Ambient Occlusion pass to the RGB channels,
choose AO.R (Red) from the Red channel menu, AO.G (Green) from the Green channel menu, and
AO.B (Blue) from the Blue channel menu.
The Ambient Occlusion beauty pass does not include an alpha channel. To composite it, you can
reuse the alpha channel pass from another beauty pass. In the image below, the alpha channel is
mapped using the combined render pass’ alpha channel.
TIP: When using the Format tab in the Loader node, the checkbox next to each channel
needs to be turned on for the corresponding channel to become available in the
node’s output.
If you prefer, you can use a Channel Booleans node to make the same composite. In this case, there
is no technical difference between the two nodes.
One of the exceptions to the steps above is Shadow passes, such as Ambient Occlusion. In that
case, a Multiply Apply mode is usually employed.
To add an alpha channel into your assembled beauty pass composite, do the following:
1 Connect the last Merge or Channel Booleans output into the background input of a Matte
Control node.
2 Connect the render pass that contains the alpha into the green Foreground input of the Matte
Control node.
3 In the Matte Control’s Inspector, choose Combine Alpha from the Combine menu.
4 Choose Copy from the Combine Op menu.
An alpha channel from the color pass added back into the completed beauty pass node tree
TIP: Alpha channels from 3D renderings are typically premultiplied. That being the case, be
sure to turn on the Pre Divide/Post Multiply checkbox on any node that performs color
correction. If using more than one node in a row to perform color correction, use the Alpha
Divide and Alpha Mult nodes instead.
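The reason for the Pre Divide/Post Multiply checkbox can be sketched in a few lines of illustrative Python (the function name is hypothetical; this is the concept, not Fusion’s implementation):

```python
def color_correct_premultiplied(rgba, correct):
    """Pre-divide, color correct the straight RGB, then post-multiply.
    Correcting premultiplied pixels directly would also scale the
    alpha-darkened edge pixels, producing fringing artifacts."""
    r, g, b, a = rgba
    if a > 0:
        r, g, b = r / a, g / a, b / a      # pre-divide (Alpha Divide)
    r, g, b = correct(r, g, b)             # operate on straight RGB
    return (r * a, g * a, b * a, a)        # post-multiply (Alpha Multiply)
```

Applying a half-gain correction to a white pixel at 50% alpha this way yields (0.25, 0.25, 0.25, 0.5), which is the correct premultiplied result; correcting the premultiplied values directly would double-darken the semi-transparent edges.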
TIP: When trying to locate information about auxiliary channels in other software, some 3D
applications refer to auxiliary channels as Arbitrary Output Variables (AOVs), render elements,
or secondaries.
TIP: The Color Inspector SubView can be used to read numerical values from all of
the channels.
Z-Depth
Each pixel in a Z-Depth channel contains a value that represents the relative depth of that pixel in the
scene. When two overlapping objects are present within the same pixel, most 3D applications write
the depth value of the object closest to the camera, since the closer object typically obscures the
farther one.
When present, Z-Depth can be used to perform depth merging using the Merge node or to control
simulated depth-of-field blurring using the Depth Blur node.
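Conceptually, a depth merge is just a per-pixel nearness test. The following Python sketch is illustrative only, and assumes, as with many Z-Depth passes, that a smaller value means closer to camera (the convention varies between renderers):

```python
def depth_merge(fg, bg):
    """Per-pixel depth merge: keep whichever sample is closer to
    camera. Each argument is a (rgb, z) pair for one pixel, with
    smaller z assumed to be nearer."""
    (fg_rgb, fg_z), (bg_rgb, bg_z) = fg, bg
    return (fg_rgb, fg_z) if fg_z <= bg_z else (bg_rgb, bg_z)
```

Because the test happens independently at every pixel, two rendered elements can correctly interleave in depth without any manual masking.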
For this example, we’ll examine the case where the Z-Depth channel is provided as a separate file.
The Z channel can often be rendered as an RGB image. You’ll need to combine the beauty and Z passes
using a Channel Booleans node. When the Z pass is rendered as an image in the RGB channels, the
Channel Booleans node is used to shuffle the Lightness of the foreground RGB channels into the
Z channel.
The Depth Blur node is one of the nodes that take advantage of a Z-channel in order to create blurry
depth-of-field simulations. To set this up, the output of the MediaIn node connects to the background
input on the Depth Blur.
The Depth Blur uses the Z-channel that is enabled in the Channel Booleans node.
Once you see these experimental results, you can return to each parameter and refine it as needed to
achieve the actual look you want.
TIP: Z-Depth channels often contain negative values. If this causes problems, you can choose
Normalize Color Range from the viewer’s Options menu to apply a normalization to the
viewer, keeping the image within a range from 0 to 1.
TIP: The widely adopted open-source matte-creation technology Cryptomatte has largely
superseded mattes created from Coverage, Background, Object ID, and Material
ID passes.
Background RGBA
This channel is a largely obsolete render pass in most 3D applications. It contains the color values
of the objects behind the pixels described in the Z coverage.
Object ID
Most 3D applications are capable of assigning ID values to objects in a scene. Each pixel in the Object
ID channel will be identified by that ID number, allowing for the creation of masks.
If you want to use an Object ID in a comp then, as with all auxiliary channels, you must map the
Object ID pass to the Object ID channel in the MediaIn or Loader node.
Material ID
Most 3D applications are capable of assigning ID values to materials in a scene. Each pixel in the
Material ID channel will be identified by that ID number, allowing for masks based on materials.
You can set up Material IDs using the Settings tab, similar to how Object IDs are set.
Texture (left) applied to a 2D image (right) using UV channels and a Texture node.
UV channels from a MediaIn node used in a Texture node and merged over the original image
TIP: If you are using a separate UV render pass with the UV data in the RGB channels, map
red to U and green to V in a Channel Booleans node.
X, Y, and Z Normals
The X, Y, and Z Normal channels contain information about each pixel’s orientation (the direction it
faces) in 3D space. The normals are often displayed as lines coming out from your object
perpendicular to the surface, letting you visualize the relationship between the surface and camera.
The Normals X, Y, and Z channels are often used with a Shader node to perform relighting
adjustments on a 2D rendered image.
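As a rough illustration of how such relighting works, the following Python sketch computes simple Lambert shading from a pixel’s normal and a light direction (illustrative only; Fusion’s Shader node offers far more control than this):

```python
def relight(normal, light_dir, light_color=(1.0, 1.0, 1.0)):
    """Lambert relighting from X/Y/Z Normal channels: pixel
    intensity is max(0, N . L) with both vectors normalized."""
    def normalize(v):
        length = sum(c * c for c in v) ** 0.5
        return tuple(c / length for c in v) if length else v
    n, l = normalize(normal), normalize(light_dir)
    lam = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(lam * c for c in light_color)
```

A surface facing the light receives full intensity, a surface facing away receives none, and angles in between fall off with the cosine of the angle between the two vectors.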
XY Vector pass (left) used with Vector Motion Blur to generate motion blur on spaceship (right)
The Vector render pass is combined with the beauty image using the Channel
Booleans node, which then feeds the Vector Motion Blur node.
World Position
World Position Pass (WPP) is an auxiliary channel, sometimes referred to as Point Position, XYZ pass, or
WPP. It’s used to represent each pixel’s 3D (XYZ) position as an RGB color value. The result is data that
can be viewed as a very colorful RGB image. Like Z-Depth, this can be used for compositing via depth.
However, it can also be used for masking based on 3D position, regardless of camera transforms.
The colors correspond to a pixel’s position in 3D, so if a pixel sits at 0/0/0 in a 3D scene, the resulting
pixel has an RGB value of 0/0/0, or black. If a pixel sits at 1/0/0 in the 3D scene, the resulting
pixel is fully red. Due to the huge extent 3D scenes can have, the WPP channel should always be
rendered in 32-bit floating point to provide the accuracy needed.
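Masking from a WPP can be sketched as a distance test in world space. This hypothetical Python function builds a soft spherical mask around a 3D point, which works regardless of camera position because the channel stores world coordinates:

```python
def wpp_mask(position, center, radius):
    """Soft spherical mask from a World Position Pass pixel:
    1.0 at `center`, falling off linearly to 0.0 at `radius`.
    `position` is the pixel's XYZ value from the WPP channel."""
    dist = sum((p - c) ** 2 for p, c in zip(position, center)) ** 0.5
    return max(0.0, 1.0 - dist / radius)
```

Such a mask can isolate, say, one corner of a CG set for color correction even as the camera moves, something a 2D rotoscoped mask would need to track frame by frame.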
XYZ Position
TIP: The Object ID and Material ID auxiliary channels can be used by some tools in Fusion to
generate a mask. The “Use Object” and “Use Material” settings used to accomplish this are
found in the Settings tab of that node’s controls in the Inspector.
Compositing Layers
in Fusion
This chapter is intended to give you a solid base for making the transition from a
layer-based compositing application to Fusion’s node-based interface.
It provides practical information about how to start structuring a node tree for simple
layered composites.
Contents
Applying Effects 437
Adding a Node to the Tree 437
Editing Parameters in the Inspector 437
Replacing Nodes 438
Adjusting Fusion Sliders 439
Compositing Two Clips Together 439
Adding Additional Media to Compositions 439
Automatically Creating Merge Nodes 440
Fixing Problem Edges in a Composite 441
Using Composite Modes in the Merge Node 442
Creating and Using Text 443
Creating Text Using the Text+ Node 443
Styling and Adjusting Text 444
Using Text as a Mask 445
Using Transform Controls in the Merge Node 448
Building a Simple Green-Screen Composite 449
Mapping Timeline Layers to Nodes in Fusion 449
Pulling a Green-Screen Key Using the Delta Keyer 451
Dealing with Spill 454
Masking a Graphic 455
In Fusion Studio, you must press the 1 or 2 key on the keyboard to load the selected node in
the viewer.
There are many other ways of adding nodes to your node tree, but it’s good to know how to browse
the Effects Library as you get started.
Clicking the last panel on any node opens the Settings panel. Every node has a Settings panel, and
this is where the parameters that every node shares, such as the Blend slider and RGBA buttons, are
found. These let you choose which image channels are affected, and let you blend between the effect
and the original image.
In the case of the TV effect, for example, the resulting image has a lot of transparency because the
scan lines being added are also being added to the alpha channel, creating alternating lines of
transparency. Turning off the Alpha checkbox results in a more solid image, while opening the Controls
tab (the first tab) and dragging the Scan Lines slider to the right to raise its value to 4 creates a more
visible television effect.
The original TV effect (left), and modifications to the TV effect to make the clip more solid (right)
Replacing Nodes
In the Effect category of the Effects Library, you’ll also find a Highlight node that adds glints to the
highlights of an image.
In our example, the Highlight1 node takes the TV node’s place in the node tree, and the new effect can
be seen in the viewer, which in this example consists of star highlights over the lights in the image.
It’s time to use the Inspector controls to customize this effect.
Each slider is limited to a different range of minimum and maximum values that is particular to the
parameter you’re adjusting. In this case, the Number of Points slider maxes out at 24. However, you
can remap the range of many (but not all) sliders by entering a larger value in the number field to the
right of that slider. Doing so immediately repositions the slider’s controls to the left as the slider’s range
increases to accommodate the value you just entered.
Entering a larger value to expand the range over which a slider will operate
Dragging a node from the Media Pool onto a connection (left), and dropping it to create a Merge node composite (right)
The Node Editor is filled with shortcuts like this to help you build your compositions more quickly.
Here’s one for when you have a disconnected node that you want to composite against another node
with a Merge node. Drag a connection from the output of the node you want to be the foreground
layer, and drop it on top of the output of the node you want to be the background layer. A Merge node
will be automatically created to build that composite. Remember: background inputs are orange, and
foreground inputs are green.
Click to select the Merge node for that particular composite, and look for the Subtractive/
Additive slider.
Drag the slider all the way to the left, to the Subtractive position, and the fringing disappears.
A clip with alpha exhibits fringing (left), and after fixing fringing by
dragging the Subtractive/Additive slider to the left (right)
The Subtractive/Additive slider lets you blend between two versions of the merge operation, one
Additive and the other Subtractive, to find the best combination for the needs of your particular
composite. Blending between the two is occasionally useful for dealing with problem composites that
have edges that are calling attention to themselves as either too bright or too dark.
For example, using Subtractive merging on a premultiplied image may result in darker edges, whereas
using Additive merging with a non-premultiplied image will cause any non-black area outside the
foreground’s alpha to be added to the result, thereby lightening the edges. By blending between
Additive and Subtractive, you can tweak the edge brightness to be just right for your situation.
The Screen node is perfect for simulating reflections, and lowering Blend a bit lets you balance the
foreground and background images. It’s subtle, but helps sell the shot.
TIP: You may have noticed that the Merge node also has a set of Flip, Center, Size,
and Angle controls that you can use to transform the foreground image without needing to
add a dedicated Transform node. It’s a nice shortcut for simplifying node trees large
and small.
If you’re viewing the Merge, the text appears in the viewer superimposed against the background clip.
Onscreen controls appear that let you rotate (the circle) and reposition (the red center handle and two
arrows) the text, and we can see a faint cursor that lets us edit and kern the text using other tools in
the viewer toolbar.
TIP: Holding down the Command key while dragging any control in the Inspector “gears
down” the adjustment so that you can make smaller and more gradual adjustments.
Clicking a red dot under a particular letter puts a kerning highlight over that letter.
To make manual kerning adjustments:
1 Option-drag the red dot under any letter of text to adjust that character’s kerning while
constraining letter movement to the left and right. You can also drag letters up and down for other
effects. Depending on your system, the kerning of the letter you’re adjusting might not update until
you drop the red dot in place.
2 If you don’t like what you’ve done, you can open the Advanced Controls in the Inspector and clear
either the kerning of selected letters or all manual kerning before starting over again.
Connecting a MediaIn2 or Loader2 node onto the Merge1 node’s foreground input causes the entire
viewer to be filled with the MediaIn2 (assuming we’re still viewing the Merge node). At this point, we
need to insert the Text1 node’s image as an alpha channel into the MediaIn2 node’s connection, and
we can do that using a MatteControl node.
With this done, connecting the Text+ node’s output, which has the alpha channel, to the MatteControl
node’s Garbage Matte input, is a shortcut we can use to make a mask, matte, or alpha punch out a
region of transparency in an image.
Keep in mind that it’s easy to accidentally connect to the wrong input. Because inputs rearrange
themselves depending on what’s connected and where the node is positioned (and, frankly, the colors
can be hard to keep track of when you’re first learning), it’s key to make sure that you always check the
tooltips associated with the input you’re dragging a connection over to make sure that you’re really
connecting to the correct one. If you don’t, the effect won’t work, and if your effect isn’t working, the
first thing you should always check is whether you’ve connected the proper inputs.
One alternate method of connecting nodes together is to hold down the Option key while dragging a
connection from one node’s output and dropping it onto the body of another node. This opens a
pop-up menu from which you can choose the specific input you want to connect to, by name. Note
that the menu only appears after you’ve dropped the connection on the node and released your
pointing device’s button.
Once the Text1 node is properly connected to the MatteControl node’s Garbage Matte input, a text-
shaped area of transparency is displayed for the graphic if you load the MatteControl node into
the viewer.
NOTE: When connecting two images of different sizes to a Merge node, the resolution of the
background image defines the output resolution of that node. Keep that in mind when you run
into resolution issues.
Background on video track 1 (top left), green-screen clip on video track 2 (bottom),
and graphic file on video track 3 (top right)
Implied in a timeline-based system is that higher numbered video tracks appear as the more forward,
or frontmost, element in the viewer. Video track 1 is the background to all other video tracks. Video
track 3 is in the foreground to both video track 1 and video track 2.
A stack of clips to use in a composite (top), and turning that stack into a
Fusion clip in DaVinci Resolve’s Edit page (bottom)
In Fusion, each video clip is represented by a MediaIn in the Fusion page or a Loader in Fusion Studio.
In our example below, the MediaIn2 is video track 2, and MediaIn 1 is video track 1. These two
elements are composited using a Merge node (foreground over background, respectively). The
composite of those two elements becomes the output of the first Merge node, which becomes the
background to a second Merge. There is no loss of quality or precomposing when you chain Merges
together. MediaIn3 represents video track 3 and is the final foreground in the node tree since it is the
topmost layer.
The initial node tree of the three clips we turned into a Fusion clip
The DeltaKeyer node is the main tool used for green-screen keying. It attaches to the output of the
node that represents the green screen—in our example, that is the MediaIn2 node. With the MediaIn2
selected, pressing Shift-Space opens the Select Tool dialog where you can search for and insert any
node. Below we have added the DeltaKeyer after the MediaIn2 node but prior to being merged with
the background.
The DeltaKeyer node is a sophisticated keyer that is capable of impressive results by combining
different kinds of mattes and a clean-plate layer, but it can also be used very simply if the background
that needs to be keyed is well lit. And once the DeltaKeyer creates a key, it embeds the resulting alpha
channel in its output, so in this simple case, it’s the only node we need to add. It’s also worth noting
that, although we’re using the DeltaKeyer to key a green screen, it’s not limited to keying green or blue
only; the DeltaKeyer can create impressive keys on any color in your image.
With the DeltaKeyer selected, we’ll use the Inspector controls to pull our key by quickly sampling the
shade of green from the background of the image. To sample the green-screen color, drag the
Eyedropper from the Inspector over the screen color in the viewer.
As you drag in the viewer, an analysis of the color picked up by the location of the Eyedropper
appears within a floating tooltip, giving some guidance as to which color you’re really picking.
Meanwhile, if viewing the Merge in a second viewer, we get an immediate preview of the transparency
and the image we’ve connected to the background.
The original image (left), and after sampling the green screen using the Eyedropper from the Inspector (right)
When we’re happy with the preview, releasing the pointer button samples the color, and the Inspector
controls update to display the value we’ve chosen.
No matter how good the composite may look, once you’ve selected the screen color to pull a key, you
need to load the DeltaKeyer node into the viewer itself. This allows you to evaluate the quality or
density of the alpha channel created by the key. Above the viewer, click the Color button in the viewer
toolbar, or click in the viewer and press C to switch the viewer between the RGB color channels of the
image and the alpha channel.
Black in a matte represents the transparent areas, while white represents the opaque areas. Gray
areas represent semi-transparency. Unless you are dealing with glass, smoke, or fog, most mattes
should be pure white and pure black with no gray areas. If a close examination of the alpha
channel reveals some fringing in the white foreground of the mask, the DeltaKeyer has integrated
controls for post-processing of the key and refining the matte. Following is a quick checklist of the
primary adjustments to make.
After making the screen selection with the Eyedropper, try the following adjustments to
improve the key.
– Adjust the Gain slider to boost the screen color, making it more transparent. This can adversely
affect the foreground transparency, so adjust with care.
– Adjust the Balance slider to tint the foreground between the two non-screen colors. For a green
screen, this pushes the foreground more toward red or blue, shifting the transparency in the
foreground.
Clicking the third of the seven tabs of controls in the DeltaKeyer Inspector opens up a variety of
controls for manipulating the matte.
Initial adjustments in the matte tab may include the following parameters:
– Adjust the lower and upper thresholds to increase the density in black and white areas.
– Very subtly adjust the Clean Foreground and Clean Background sliders to fill small holes in the
black and white matte. The more you increase these parameters, the more harsh the edges of
your matte become.
The original key (left), and the key after using the Clean Foreground slider (right)
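One way to think about the threshold adjustments above is as a linear remap of matte values: everything at or below the lower threshold is crushed to black, everything at or above the upper threshold is pushed to white, and values in between are stretched across the full range. The sketch below assumes that simple linear model, which is an illustration rather than the DeltaKeyer's exact math.

```python
def apply_thresholds(alpha, low, high):
    """Remap a matte value so the [low, high] range stretches to [0, 1]."""
    if alpha <= low:
        return 0.0          # crush near-black noise to pure black
    if alpha >= high:
        return 1.0          # fill near-white fringing to pure white
    return (alpha - low) / (high - low)

assert apply_thresholds(0.05, 0.1, 0.9) == 0.0   # background noise removed
assert apply_thresholds(0.95, 0.1, 0.9) == 1.0   # foreground fringe filled
# Mid-gray semi-transparency is preserved, just rescaled.
```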
With this accomplished, we’re happy with the key, so we load the Merge1 node back into the
viewer, and press C to set the Color control of the viewer back to RGB. We can see the graphic in
the background, but right now it’s too small to cover the whole frame, so we need to make another
adjustment.
The final key is good, but now we need to work on the background
Spill can now be handled using a color correction node placed directly after the DeltaKeyer or
branched from the original MediaIn or Loader node and combined with a MatteControl.
Branching the original image with one branch for the DeltaKeyer and a second for color correction
Masking a Graphic
Next, it’s time to work on the top video track: the news graphic that will appear to the left of the
newscaster. The graphic we will use is actually a sheet of different logos, so we need to cut one out
using a mask and position it into place.
A graphic of multiple logos that must be cropped down to isolate just one
Masking the logo using a Rectangle mask connected directly to a Merge node
Now, all we need to do is to use the onscreen controls of the Rectangle mask to crop the area we want
to use, dragging the position of the mask using the center handle, and resizing it by dragging the top/
bottom and left/right handles of the outer border.
As an extra bonus, you can take care of rounded corners when masking a graphic by using the Corner
Radius slider in the Inspector controls for the Rectangle mask to add the same kind of rounding.
Moving and resizing the mask to fit our logo, and rounding
the edges using the Corner Radius Inspector control
For a simple over-the-shoulder graphic, masking the image may be all you need to do, but masking an
image does not change the actual dimensions of the graphic. It only changes the area you see. So,
accurately positioning the graphic based on the center of the composite becomes more difficult, and
any type of match moving would give incorrect results because the graphic has a different resolution
than the background. To fix this resolution mismatch, you can place a Crop node after the MediaIn to
change the actual dimensions of the graphic layer.
Adding a Crop node after the masked MediaIn to center the cropped logo on the background
With the Crop node selected, the viewer toolbar includes a Crop tool.
Dragging a bounding box using the Crop tool (left), and the cropped logo now centered on the frame (right)
NOTE: The Resize, Letterbox, and Scale nodes also change the resolution of an image.
At this point, we’re all set to move the logo into place. Because the logo is the foreground input to
a Merge, you can select the Merge2 node, load it into the viewer, and use the built-in Center X
and Y controls or the onscreen controls to place the logo where you want it and make it a
suitable size.
Placing the logo using the foreground input transform controls of the Merge2 node
Rotoscoping
with Masks
This chapter covers how to use masks to rotoscope, one of the most common
tasks in compositing.
Contents
Introduction to Masks and Polylines 459
Mask Nodes 459
Polyline Types 460
Converting Polylines from One Type to Another 461
How to Use Masks with Other Nodes 462
Attaching Masks to an Image for Rotoscoping 462
Combining Multiple Masks 464
Mask Inputs on Other Nodes 464
Creating and Editing Polylines In-Depth 467
The Polyline Toolbar 467
Selecting a Specific Polyline 467
Polyline Creation Modes 468
Protection Modes 469
Closing Polylines 469
Selecting and Adjusting Polylines 470
Polyline Points Selection 470
Moving Polyline Points 470
Smoothing a Polyline Segment 471
Linearizing a Polyline Segment 471
Transforming Individual or Multiple Points 471
Deleting Selected Points 471
Editing Bézier Handles 472
Mask Nodes
Mask nodes create an image that is used to define transparency in another image. Unlike other image
creation nodes in Fusion, mask nodes create a single channel image rather than a full RGBA image.
The most commonly used mask tool, the Polygon mask tool, is located in the toolbar.
For more information on these mask tools, see Chapter 107, “Mask Nodes” in the DaVinci Resolve
Reference Manual or Chapter 46 in the Fusion Reference Manual.
B-Spline Masks
B-Spline masks are user-created shapes made with polylines that are drawn using B-Splines. They
behave identically to polyline shapes when linear, but when smoothed the control points influence the
shape through tension and weight. This generally produces smoother shapes while requiring fewer
control points. B-Spline mask tools are automatically set to animate as soon as you add them to the
Node Editor.
Bitmap Masks
The Bitmap mask allows images from the Node Editor to act as masks for nodes and effects. Bitmap
masks can be based on values from any of the color, alpha, hue, saturation, luminance, or
auxiliary coverage channels of the image. The mask can also be created from the Object or Material ID
channels contained in certain 3D-rendered image formats.
Mask Paint
Mask Paint allows a mask to be painted using Fusion’s built-in vector paint nodes.
Wand Mask
A Wand mask provides a crosshair that can be positioned in the image. The color of the pixel under
the crosshair is used to create a mask, where every contiguous pixel of a similar color is also included
in the mask. This type of mask is ideal for isolating color adjustments.
Ranges Mask
Similar to the Bitmap mask, the Ranges mask allows images from the Node Editor to act as masks for
nodes and effects. Instead of creating a simple luminance-based mask from a given channel, Ranges
allows spline-based selection of low, mid, and high ranges, similar to the Color Corrector node.
Polyline Types
You can draw polylines using either B-Spline or Bézier spline types. Which you choose depends on
the shape you want to make and your comfort with each spline style.
Bézier Polylines
Bézier polylines are shapes composed of control points and handles. Several points together are used
to form the overall shape of a polyline.
Bézier control point with direction handles aligned to create a linear segment
If you’re familiar with applications such as Adobe Photoshop or Illustrator, you’ll already be familiar with
many of the basic concepts of editing Bézier polylines.
B-Spline Polylines
A B-Spline polyline is similar to a Bézier spline; however, these polylines excel at creating smooth
shapes. Instead of using a control point and direction handles for smoothness, the B-Spline polyline
uses points without direction handles to define a bounding box for the shape. The smoothness of the
polyline is determined by the tension of the point, which can be adjusted as needed.
When converting from one type to another, the original shape is preserved. The new polyline
generally has twice as many control points as the original shape to ensure the minimum change to the
shape. While animation is also preserved, this conversion process will not always yield perfect results.
It’s a good idea to review the animation after you convert spline types.
Masks are single-channel images that can be used to define which regions of an image you want to
affect. Masks can be created using primitive shapes (such as circles and rectangles), complex polyline
shapes that are useful for rotoscoping, or by extracting channels from another image.
Each mask node is capable of creating a single shape. However, Mask nodes are designed to be
added one after the other, so you can combine multiple masks of different kinds to create complex
shapes. For example, two masks can be subtracted from a third mask to cut holes into the resulting
mask channel.
Fusion offers several different ways you can use masks to accomplish different tasks. You can attach
Mask nodes after other nodes in which you want to create transparency, or you can attach Mask nodes
directly to the specialized inputs of other nodes to limit or create different kinds of effects.
To use this setup, you’ll load the MatteControl node into the viewer and select the Polygon node to
expose its controls so you can draw and modify a spline while viewing the image you’re rotoscoping.
The MatteControl node’s Garbage Matte > Invert checkbox lets you choose which part of the image
becomes transparent.
When you’re finished rotoscoping, you simply connect the Polygon node’s output to the Loader node’s
input, and an alpha channel is automatically added to that node.
TIP: If you connect a Mask node to a MediaIn or Loader node’s effect input without any
shapes drawn, that mask outputs full transparency, so the immediate result is that the image
output by the MediaIn or Loader node becomes completely blank. This is why when you want
to rotoscope by connecting a mask to the input of a MediaIn or Loader node, you need to
work within a disconnected Mask node first. Once the shape you’re drawing has been closed,
connect the Mask node to the MediaIn or Loader’s input, and you’re good to go.
Combining multiple Polygon nodes one after the other in the node tree
When a Mask node’s input is attached to another mask, a Paint Mode drop-down menu appears, which
allows you to choose how you want to combine the two masks.
The default option is Merge, but you can also choose Subtract, Minimum, Maximum, Multiply, or any
other operation that will give you the mask boolean interaction you need. Additionally, a pair of Invert
and Solid checkboxes let you further customize how to combine the current mask with the one
before it.
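Each Paint Mode option is a simple per-pixel operation between the incoming mask value and the current shape's value. The sketch below shows plausible formulas for some of the menu's modes, assuming 0-1 mask values; these are illustrative, not Fusion's internal implementation.

```python
def combine_masks(mode, prev, curr):
    """Combine the previous mask value with the current shape's value.

    Both values are in the 0-1 range. The formulas are illustrative
    approximations of the named modes, not Fusion's exact code.
    """
    ops = {
        "Merge": lambda a, b: a + b - a * b,   # union-like combine
        "Subtract": lambda a, b: max(0.0, a - b),
        "Minimum": min,
        "Maximum": max,
        "Multiply": lambda a, b: a * b,
    }
    return ops[mode](prev, curr)

# Cutting a hole: subtract one solid shape from another.
assert combine_masks("Subtract", 1.0, 1.0) == 0.0
assert combine_masks("Maximum", 0.25, 0.75) == 0.75
```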
TIP: If you select a node with an empty effect mask input, adding a Mask node automatically
connects to the open effect mask input.
While masks (or mattes) are connected via an input, they are actually applied “post effect,” which
means the node first applies its effect to the entire image, and then the mask is used to limit the result
by copying over unaffected image data from the input.
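In other words, for each pixel the node's output is a mix between the processed and unprocessed image, weighted by the mask. A minimal sketch of that "post effect" mix for a single channel value:

```python
def apply_effect_mask(original, processed, mask):
    """Limit an effect with a post-effect mask (all values 0-1).

    Where the mask is 1 the effect is kept; where it is 0 the
    original pixel is copied back over the result.
    """
    return processed * mask + original * (1.0 - mask)

assert apply_effect_mask(0.2, 0.8, 1.0) == 0.8   # fully inside the mask
assert apply_effect_mask(0.2, 0.8, 0.0) == 0.2   # fully outside the mask
```

This is why the node still processes the whole image first: the mask only controls which pixels of the result survive into the output.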
Although many nodes support effects masking, there are a few where this type of mask does not
apply—notably Savers, Time nodes, and Resize, Scale, and Crop nodes.
TIP: Effects masks define the domain of definition (DoD) for that effect,
making it more efficient.
Pre-Masking Inputs
Unlike effect masks, a pre-mask input (the name of which is usually specific to each node using them)
is used by the node before the effect is applied. This usually causes the node to render more quickly
and to produce a more realistic result. In the case of the Highlight and the Glow nodes, a pre-mask
restricts the effect to certain areas of the image but allows the result of that effect to extend beyond
the limits of the mask.
TIP: You can quickly add a mask node to the Effect/Solid/Garbage Matte inputs of a keyer
node by right-clicking the header bar of that node in the Inspector and choosing whichever
mask node you want to use from the Effect Mask, SolidMatte, and GarbageMatte submenus.
You choose whether a garbage matte is applied to a keying node as opaque or transparent in the
Inspector for the node to which it’s connected.
Solid Matte
Solid Matte inputs (colored white) are intended to fill unwanted holes in a matte, often using a less
carefully pulled key that produces a dense matte with eroded edges, although you could also use a
polygon or mask paint for this purpose. In the example below, a gentle key designed to preserve
the soft edges of the talent's hair leaves holes in the mask of the woman's face. Using another
DeltaKeyer to create a solid matte for the interior of the key, eroded to be smaller than the
original matte, lets you fill the holes while leaving the soft edges alone. This is also sometimes known
as a hold-out matte.
If you hover the pointer over any of the Polyline toolbar buttons, a tooltip that describes the button’s
function appears. Clicking on a button will affect the currently active polyline or the selected polyline
points, depending on the button.
You can change the size of the toolbar icons, add labels to the buttons, or make other adjustments to
the toolbar’s appearance in order to make polylines easier to use. All the options can be found by
right-clicking on the toolbar and selecting from the options displayed in the contextual menu.
Click Append
This mode is the default mode for mask creation. It’s used to quickly define the rough shape of the
mask, after which you switch to Insert and Modify mode to refine the mask further.
When a shape is closed, the polyline is automatically switched to Insert and Modify mode.
Although the Click Append mode is rarely used with paths, it can be helpful when you know the
overall shape of a motion path, but you don’t yet know the timing.
TIP: Holding Shift while you draw a mask constrains subsequent points to 45-degree angles
relative to the previous point. This can be very helpful when drawing regular geometry.
Insert and Modify mode is also the default mode for creating motion paths. Any time a parameter
that is animated with a motion path is moved, a new control point is automatically added to the end
of the polyline, extending or refining the path.
Protection Modes
In addition to the modes used to create a polyline, two other modes are used to protect the points
from further changes after they have been created.
Modify Only
Modify Only mode allows existing points on the polyline to be modified, but new points may not be
added to the shape.
TIP: Even with Modify Only selected, it is still possible to delete points from a polyline.
Done
The Done mode prohibits the creation of any new points, as well as further modification of any existing
points on the polyline.
Closing Polylines
There are several ways to close a polyline, which will connect the last point to the first.
All these options are toggles that can also be used to open a closed polygon.
To add or remove points from the current selection, do one of the following:
– Hold the Shift key to select a continuous range of points.
– Hold Command and click each control point you want to add or remove.
– Press Command-A to select all the points on the active polyline.
TIP: Once a control point is selected, you can press Page Down or Page Up on the keyboard
to select the next control point in a clockwise or counterclockwise rotation. This can be very
helpful when control points are very close to each other.
To move selected control points using the pointer, do one of the following:
– Drag on the selected points anywhere in the viewer.
– Hold Shift while dragging to restrict movement to a single axis.
– Hold Option and drag anywhere in the viewer to move the selected control point.
To move selected control points using the keyboard, do one of the following:
– Press the Up or Down Arrow keys on the keyboard to nudge a point up or down in the viewer.
– Hold Command-Up or Down Arrow keys to move in smaller increments.
– Hold Shift-Up or Down Arrow keys to move in larger increments.
If you want to adjust the length of a handle without changing the angle, hold Shift while moving a
direction handle.
Point Editor
The Point Editor dialog can be used to reposition control points using precise X and Y coordinates.
Pressing the E key on the keyboard will bring up the Point Editor dialog and allow you to reposition
one or more selected control points.
The dialog box contains the X- and Y-axis values for that point. Entering new values in those boxes
repositions the control point. When multiple control points are selected, all the points move to the
same position. This is useful for aligning control points along the X- or Y-axis.
If more than one point is selected, a pair of radio buttons at the top of the dialog box determines
whether adjustments are made to all selected points or to just one. If the Individual option is selected,
the affected point is displayed in the viewer with a larger box. If the selected point is incorrect, you can
use the Next and Previous buttons that appear at the bottom of the dialog to change the selection.
If you are not sure of the exact value, you can also perform mathematical equations in the dialog box.
For example, typing 1.0-0.5 will move the point to 0.5 along the given axis.
Reduce Points
When freehand drawing a polyline or an editable paint stroke, the spline is often created using more
control points than you need to efficiently make the shape. If you choose Reduce Points from the
polyline’s contextual menu or toolbar, a dialog box will open allowing you to decrease the number of
points used to create the polyline.
The overall shape will be maintained while eliminating redundant control points from the path. When
the value is 100, no points are removed from the spline. As you drag the slider to the left, you reduce
the number of points in the path.
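Fusion doesn't document the exact simplification algorithm, but the classic approach to this kind of point reduction is Ramer-Douglas-Peucker: a point is kept only if it deviates from the line between its kept neighbors by more than a tolerance, so redundant points along nearly straight runs are discarded while the overall shape is preserved. A minimal sketch of that approach:

```python
def reduce_points(points, tolerance):
    """Simplify a polyline with the Ramer-Douglas-Peucker algorithm.

    points is a list of (x, y) tuples; the result always keeps the
    endpoints, plus any point farther than tolerance from the chord.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Perpendicular distance of each interior point from the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / length
             for x, y in points[1:-1]]
    index, dmax = max(enumerate(dists, start=1), key=lambda p: p[1])
    if dmax <= tolerance:
        return [points[0], points[-1]]
    # Keep the farthest point and recurse on both halves.
    left = reduce_points(points[:index + 1], tolerance)
    right = reduce_points(points[index:], tolerance)
    return left[:-1] + right

# Redundant points along a nearly straight line collapse to the endpoints:
line = [(0, 0), (1, 0.001), (2, -0.001), (3, 0)]
assert reduce_points(line, 0.01) == [(0, 0), (3, 0)]
```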
Shape Box
If you have a polyline shape or a group of control points you want to scale, stretch, squish, skew, or
move, you can use the shape box to easily perform these operations.
To enable the shape box, do one of the following:
– Click the Shape Box toolbar button.
– Choose Shape Box from the contextual menu.
– Press Shift-B.
If there are selected points on the polyline when the Shape Box mode is enabled, the shape box is
drawn around those points. Otherwise, you can drag the shape box around the area of control points
you want to include.
Holding Command while dragging a shape box handle will apply adjustments from the center of the
shape box, constraining the transformation to the existing proportions of the shape box. Holding Shift
while dragging a corner handle affects only that handle, allowing skewed and non-uniform
transformations.
You use these options to simplify the screen display when adjusting control points placed closely
together and to avoid accidentally modifying controls and handles that are adjacent to the
intended target.
Roto Assist
You can enable the Roto Assist button in the toolbar when you begin drawing your shape to have
points snap to the closest high-contrast edge as you draw the shape. The points that have snapped to
an edge are indicated by a cyan outline.
The control points on the outer shape are automatically parented to their matching points on the inner
shape. This means that any changes made to the inner shape will also be made to the outer shape.
The relationship is one-way; adjustments to the outer shape can be made without affecting the
inner shape.
Once the outer polyline is selected, you can drag any of the points away from the inner polyline to add
some softness to the mask.
TIP: Press Shift-A to select all the points on a shape, and then hold O and drag to offset the
points from the inner shape. This gives you a starting point to edit the falloff.
The farther the outer shape segment is from the inner shape, the larger the falloff will be in that area.
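Conceptually, the region between the inner and outer polylines is a falloff ramp: matte values fade from solid at the inner edge to transparent at the outer edge. The sketch below illustrates that idea as a simple linear ramp, assuming we already know a pixel's distance to each edge; Fusion's actual falloff computation is not documented here.

```python
def falloff_alpha(dist_to_inner, dist_to_outer):
    """Matte value inside the soft-edge region between two shapes.

    Returns 1.0 at the inner edge, 0.0 at the outer edge, and a
    linear ramp in between, based on the pixel's distance to each.
    Illustrative only; not Fusion's internal falloff math.
    """
    total = dist_to_inner + dist_to_outer
    if total == 0.0:
        return 1.0
    return dist_to_outer / total

assert falloff_alpha(0.0, 10.0) == 1.0   # on the inner edge: solid
assert falloff_alpha(10.0, 0.0) == 0.0   # on the outer edge: transparent
```

This makes the behavior described above concrete: moving an outer segment farther away widens the ramp in that area, producing a larger, softer falloff.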
To adjust the overall timing of the mask animation, you edit the Keyframe horizontal position spline
using the Spline Editor or Timeline Editor. Additional points can be added to the mask at any point to
refine the shape as areas of the image become more detailed.
A new coordinate control is added to the Polyline mask controls for each published point, named Point
0, Point 1, and so on.
When this mode is enabled, the new “following” control points will maintain their position relative to
the motion of any published points in the mask, while attempting to maintain the shape of that segment
of the mask. Unlike published points, the position of the following points can still be animated to allow
for morphing of that segment’s shape over time.
Paint
This chapter describes how to use Fusion’s non-destructive Paint tool to repair
images, remove objects, and add creative elements.
Contents
Paint Overview 481
Types of Paint Nodes 481
Setting Up the Paint Node 481
Setting the Paint Node’s Resolution 482
Paint Node Workflow 482
Select the Correct Paint Stroke Type 483
Setting the Brush Size 485
Choosing an Apply Mode 485
Editing Paint Strokes 489
Editing Paint Strokes in the Modifiers Tab 489
Deleting Strokes 490
Animating and Tracking Paint Strokes 490
Tracking a Paint Stroke 491
Using the Planar Tracker with the Paint Tool 493
Inverting the Steady Effect to Put the Motion Back In 499
Painting a Clean Plate 500
The main difference between these two Paint tools is that the Mask Paint tool only paints on the Alpha
channel, so there are no channel selector buttons. The Paint tool can paint on any or all channels. The
majority of this chapter covers the Paint node, since it shares identical parameters and settings with
the Mask Paint node.
The Paint node available in the Paint category of the Effects Library
The Paint node is composited over the image you want to paint on using a Merge node
Setting this up requires some configuration of the nodes. The Background node must be fully
transparent and, unless you are doing something simple like using the Stroke tool set to Color to paint
over an image, you must drag the image you want to clone or smudge into the Source Tool field in the
Paint node’s inspector. These steps are described in more detail later in this chapter.
The Stroke tool is most often used for cloning, beauty work, and creative paint
Although the Stroke type is the most flexible, that flexibility can come at a performance penalty if
you’re painting hundreds of strokes on a frame. For larger numbers of strokes that do not need to be
animated, it’s better to use Multistroke or Clone Multistroke, as these are more processor efficient.
The Shape Strokes are used to create shapes or clone areas based on shapes
When you paint, each stroke is unpremultiplied, so adjusting the Alpha slider in the Inspector does not
affect what you apply to the RGB channels. However, changing opacity affects all four channels.
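Premultiplication here refers to whether a stroke's RGB values have been multiplied by its alpha. A short sketch of the two operations, which clarifies why an unpremultiplied (straight) stroke keeps its RGB independent of the Alpha slider:

```python
def premultiply(r, g, b, a):
    """Straight color -> premultiplied: RGB is scaled by alpha."""
    return (r * a, g * a, b * a, a)

def unpremultiply(r, g, b, a):
    """Premultiplied -> straight color: RGB is divided back out."""
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)

# A 50% transparent white pixel, premultiplied:
assert premultiply(1.0, 1.0, 1.0, 0.5) == (0.5, 0.5, 0.5, 0.5)
```

In the straight (unpremultiplied) form, changing alpha leaves RGB untouched, whereas opacity is applied to all four channels at composite time.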
You can use the Clone Apply Mode to clone from the same image connected to the Paint node’s
background input or a different source from the node tree.
5 Paint over the area you want to cover up using the source pixels.
When trying to erase objects or artifacts from a clip using the Clone Apply Mode, it can sometimes
be easier if you sample from a different frame on the same clip. This works well when the object you
are trying to clone out moves during the clip, revealing the area behind the object. Sampling from a
different frame can use the revealed background by offsetting the source frame.
8 Paint over the area you want to cover up using the source pixels.
The plane is half painted out using the Overlay with Time Offset
TIP: When using a Clone Apply Mode, you can hold down the O key instead of clicking the
Overlay checkbox in Inspector to see the Overlay. Releasing the O key will return to normal
viewing without the Overlay.
TIP: To select multiple strokes, you can Shift-click or Command-click to select and deselect
multiple specific strokes, or you can drag a selection box around all strokes you want
to select.
Although you can make changes in the Tools tab in the Inspector, the Paint node uses both the Tools
tab and the Modifiers tab. In the Tools tab, you can create new brush strokes and select a stroke in the
viewer to edit. The Modifiers tab presents a list of all the strokes for the selected Paint node, which
makes it easy to modify any previously created paint stroke.
NOTE: Multistroke and Clone Multistroke each only appear as one item in the Modifiers tab
no matter how many strokes you create using those tools. Strokes made with those two tools are
not editable after you create them.
The same controls you used in the Tools tab to create the strokes are located in the Modifier’s tab to
modify them. You can also animate each individual stroke.
Once you stop painting a stroke, it’s added to the Modifiers tab along with an additional Stroke
modifier that represents the next stroke. For instance, if you paint your first stroke, the Modifiers tab
shows your stroke as Stroke1 along with a Stroke2, which represents the next stroke you create.
You always have one more stroke in the Modifiers tab than strokes in the viewer.
Deleting Strokes
There are two ways you can delete paint strokes.
To delete all paint strokes you’ve made on every frame, do one of the following:
– Click the reset button in the upper-right corner of the Inspector.
– Delete the Paint node in the Node Editor.
Drag the MediaIn you want to track into the Tracker Source field in the Inspector
Selecting all the strokes and then clicking the Paint Group
button collects all the strokes into a single group
The group’s onscreen controls replace the controls for each paint stroke, and the Modifiers tab in the
Inspector shows the group’s parameters. The individual strokes are still editable by selecting Show
Subgroup Controls in the Modifiers tab of the Inspector. The group then comes with a Center, Angle,
and Size control for connecting to a tracker.
With the PlanarTracker node selected and loaded in the viewer, a viewer toolbar appears with a variety
of tools for drawing shapes and manipulating tracking data. The Planar Tracker works by tracking flat
surfaces that you define by drawing a shape around the feature you want to track. When you first
create a PlanarTracker node, you can immediately begin drawing a shape, so in this case, we draw a
simple polygon over the man’s forehead since that’s the feature we want to steady in preparation for
painting.
We draw a simple box by clicking once on each corner of the man’s forehead to create control
points, and then clicking the first one we created to close the shape.
Drawing a shape over the man’s forehead to prepare for Planar Tracking
In the Inspector, the PlanarTracker node has tracking transport controls that are similar to those of the
Tracker. However, there are two buttons, Set and Go, underneath the Operation Mode menu, which
defaults to Track, since that’s the first thing we need to do. The Set button lets you choose which
frame to use as the “reference frame” for tracking, so you click the Set button first before clicking the
Track Forward button below.
TIP: You can supervise a Planar Track in progress and stop it if you see it slipping, making
adjustments as necessary before clicking Set at the new frame to set a new reference and then
continuing to track forward to the end of the clip.
The Pattern controls let you set up how you want to handle the analysis. Of these controls, the Motion
Type menu is perhaps the most important. In this particular case, Perspective tracking is the analysis
we want. Still, in other situations, you may find you get better results with the Translation, Translation/
Rotation, and Translation/Rotation/Scale options.
Once you initiate the track, a series of dots appears within the track region shape you created to
indicate trackable pixels found. A green progress bar at the bottom of the Timeline ruler lets you see
how much of the shot is remaining to track.
Clicking the Track from First Frame button to set the Planar Track in progress; green dots
on the image and a green progress bar let you know the track is happening
NOTE: If nothing happens when you track, or it starts to track and then stops, that’s your cue
that there isn’t enough trackable detail within the shape you’ve drawn for the Planar Tracker
to work, and your best bet is to choose a different location of the image to track.
You’ll immediately see the image warped as much as is necessary to pin the tracked region in place
for whatever operation you want to perform. If you scrub through the clip, you should see that the
image dynamically cornerpin-warps as much as is necessary to keep the forehead region within the
shape you drew pinned in place. In this case, this sets up the man’s head as a canvas for paint.
Steadying the image results in warping as the forehead is pinned in place for painting
Choosing the Stroke tool from the Paint node’s tools in the viewer toolbar
Next, choose the Clone mode from the Apply Controls. In this example, we’ll clone part of the man’s
face over the scars to get rid of them. Choosing the Clone mode switches the controls of the Paint
node to those used for cloning.
With the Stroke tool selected in the Paint toolbar, the Clone mode selected in the Inspector controls,
and the Source for cloning added to the Source Tool field, we’re ready to start painting. If we move the
pointer over the viewer, a circle shows us the paint tool, ready to go.
To use the clone brush, first hold down the Option key and click somewhere on the image to identify
the source area of the clone. In this example, we’ll sample from just below the first scar we want to
paint. After Option-clicking to sample the image, you can click to begin painting anywhere in
the frame.
Setting an offset to sample for cloning (left), and dragging to draw a clone stroke (right)
If you don’t like the stroke you’ve created, you can undo with Command-Z and try again. We repeat the
process with the other scar on the man’s forehead, possibly adding a few other small strokes to make
sure there are no noticeable edges, and in a few seconds, we’ve taken care of the issue.
Original image (left), and after painting out two scars on the man’s forehead with the Stroke tool set to Clone
Scrubbing through the steadied clip shows the paint fix is “sticking” to the man’s forehead
We select and copy the PlanarTracker node coming before the Merge node, and paste a copy of it
after. This copy has all the analysis and tracking data of the original PlanarTracker node.
Pasting a second copy of the PlanarTracker node after the Paint node
With the second PlanarTracker node selected, we go into the Inspector and turn on the Invert Steady
Transform checkbox, which inverts the steady warp transform to restore the image back to the
way it was.
This is just one example of how to set up a Planar Tracker and Paint node. In some instances, you
may need to do more work with masks and layering, but the above example gives you a good
starting point.
Disable the default Keyframe in the Time Stretcher and enter the frame you want to freeze. If you have
already performed a Planar Track, then entering the frame you set as the Reference Frame is usually a
good frame to freeze.
To create the clean plate, you connect the Paint node to the output of the Time Stretcher. Clone over
the areas you want to hide, and you now have a single clean frame. Now you need to composite the
clean area over the original.
Add a MatteControl node with a garbage mask to cut out the painted forehead
Before fixing this, we drag the Soft Edge slider in the Inspector to the right to blur the edges just a bit.
With the new Planar Transform node inserted, the Polygon automatically moves to match the motion of
the forehead that was tracked by the original PlanarTracker node, and it animates to follow along with
the movement of the shot. At this point, we’re finished!
The final painted image, along with the final node tree
Using the
Tracker Node
This chapter shows the many capabilities of the Tracker node in Fusion, starting with
how trackers can be connected in your node trees, and finishing with the different
tasks that can be performed.
Contents
Introduction to Tracking 505
Tracker Node Overview 506
Modes of the Tracker Node 506
Basic Tracker Node Operation 506
Connect to a Tracker’s Background Input 506
Analyze the Image to be Tracked 507
Apply the Tracking Data 508
Viewing Tracking Data in the Spline Editor 510
Tracker Inspector Controls 510
Motion Tracking Workflow In Depth 512
Connect the Image to Track 512
Add Trackers 512
Refine the Search Area 514
Perform the Track Analysis 515
Tips for Choosing a Good Pattern 516
Using the Pattern Flipbooks 517
Using Adaptive Pattern Tracking 517
Dealing with Obscured Patterns 518
Dealing with Patterns That Leave the Frame 518
Setting Up Tracker Offsets 519
Introduction to Tracking
Tracking is one of the most useful and essential techniques available to a compositor. It can be
roughly defined as the creation of a motion path from analyzing a specific area in a clip over time.
Fusion includes a variety of different tracking nodes that let you analyze different kinds of motion.
Once you have tracked motion on a clip, you can then use the resulting data for stabilization, motion
smoothing, matching the motion of one object to that of another, and a host of other essential tasks.
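To give a feel for what a tracking analysis does, here is a simplified Python sketch. It is an illustration only, not Fusion's actual implementation: the function names are hypothetical, and the sum-of-squared-differences (SSD) criterion is just one common way to compare a pattern against candidate positions.

```python
# Illustrative sketch of point tracking (not Fusion's implementation):
# find a small pixel pattern in each frame by minimizing the sum of
# squared differences (SSD) over a search area.

def ssd(frame, pattern, x, y):
    """Sum of squared differences between the pattern and the frame at (x, y)."""
    total = 0
    for py, row in enumerate(pattern):
        for px, value in enumerate(row):
            total += (frame[y + py][x + px] - value) ** 2
    return total

def track(frames, pattern, search):
    """Return the best-matching (x, y) position per frame within the search area,
    producing a motion path."""
    path = []
    for frame in frames:
        best = min(search, key=lambda pos: ssd(frame, pattern, pos[0], pos[1]))
        path.append(best)
    return path
```

The list of per-frame positions returned by `track` is the motion path that the stabilization and match-moving techniques below consume.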
Types of tracking nodes in Fusion:
– Tracker: Follows a relatively small, identifiable feature or pattern in a clip to derive a
2D motion path. This is sometimes referred to as point tracking.
– Planar Tracker: Follows a flat, unvarying surface area in a clip to derive a 2 ½D motion path
including perspective. A planar tracker is also more tolerant than a point tracker when some
tracked pixels move offscreen or become obscured.
– Camera Tracker: Tracks multiple points or patterns in a clip and performs a more sophisticated
analysis by comparing those moving patterns. The result is a precise recreation of the live-action
camera in virtual 3D space.
Each tracker type has its own chapter in this manual. This chapter covers the tracking techniques with
the Tracker node.
Stabilizing
You can use one or more tracked patterns to remove all the motion from the sequence or to smooth
out vibration and shakiness. When you use a single tracker pattern to stabilize, you stabilize only the
X and Y position. Using multiple patterns together, you are able to stabilize position, rotation,
and scaling.
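The reason a second pattern unlocks rotation and scale can be sketched numerically. In this simplified model (not Fusion's code; the function name and the two-point approach are our assumptions), the angle and length of the vector between the two tracked centers, compared against a reference frame, yield the per-frame rotation and scale to correct:

```python
import math

def rotation_and_scale(ref_a, ref_b, cur_a, cur_b):
    """Derive rotation (degrees) and scale from two tracked points by
    comparing the current frame's point pair to the reference frame's.
    A simplified sketch of multi-pattern stabilization."""
    rx, ry = ref_b[0] - ref_a[0], ref_b[1] - ref_a[1]
    cx, cy = cur_b[0] - cur_a[0], cur_b[1] - cur_a[1]
    rotation = math.degrees(math.atan2(cy, cx) - math.atan2(ry, rx))
    scale = math.hypot(cx, cy) / math.hypot(rx, ry)
    return rotation, scale
```

With a single pattern, the vector between two points does not exist, which is why one tracker can only stabilize X and Y position.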
Match Moving
The reverse of stabilizing is match moving, which detects position, rotation, and scaling in a clip using
one or more patterns. Instead of removing that motion, it is applied to another image so that the two
images can be composited together.
Corner Positioning
Corner positioning tracks four patterns that are then used to map the four corners of a new foreground
into the background. This technique is generally used to replace signs or mobile phone screens.
The Planar Tracker node is often a better first choice for these types of tracking tasks.
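The mapping at the heart of corner positioning can be sketched as follows. This is a bilinear simplification for illustration; real corner pinning typically uses a projective (perspective) transform, and the function name is our own:

```python
def corner_pin(u, v, c00, c10, c11, c01):
    """Map a point (u, v) in the foreground's unit square onto the
    quadrilateral defined by four tracked corners. A bilinear sketch;
    actual corner pinning generally uses a perspective transform."""
    x = (1 - u) * (1 - v) * c00[0] + u * (1 - v) * c10[0] \
        + u * v * c11[0] + (1 - u) * v * c01[0]
    y = (1 - u) * (1 - v) * c00[1] + u * (1 - v) * c10[1] \
        + u * v * c11[1] + (1 - u) * v * c01[1]
    return x, y
```

Each frame's four tracked corner positions drive the mapping, so the foreground appears glued to the moving sign or screen.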
Perspective Positioning
Perspective positioning again tracks four patterns to identify the four corners of a rectangle.
Each corner is then mapped to a corner of the image, rescaling and warping the image to remove all
apparent perspective. The Planar Tracker node is often a better first choice for removing perspective
from a clip.
However, if you’re just using a Tracker node to analyze data for use with multiple nodes elsewhere in
the comp, you could choose to branch it and leave its output disconnected to indicate that the
Tracker node is a data repository. Please note that this is not necessary; serially connected Tracker nodes can
be linked to multiple other nodes as well.
Setting the Operation parameter in the Operation tab in the Inspector to Match Move, Corner Position,
or Perspective Position always applies the motion to the foreground input (if one is connected). This is
an easy workflow for simple situations. In this scenario, you can use the Tracker node to replace a
Merge node since Tracker nodes include all the same functionality as a Merge.
The Tracker set up as a branch and connected using the Connect To menu
Applying the light of a ray gun by connecting tracking data to the center position of an Ellipse node
This is made easier by renaming the Tracker you created to something descriptive of what’s
being tracked.
Once the tip of the ray gun has been tracked, this tracking data is then connected to the Center
parameter of an Ellipse node that’s limiting a Glow effect by right-clicking the label of the Center
parameter in the Inspector, and choosing Tracker1 > Ray Gun Glow: Offset position from the Connect
to submenu of the contextual menu. All the data from every Tracker node in your node tree and every
tracking pattern appears within this submenu, and since we named the Tracker, it’s easy to find.
We choose Offset position because it places the center of the ellipse directly over the path. However,
it also gives us the flexibility to offset the ellipse if need be, using the Offset controls in the Inspector.
You can connect the data from a Tracker node to any other node’s parameter; however, you’ll most
typically connect track data to center, pivot, or corner X/Y style parameters. When you use tracking
data this way, it’s not necessary to connect the output of the Tracker node itself to anything else in
your node tree; the data is passed from the Tracker to the Center parameter by linking it with the
Connect To submenu.
Right-click in the viewer to bring up a contextual menu. At the very bottom is a reference to the path
the Tracker created, called Tracker1Tracker1Path:Polyline. Choosing it calls up a longer submenu
where you can choose Convert to XY Path.
For more information on Displacement Splines, see Chapter 71, “Animating in Fusion’s Spline Editor” in
the DaVinci Resolve Reference Manual or Chapter 10 in the Fusion Reference Manual.
– The Operations tab: This is where you decide how the tracking data is used.
– The Display Options tab: This is where you can customize how the onscreen controls
look in the viewer.
Add Trackers
Although each Tracker node starts with a single tracker pattern, a single node is capable of analyzing
multiple tracking patterns that have been added to the Tracker List, enabling you to track multiple
features of an image all at once for later use and to enable different kinds of transforms. Additional
trackers can be added by clicking the Add button immediately above the Tracker List control.
Multiple patterns are useful when stabilizing, match moving, or removing perspective from a clip. They
also help to keep the Node Editor from becoming cluttered by collecting into a single node what
would otherwise require several nodes.
When you add a Tracker node to the Node Editor, you start with one pattern box displayed in the
viewer as a small rectangle. When the cursor is placed over the pattern rectangle, the control
expands and two rectangles appear. The outer rectangle has a dashed line, and the inner
rectangle has a solid line. The outer rectangle is the search area, and the inner rectangle is
the pattern.
If you need to select a new pattern, you can move the pattern box by dragging the small (and
easily missed) handle at the top left of the inner pattern box.
The pattern rectangle can also be resized by dragging on the edges of the rectangle. You want to
size the pattern box so that it fits the detail you want to track, and excludes area that doesn’t
matter. Ideally, you want to make sure that every pixel of the pattern you’re tracking is on the same
plane, and that no part of the pattern is actually an occluding edge that’s in front of what you’re
really tracking. When you resize the pattern box, it resizes from the center, so one drag lets you
create any rectangle you need.
Resizing a pattern box to fit the tracking point on the ray gun
TIP: The magnified pattern box does not take viewer LUTs into account. When using Log
content, it may make it easier to position the tracker if you temporarily insert a Brightness
Contrast node between the source content and the yellow input of the tracker. You can use
the Brightness Contrast node to temporarily increase the visibility of the region you
are tracking.
You can override the automatic channel selection by clicking the buttons beneath the bars for each
channel to determine the channel used for tracking.
You can choose any one of the color channels, the luminance channels, or the alpha channel
to track a pattern.
When choosing a channel, the goal is to choose the cleanest, highest contrast channel for use in the
track. Channels that contain large amounts of grain or noise should be avoided. Bright objects against
dark backgrounds often track best using the luminance channel.
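One simple way to think about this choice is to compare how much each channel actually varies inside the pattern box. The following is a simplified heuristic of our own (Fusion's automatic channel selection may weigh other factors, such as noise):

```python
def best_channel(pixels):
    """Pick the channel with the largest value range inside the pattern
    box. `pixels` is a list of (r, g, b) tuples. A simplified contrast
    heuristic, not Fusion's actual selection criterion."""
    channels = list(zip(*pixels))                 # split into R, G, B lists
    ranges = [max(c) - min(c) for c in channels]  # contrast per channel
    names = ["red", "green", "blue"]
    return names[ranges.index(max(ranges))]
```

A channel that spans a wide value range inside the pattern gives the search algorithm more to lock onto than a flat, low-contrast one.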
Each pattern that’s stored is added to a Flipbook. Once the render is complete, you can play this
Pattern Flipbook to help you evaluate the accuracy of the tracked path. If you notice any jumps in the
frames, then you know something probably went wrong.
None
When the Adaptive mode is set to None, the pattern within the rectangle is acquired when the pattern
is selected, and that becomes the only pattern used during the track.
Every Frame
When Every Frame is chosen, the pattern within the rectangle is acquired when the pattern is
selected, and then reacquired at each frame. The pattern found at frame 1 is used in the search on
frame 2, the pattern found on frame 2 is used to search frame 3, and so on. This method helps the
Tracker adapt to changing conditions in the pattern.
Every Frame tracking is slower and can be prone to drifting from sub-pixel shifts in the pattern from
frame to frame. Its use is therefore not recommended unless other methods fail.
Tools for modifying tracker paths in the Node toolbar of the viewer
The Track Center (Append) mode selects a new pattern that will continue to add keyframes to the
existing path. The offset between the old pattern and the new pattern is automatically calculated to
create one continuous path.
When selecting a pattern to use in appending to an existing path, a pattern that is close to the old
pattern and at the same apparent depth in the frame generates the best results. The further away the
new pattern is, the more likely it is that the difference in perspective and axial rotation will reduce
accuracy of the tracked result.
The X and Y Offset controls allow for constant or animated positional offsets to be created relative to
the actual Tracker’s pattern center. The position of the offset in the viewer will be shown by a dashed
line running from the pattern center to the offset position. You can also adjust the offset in the viewer
using the Tracker Offset button. Clicking the button enables you to reposition the path while keeping
the Tracker pattern in place.
The Tracker Offset tool in the Node toolbar of the viewer; a track of the
orange dot is being offset to the center of the ray gun
Once an offset for a pattern is set, you can connect other positional controls to the Tracker’s Offset
menu using the Connect To > Tracker: Offset Position option in the control’s contextual menu. The
path created during the track remains fixed to the center of the pattern.
Edges
The Edges menu determines whether the edges of an image that leave the visible frame are cropped,
duplicated, or wrapped when the stabilization is applied. Wrapping edges is often desirable for some
methods of match moving, although rarely when stabilizing the image for any other purpose. For more
information on the controls, see Chapter 118, “Tracker Nodes” in the DaVinci Resolve Reference
Manual or Chapter 57 in the Fusion Reference Manual.
Position/Rotation/Scaling
Use the Position, Rotation, and Scaling checkboxes to select what aspects of the motion are corrected.
Pivot Type
The Pivot Type setting determines the axis used for the rotation and scaling calculations.
This is usually the average of the combined pattern centers but may be changed to the position of
a single tracker or a manually selected position.
Reference
The Reference controls establish whether the image is stabilized to the first frame in the
sequence, the last frame, or to a manually selected frame. Any deviation from this reference by the
tracked patterns is transformed back to this ideal frame.
As a general rule, when tracking to remove all motion from a clip, set the Merge mode to BG Only,
the Pivot Type to Tracker Average or Selected Tracker, and the Reference control to Start, End, or
Select Time.
Smoothing Motion
When confronted with an image sequence with erratic or jerky camera motion, instead of trying to
remove all movement from the shot, you often need to preserve the original camera movement while
losing the erratic motion.
The Start & End reference option is designed for this technique. Instead of stabilizing to a reference
frame, the tracked path is simplified. The position of each pattern is evaluated from the start of the
path and the end of the path along with intervening points. The result is smooth motion that replaces
the existing unsteady move.
To preserve some of the curvature of the original camera motion, you can increase the value of the
Reference Intermediate Points slider that appears when the Start & End reference mode is selected.
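The Start & End simplification can be sketched as keeping only the start, end, and a chosen number of intermediate points of the tracked path, then interpolating the frames in between. This sketch uses linear interpolation and hypothetical names; Fusion's actual smoothing may differ:

```python
def smooth_path(path, intermediate_points=0):
    """Simplify a jittery path: keep the start, end, and a number of
    evenly spaced intermediate points, then linearly interpolate the
    rest. A sketch of the Start & End reference mode; more intermediate
    points preserve more of the original curvature."""
    n = len(path)
    keep = [round(i * (n - 1) / (intermediate_points + 1))
            for i in range(intermediate_points + 2)]
    smoothed = list(path)
    for a, b in zip(keep, keep[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            smoothed[i] = tuple((1 - t) * pa + t * pb
                                for pa, pb in zip(path[a], path[b]))
    return smoothed
```

With zero intermediate points, the whole path collapses toward a straight line between start and end; raising the count retains more of the camera's deliberate motion while still removing frame-to-frame jitter.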
When tracking to create smooth camera motion, ensure that the Start & End reference mode is
enabled and set the Merge mode to BG Only. It is recommended to leave the Pivot Type control set to
Tracker Average.
Some clips may need to be stabilized so that an element from another source can be added to the
shot. After the element or effect has been composited, the stabilization should be removed to make
the shot look natural again.
When using this Merge menu, you connect a foreground image to the Tracker node’s input
connection in the Node Editor.
Enabling the FG Only mode will apply the motion from the background to the foreground, and the
Tracker will only output the modified FG image. This result can later be merged over the original,
allowing further modifications of the foreground to be applied using other nodes before merging the
result over the background clip.
Steady Position
Steady Position can be used to stabilize footage in X and/or Y to remove camera shake and other
unwanted movement. This connection outputs the inverse of the tracked pattern’s motion. When you
connect a Center parameter to the Steady Position of the Tracker, it will be placed at 0.5/0.5 (the
center of the screen) by default at frame 1. You can change this using the Reference mode in the
Tracker’s Operation tab.
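The inversion behind Steady Position can be sketched numerically. In this simplified model (hypothetical function name, not Fusion's code), the connected control receives the default center minus the tracked motion relative to the reference frame, so the movement cancels out:

```python
def steady_position(tracked, reference_index=0, center=(0.5, 0.5)):
    """Invert a tracked pattern's motion: output positions that cancel
    the movement relative to the reference frame. A sketch of the
    Steady Position published output."""
    ref_x, ref_y = tracked[reference_index]
    return [(center[0] - (x - ref_x), center[1] - (y - ref_y))
            for x, y in tracked]
```

At the reference frame the output is exactly (0.5, 0.5); on every other frame it moves opposite to the tracked motion, holding the pattern still on screen.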
Steady Angle
The Steady Angle mode can be used to stabilize footage to remove unwanted rotation. When you
connect a control, for example the Angle of a Transform, to
the Steady Angle of the Tracker, it will be placed at 0 degrees by default at frame 1. This can be
changed by means of the Reference mode in the Tracker’s Operation tab. From there on, the resulting
motion of the Steady Angle mode will rotate into the opposite direction of the original motion.
So if the angle at frame 10 is 15 degrees, the result of the Steady Angle will be -15 degrees.
To use Steady Angle, you need at least two tracked patterns in your tracker. With just one point, you
can only apply (Un)Steady Position.
Unsteady Position
After using the Steady Position, the Unsteady Position is used to reintroduce the original movement on
an image after an effect or new layer has been added. The resulting motion from Unsteady Position is
basically an offset in the same direction as the original motion.
Steady Size
The Steady Size connection outputs the inverse of the tracked pattern’s scale. When you connect a
parameter, for example the Size of a Transform, to the Steady Size of the Tracker, it will be placed with
a Size of 1 (i.e., the original size) by default at frame 1. This can be changed by means of the Reference
mode in the Tracker’s Operation tab. The resulting size of the Steady Size mode will then counteract
the size changes of the original motion. So if the actual size at frame 10 is 1.15, the result of the Steady
Size will be 1 - (1.15 - 1) = 0.85.
To use Steady Size, you need at least two tracked patterns in your tracker. With just one point, you can
only apply (Un)Steady Position.
Rather than using the Tracker node to perform the Merge operation, an alternative and common way
to use these published outputs is to create a match move by connecting the outputs to multiple nodes.
A tracker is used to track a pattern, and then that data can be connected to multiple other nodes using
the Connect To submenu.
3 Right-click over the Transform’s Center and choose Connect to > Tracker1 > Steady Position.
The tracker publishes its output for other nodes to connect to, as done here to stabilize the clip
4 Connect the foreground to a corner-positioned node, so you can position the corners of the
foreground appropriately over the background.
5 Add another Transform node to the Node Editor after the Merge.
A second Transform after the Merge is used to add back in the original motion with Unsteady Position
6 Connect the new Transform’s Center to the Tracker’s Unsteady Position. The image will be
restored to its original state with the additional effect included.
The differences between a Tracker modifier and a Tracker node are as follows:
– The Tracker modifier can only track a single pattern.
– A source image must be set for the Tracker modifier.
The Tracker modifier can only output a single value and cannot be used for complex stabilization
procedures, but it is a nice quick way to apply a tracker to a point that you need to follow.
3 Click the Modifiers tab in the Inspector and drag the MediaIn1 node that you want to track into the
Tracker Source field.
4 Click the Track Forward button to begin tracking the person’s eye.
5 Insert a Soft Glow node directly after the MediaIn and connect the Ellipse Mask to the white
Glow Mask input.
You can set a different source image for the Tracker modifier by typing in the name of the node or by
dragging and dropping the node from the Node Editor into the Tracker Source field. If you have
a node (let’s call it node 1) connected to the node that contains the modifier (let’s call it node 2), the
source image for the Tracker modifier will automatically be set to node 1.
For more information on the Tracking parameters, see Chapter 118, “Tracker Nodes” in the
DaVinci Resolve Reference Manual or Chapter 57 in the Fusion Reference Manual.
Our goal for this composition is to motion track the background image so that the text moves along
with the scene as the camera flies along.
In this example, we’ll drag the onscreen control so the pattern box overlaps a section of the bridge
right over the leftmost support. As we drag the onscreen control, we see a zoomed-in representation
of the part of the image we’re dragging over to help us position the tracker with greater precision.
For this example, the default sizes of the pattern and search box are fine as is.
Now, we just need to use the tracker analysis buttons at the top to begin the analysis. These buttons
work like transport controls, letting you start and stop analysis as necessary to deal with problem
tracks in various ways. Keep in mind that the first and last buttons, Track from Last Frame and Track
from First Frame, always begin a track at the last or first frame of the composition, regardless of the
playhead’s current position, so make sure you’ve placed your tracker onscreen controls appropriately
at the last or first frame.
The analysis buttons, left to right: Track from Last Frame, Track Backward,
Stop Tracking, Track Forward, Track from First Frame
For now, clicking the Track from Beginning button will analyze the entire range of this clip, from the first
frame to the last. A dialog lets you know when the analysis is completed, and clicking the OK button
dismisses it so you can see the nice clean motion path that results.
Now it’s time to connect the track we’ve just made to the text in order to start it in motion. After loading
the Merge1 node into the viewer to see the text in context with the overall composite we’re creating,
we’ll select the Text1 node to open its parameters in the Inspector, and click the Layout panel icon
(second button from the left) to expose the Layout controls, which are the text-specific transform
controls used to position the text object in the frame. These are the controls that are manipulated
when you use the Text node onscreen controls for repositioning or rotating text.
The Center X and Y parameters, while individually adjustable, also function as a single target for
purposes of connecting to tracking to quickly set up match moving animation. You set this up via the
contextual menu that appears when you right-click any parameter in the Inspector, which contains a
variety of commands for adding keyframing, modifiers, expressions, and other automated methods of
animation including connecting to motion tracking.
Connecting the Center X and Y parameter to the Bridge Track: Offset position motion path we analyzed
Immediately, the text moves so that the center position coincides with the center of the tracked motion
path at that frame. This lets us know the center of the text is being match moved to the motion
track path.
The offset we create is shown as a dotted red line that lets us see the actual offset being created by
the X and Y Offset controls. In fact, this is why we connected to the Bridge Track: Offset position
option earlier.
Now, if we play through this clip, we can see the text moving along with the bridge.
Two frames of the text being match moved to follow the bridge in the shot
Planar Tracking
This chapter provides an overview of how to use the Planar Tracker node, and how to
use it to make match moves simple. For more information about the Planar Tracker
node, see Chapter 118, “Tracker Nodes” in the DaVinci Resolve Reference Manual or
Chapter 57 in the Fusion Reference Manual.
Contents
Introduction to Tracking 537
Using the Planar Tracker 537
Different Ways of Using the Planar Tracker Node 537
Setting Up to Use the Planar Tracker 538
Check for Lens Distortion 538
A Basic Planar Tracker Match Move Workflow 538
Tips for Choosing Good Planes to Track 541
TIP: Part of using the Planar Tracker is also knowing when to give up and fall back to using
Fusion’s Tracker node or to manual keyframing. Some shots are simply not trackable, or the
resulting track suffers from too much jitter or drift. The Planar Tracker is a time-saving node in
the artist’s toolbox and, while it can track most shots, no tracker is a 100% solution.
A Lens Distort node inserted between a MediaIn1 and Planar Tracker to remove lens distortion
For more information about using the Lens Distort node, see Chapter 121, “Warp Nodes” in the
DaVinci Resolve Reference Manual or Chapter 60 in the Fusion Reference Manual.
If you are using DaVinci Resolve, you can use the Lens Corrections control in the Cut page or Edit
page. This adjustment carries over into the Fusion page. Lens correction in DaVinci Resolve
automatically analyzes the frame in the Timeline viewer for edges that are being distorted by a wide
angle lens. Clicking the Analyze button moves the Distortion slider to provide an automatic correction.
From there, the MediaIn node in the Fusion page will have the correction applied, and you can begin
planar tracking.
3 Next, you’ll need to identify the specific pattern within the image that you want to track. In most
cases, this will probably be a rectangle, but any arbitrary closed polygon can be used. The pixels
enclosed by this region will serve as the pattern that will be searched for on other frames. Please
note that it is important that the pattern is drawn on the reference frame. In this example, we want
to track the wall behind the man, so we draw a polygon around part of the wall that the man won’t
pass over as he moves during the shot.
TIP: Do not confuse the pattern you’re identifying with the region you’re planning to
corner pin (which always has four corners and is separately specified in Corner Pin mode).
4 (Optional) If moving objects partially cover up or occlude the planar surface, you may wish to
connect a mask that surrounds and identifies these occlusions to the white “occlusion mask” input
of the Planar Tracker. This lets the Planar Tracker ignore details that will cause problems.
When using the Hybrid Tracker, providing a mask to deal with occluding objects is nearly
mandatory, while with the Point Tracker it is recommended to try tracking without a mask.
5 If necessary, move the playhead back to the reference frame, which in this case was the first
frame. Then, click the Track To End button and wait for the track to complete.
As the clip tracks, you can see track markers and trails (if they’re enabled in the Options tab of the
Inspector) that let you see how much detail is contributing to the track, and the direction of motion
that’s being analyzed.
6 Once the track is complete, play through the clip to visually inspect the track so you can evaluate
how accurate it is. Does it stick to the surface? Switching to Steady mode can help here, as
scrubbing through the clip in Steady mode will help you immediately see unwanted motion in
the track.
7 Since we’re doing a match move, click the Create Planar Transform button to export a Planar
Transform node that will automatically transform either images or masks to follow the analyzed
motion of the plane you tracked.
In this case, the Planar Transform node will be inserted after a pair of Background and Paint nodes
that are being used to put some irritatingly trendy tech jargon graffiti on the wall. The Planar
Transform will automatically transform the Paint node’s output connected to its background input
to match the movement of the wall.
The final result; the paint layer is match moved to the background successfully
TIP: If you want to composite semi-transparent paint strokes on the wall, or use Apply modes
with paint strokes, you can attach a Paint node to a Background node set to 100% transparency.
The resulting image will be whatever paint strokes you make against transparency and is easy
to composite.
If the pattern contains too few pixels or not enough trackable features, this can cause problems with
the resulting track, such as jitter, wobble, and slippage. Sometimes dropping down to a simpler motion
type can help in this situation.
Lastly, you can develop your own plug-ins without using a computer development
environment by scripting Fusion’s native Fuse plug-ins.
Contents
What Are Open FX? 543
What Are Resolve FX? 543
Applying Open FX and Resolve FX Plug-Ins 543
Introduction to Fuse Plug-Ins 544
Chapter 24 Using Open FX, Resolve FX, and Fuse Plug-Ins 542
What Are Open FX?
Fusion is able to use compatible Open FX (OFX) plug-ins that are installed on your computer. Open FX
is an open standard for visual effects plug-ins. It allows plug-ins written to the standard to work on
both DaVinci Resolve and Fusion Studio as well as other applications that support the standard.
OFX plug-ins can be purchased and downloaded from third-party suppliers such as BorisFX,
Red Giant, and RE:Vision Effects. All OFX appear in the Open FX category of the Effects Library,
alongside all other effects that are available in Fusion.
Introduction to Fuse Plug-Ins
Fuses are plug-ins developed for Fusion using the built-in Lua scripting language. Being script-based,
Fuses are compiled on-the-fly in Fusion without the need for a programming environment.
While a Fuse may be slower than an identical Open FX plug-in created using Fusion’s C++ SDK, a Fuse
will still take advantage of Fusion’s existing nodes and GPU acceleration.
To install a Fuse:
1 Save the file using the .fuse extension at the end of the file name.
2 For DaVinci Resolve, save it in one of the following locations:
– On macOS: Macintosh HD/Users/username/Library/Application Support/Blackmagic Design/
DaVinci Resolve/Fusion/Fuses
– On Windows: C:\Users\username\AppData\Roaming\Blackmagic Design\DaVinci Resolve\
Support\Fusion\Fuses
– On Linux: /home/username/.local/share/DaVinciResolve/Fusion/Fuses
You can open and edit Fuses by selecting the Fuse node in the Node Editor and clicking the Edit
button at the top of the Inspector. The Fuse opens in the text editor specified in the Global
Preferences/Scripting panel.
TIP: Changes made to a Fuse in a text editor do not immediately propagate to other instances
of that Fuse in the composition. Reopening a composition updates all Fuses in the
composition based on the current saved version. Alternatively, you can click the Reload
button in the Inspector to update the selected node without closing and reopening the
composition.
PART 3
3D Compositing
Chapter 25
3D Compositing
Basics
This chapter covers many of the nodes used for creating 3D composites, the tasks
they perform, and how they can be combined to produce effective 3D scenes.
Contents
An Overview of 3D Compositing 547
3D Compositing Fundamentals 548
Creating a Minimal 3D Scene 549
The Elements of a 3D Scene 550
Geometry Nodes 550
The Merge3D Node 552
The Renderer3D Node 555
Software vs. GPU Rendering 555
Software Renderer 555
OpenGL Renderer 556
OpenGL UV Renderer 556
Loading 3D Nodes into the Viewer 557
Navigating the 3D View 559
Transforming Cameras and Lights Using the Viewers 559
Transparency Sorting 560
Material Viewer 561
Transformations 562
Onscreen Transform Controls 562
Pivot 563
Target 563
Parenting 565
An Overview of 3D Compositing
Traditional image-based compositing is a two-dimensional process. Image layers have only the
amount of depth needed to define one as foreground and another as background. This is at odds with
the realities of production, since all images are either captured using a live-action camera with
freedom in all three dimensions, in a shot that has real depth, or have been created in a true 3D
modeling and rendering application.
Within the Fusion Node Editor, you have a GPU-accelerated 3D compositing environment that includes
support for imported geometry, point clouds, and particle systems for taking care of such things as:
– Converting 2D images into image planes in 3D space
– Creating rough primitive geometry
– Importing mesh geometry from FBX or Alembic scenes
– Creating realistic surfaces using illumination models and shader compositing
– Rendering with realistic depth of field, motion blur, and supersampling
3D Compositing Fundamentals
The nodes in the 3D category (which include the Light, Material, and Texture subcategories) work
together to create 3D scenes. Examples are nodes that generate geometry, import geometry, modify
geometry, create lights and cameras, and combine all these elements into a scene. Nearly all these
nodes are collected within the 3D category of nodes found in the Effects Library.
Conveniently, at no point are you required to specify whether your overall composition is 2D or 3D,
because you can seamlessly combine any number of 2D and 3D “scenes” together to create a single
output. However, the nodes that create these scenes must be combined in specific ways for this to
work properly.
More realistically, each 3D scene that you want to create will probably use at least three to five nodes
to produce a well-lit and well-framed result. These include:
– One of the available geometry nodes (such as Text3D or Image Plane 3D)
– A light node (such as DirectionalLight or SpotLight)
– A camera node
– A Merge3D node
– A Renderer3D node
All of these should be connected together as seen below to produce the resulting, more complex
3D scene.
The same text, this time lit and framed using Text3D,
Camera, and SpotLight nodes to a Merge3D node
The 3D nodes available from the toolbar include the ImagePlane3D, Shape3D,
Text3D, Merge3D, Camera3D, SpotLight3D, and Renderer3D nodes
Geometry Nodes
You can add 3D geometry to a composition using the ImagePlane3D node, the Shape3D node, the
Cube3D node, the Text3D node, or optionally by importing a model via the FBX Mesh 3D node.
Furthermore, you can add particle geometry to scenes from pEmitter nodes. You can connect these
nodes to a Merge3D node singly or in combination to create sophisticated results that combine
multiple elements.
A more complex 3D scene combining several geometry nodes including the Text3D, Shape3D, and ImagePlane3D nodes
Texturing Geometry
By themselves, geometry nodes consist only of a simple flat color. However, you can alter the look of 3D
geometry by texturing it using clips (either still images or movies), or by using material nodes such as the Blinn and Phong nodes.
If you’re shading or texturing Text3D nodes, you need to add a texture in a specific way since each
node is actually a scene with individual 3D objects (the characters) working together. In the following
example, the RustyMetal shader preset is applied to a Text3D node using the ReplaceMaterial3D
node. The interesting thing about the ReplaceMaterial3D node is that it textures every geometric
object within a scene at once, meaning that if you put a ReplaceMaterial3D node after a Text3D node,
you texture every character within that node. However, if you place a ReplaceMaterial3D node after a
Merge3D node, then you’ll end up changing the texture of every single geometric object being
combined within that Merge3D node, which is quite powerful.
You can build elaborate scenes using multiple Merge3D nodes connected together
The Merge3D node's Pass Through Lights checkbox is disabled by default, which lets you light elements in one Merge3D scene without
worrying about how the lighting will affect geometry attached to other Merge3D nodes further
downstream. For example, you may want to apply a spotlight to brighten the wall of a building in one
Merge3D node without having that spotlight spill over onto the grass or pavement at the foot of the
wall modeled in another Merge3D node. In the example shown below, the left image shows how the
cone and torus connected to a downstream node remain unlit by the light in an upstream node with
Pass Through Lights disabled, while the right image shows how everything becomes lit when turning
Pass Through Lights on.
The result of lights on the text in one Merge3D node not affecting the cone and torus
added in a downstream Merge3D node (left) Turning on Pass Through Lights in the upstream
Merge3D node results in those lights also illuminating the downstream shapes (right)
If you transform a Merge3D node that's connected to other Merge3D nodes, what happens depends
on whether you're transforming an upstream node or a downstream node:
– If you transform a downstream Merge3D node, you also transform all upstream nodes connected
to it as if they were all a single scene.
– If you transform an upstream Merge3D node, this has no effect on downstream Merge3D nodes,
allowing you to make transforms specific to that particular node’s scene.
An example of a 3D scene using multiple Merge3D nodes working together; the upstream Merge3D nodes arrange
the 3D objects placed within the scene, while the last Merge3D node (orange) lights and frames the scene.
The Renderer3D uses one of the cameras in the scene (typically connected to a Merge3D node) to
produce an image. If no camera is found, a default perspective view is used. Since this default view
rarely provides a useful angle, most people build 3D scenes that include at least one camera.
The image produced by the Renderer3D can be any resolution with options for fields processing, color
depth, and pixel aspect.
Software Renderer
The software renderer is generally used to produce the final output. While the software renderer is not
the fastest method of rendering, it has two advantages. It can easily handle textures much larger than
one half of your GPU's maximum texture size, so if you're working with texture images larger than 8K
you should choose the software renderer to obtain maximum quality. It also supports rendering
effects, such as soft shadows, that the OpenGL renderer cannot produce.
When the Renderer3D node “Renderer Type” drop-down is set to OpenGL Renderer, you cannot render
soft shadows or excessively large textures (left). When the Renderer3D node “Renderer Type” drop‑down
is set to Software Renderer, you can render higher-quality textures and soft shadows (right).
OpenGL Renderer
The OpenGL renderer takes advantage of the GPU in your computer to render the image; the textures
and geometry are uploaded to the graphics hardware, and OpenGL shaders are used to produce the
result. This can produce high-quality images suitable for final rendering, potentially orders of
magnitude faster than the software renderer. However, it does impose limitations on some rendering
effects: soft shadows cannot be rendered, and the OpenGL renderer ignores alpha channels during
shadow rendering, resulting in a shadow always being cast from the entire object.
On the other hand, because of its speed, the OpenGL renderer exposes additional controls for
Accumulation Effects that let you enable depth of field rendering for creating shallow-focus effects.
Unfortunately, you can’t have both soft shadow rendering and depth of field rendering, so you’ll need
to choose which is more important for any given 3D scene you render.
OpenGL UV Renderer
When you choose the OpenGL UV Renderer option, a Renderer3D node outputs an “unwrapped”
version of the textures applied to upstream objects, at the resolution specified within the Image tab of
that Renderer3D node.
This special output image is used for baking out texture projections or materials to a texture map for
one of two reasons:
– Baking out projections can speed up a render.
– Baking out projections lets you modify the texture using other 2D nodes within your composition,
or even using third-party paint applications (if you output this image in isolation as a graphics file)
prior to applying it back onto the geometry.
Suppose, for instance, that you have a scene on a street corner, and there’s a shop sign with a phone
number on it, but you want to change the numbers. If you track the scene and have standing geometry
for the sign, you can project the footage onto it, do a UV render, switch the numbers around with a
Paint node, and then apply that back to the mesh with a Texture2D.
The UV renderer can also be used for retouching textures. You can combine multiple DSLR still shots
of a location, project all those onto the mesh, UV render it out, and then retouch the seams and apply
it back to the mesh.
You could project tracked footage of a road with cars on it, UV render out the projection from the
geometry, do a temporal median filter on the frames, and then map a “clean” roadway back down.
The 3D Viewer
To change the viewpoint, right-click in the viewer and choose the desired viewpoint from the ones
listed in the Camera submenu. A shortcut to the Camera submenu is to right-click on the axis label
displayed in the bottom corner of the viewer.
In addition to the usual Perspective, Front, Top, Left, and Right viewpoints, if there are cameras and
lights present in the scene as potential viewpoints, those are shown as well. It’s even possible to
display the scene from the viewpoint of a Merge3D or Transform3D by selecting it from the contextual
menu’s Camera > Other submenu. Being able to move around the scene and see it from different
viewpoints can help with the positioning, alignment, and lighting, as well as other aspects of
your composite.
Furthermore, selecting a 3D node in the Node Editor also selects the associated object in the
3D Viewer.
When a viewer is set to display the view of a camera or light, panning, zooming, or rotating the viewer
(seen at right) actually transforms the camera or light you’re viewing through (seen at left)
It is even possible to view the scene from the perspective of a Merge3D or Transform3D node by
selecting the object from the Camera > Other submenu. The same transform techniques will then move
the position of the object. This can be helpful when you are trying to orient an object in a certain
direction.
Transparency Sorting
While generally the order of geometry in a 3D scene is determined by the Z-position of each object,
sorting every face of every object in a large scene can take an enormous amount of time. To provide
the best possible performance, a Fast Sorting mode is used in the OpenGL renderer and viewers. This
is set by right-clicking in the viewer and choosing Transparency > Z-buffer. While this approach is much
faster than a full sort, when objects in the scene are partially transparent it can also produce
incorrect results.
The Sorted (Accurate) mode can be used to perform a more accurate sort at the expense of
performance. This mode is selected from the Transparency menu of the viewer’s contextual menu.
The Renderer3D also presents a Transparency menu when the Renderer Type is set to OpenGL.
Sorted mode does not support shadows in OpenGL. The software renderer always uses the Sorted
(Accurate) method.
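The trade-off between the two modes can be sketched in a few lines of Python. This is a schematic illustration of the general technique, not Fusion's actual code: the Z-buffer approach keeps only the nearest sample per pixel, while the accurate sort composites every semi-transparent sample back to front.

```python
# Schematic contrast of the two transparency strategies for one pixel.

def over(front, back):
    """Porter-Duff "over" for premultiplied (r, g, b, a) samples."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    inv = 1.0 - fa
    return (fr + br * inv, fg + bg * inv, fb + bb * inv, fa + ba * inv)

def composite_sorted(samples):
    """samples: list of (depth, premultiplied_rgba) covering one pixel."""
    # Sorted (Accurate): order back to front, then blend every sample.
    ordered = sorted(samples, key=lambda s: s[0], reverse=True)
    result = (0.0, 0.0, 0.0, 0.0)
    for _, rgba in ordered:
        result = over(rgba, result)
    return result

def composite_zbuffer(samples):
    # Fast sorting: the nearest sample wins outright, which is wrong
    # whenever that sample is partially transparent.
    return min(samples, key=lambda s: s[0])[1]

# A 50%-opaque red sample in front of an opaque blue one:
pixel = [(1.0, (0.5, 0.0, 0.0, 0.5)), (2.0, (0.0, 0.0, 1.0, 1.0))]
print(composite_sorted(pixel))   # red correctly blended over blue
print(composite_zbuffer(pixel))  # only the red sample survives
```

The accurate mode pays for the full sort on every pixel, which is why the Z-buffer mode remains the interactive default.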
Material Viewer
When you view a node that comes from the 3D > Material category of nodes in the Effects Library, the
viewer automatically switches to display a Material Viewer. This Material Viewer allows you to preview
the material applied to a lit 3D sphere rendered with OpenGL by default.
The type of geometry, the renderer, and the state of the lighting can all be set by right-clicking the
viewer and choosing options from the contextual menu. Each viewer supports A and B buffers to assist
with comparing multiple materials.
The Translation parameters are used to position the object in local space, the Rotation parameters
affect the object’s rotation around its own center, and the Scale slider(s) affect its size (depending on
whether or not they’re locked together). The same adjustments can be made in the viewer using
onscreen controls.
If the Scale’s Lock XYZ checkbox is enabled in the Inspector, only the overall scale of the object is
adjusted by dragging the red or center onscreen control, while the green and blue portions of the
onscreen control have no effect. If you unlock the parameters, you are able to scale an object along
individual axes separately to squish or stretch the object.
Selecting Objects
With the onscreen controls visible in the viewer, you can select any object by clicking on its center
control. Alternatively, you can also select any 3D object by clicking its node in the Node Editor.
Pivot
In 3D scenes, objects rotate and scale around an axis called a pivot. By default, this pivot goes
through the object’s center. If you want to move the pivot so it is offset from the center of the object,
you can use the X, Y, and Z Pivot parameters in the Inspector.
Target
Targets are used to help orient a 3D object to a specific point in the scene. No matter where the object
moves, it will rotate in the local coordinate system so that it always faces its target, which you can
position and animate.
Use the X/Y/Z Target Position controls in the Inspector or the Target onscreen control in the
viewer to position the target and, in turn, orient the object it's attached to.
For example, if a spotlight is required in the scene to point at an image plane, enable the spotlight’s
target in the Transform tab and connect the target’s XYZ position to the image plane’s XYZ position.
Now, no matter where the spotlight is moved, it will rotate to face the image plane.
A light made to face the wall using its enabled target control
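The target behavior boils down to recomputing an aim direction every time either end moves. A minimal sketch, with hypothetical positions rather than Fusion's actual solver:

```python
import math

def look_at_direction(position, target):
    """Unit vector from an object's position toward its target."""
    dx, dy, dz = (t - p for t, p in zip(target, position))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

# Move the light around; the aim vector always follows the stationary target.
target = (0.0, 0.0, 0.0)
print(look_at_direction((0.0, 0.0, 5.0), target))  # faces -Z: (0.0, 0.0, -1.0)
print(look_at_direction((5.0, 0.0, 0.0), target))  # faces -X: (-1.0, 0.0, 0.0)
```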
Here are the two simple rules of transforming parented Merge3D nodes:
– Transforms and animation applied to a Merge3D are also applied to all 3D objects connected
to that Merge3D node, including cameras, lights, geometry, and other merge nodes connected
upstream.
– Transforms and animation applied to upstream merge nodes don’t affect downstream
merge nodes.
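These two rules amount to transform accumulation flowing downstream only. For clarity, this hypothetical sketch uses plain translations instead of full transform matrices:

```python
# A downstream Merge3D's transform is applied on top of everything merged
# upstream of it, while an upstream Merge3D's transform never reaches
# objects connected only to downstream merges.

def world_position(local_position, merge_chain):
    """merge_chain lists the translation of each Merge3D from the object's
    own merge down to the final one; all of them apply to the object."""
    x, y, z = local_position
    for tx, ty, tz in merge_chain:
        x, y, z = x + tx, y + ty, z + tz
    return (x, y, z)

# A shape merged in an upstream Merge3D (offset +1 in X) feeding a
# downstream Merge3D (offset +2 in Y) receives both offsets:
print(world_position((0, 0, 0), [(1, 0, 0), (0, 2, 0)]))  # (1, 2, 0)

# A shape connected only to the downstream merge is unaffected by the
# upstream merge's transform:
print(world_position((0, 0, 0), [(0, 2, 0)]))  # (0, 2, 0)
```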
Cameras
When setting up and animating a 3D scene, the metaphor of a camera is one of the most
comprehensible ways of framing how you want that scene to be rendered out, as well as animating
your way through the scene. Additionally, compositing artists are frequently tasked with matching
cameras from live-action clips, or matching cameras from 3D applications.
To accommodate all these tasks, Fusion provides a flexible Camera3D node with common camera
controls such as Angle of View, Focal Length, Aperture, and Clipping planes, to either set up your own
camera or to import camera data from other applications. The Camera3D node is a virtual camera
through which the 3D environment can be viewed.
Cameras are typically connected and viewed via a Merge3D node; however, you can also connect
cameras upstream of other 3D objects if you want that camera to transform along with that object
when it moves.
The viewer’s frame may be different from the camera frame, so it may not match the true boundaries of
the image that will be rendered by the Renderer3D node. If there is no Renderer3D node added to
your scene yet, you can use Guides that represent the camera’s framing. For more information about
Guides, see Chapter 68, “Using Viewers” in the DaVinci Resolve Reference Manual or Chapter 7 in the
Fusion Reference Manual.
Turning on “Enable Accumulation Effects” exposes a Depth of Field checkbox along with Quality and
Amount of DoF Blur sliders that let you adjust the depth of field effect. These controls affect only the
perceived quality of the depth of field that is rendered. The actual depth of field that’s generated
depends solely on the setup of the camera and its position relative to the other 3D objects in
your scene.
When you select your scene’s Camera3D node to view its controls in the Inspector, a new Focal Plane
checkbox appears in the Control Visibility group. Turning this on lets you see the green focal plane
indicator in the 3D Viewer that lets you visualize the effect of the Focal Plane slider, which is located in
the top group of parameters in the Camera3D node’s Controls tab.
For more information about these specific camera controls, see Chapter 90, “3D Nodes” in the
DaVinci Resolve Reference Manual, or Chapter 29 in the Fusion Reference Manual.
Importing Cameras
If you want to match cameras between applications, you can import camera paths and positions from a
variety of popular 3D applications. Fusion is able to import animation splines from Maya and XSI
directly with their own native spline formats. Animation applied to cameras from 3ds Max and
LightWave are sampled and keyframed on each frame.
TIP: When importing parented or rigged cameras, baking the camera animation in the 3D
application before importing it into Fusion often produces more reliable results.
NOTE: When lighting is disabled in either the viewer or final renders, the image will appear to
be lit by a 100% ambient light.
Ambient Light
You use ambient light to set a base light level for the scene, since it produces a general uniform
illumination of the scene. Ambient light exists everywhere without appearing to come from any
particular source; it cannot cast shadows and will tend to fill in shadowed areas of a scene.
Directional Light
A directional light is composed of parallel rays that light up the entire scene from one direction,
creating a wall of light. The sun is an excellent example of a directional light source.
Point Light
A point light is a well defined light that has a small clear source, like a light bulb, and shines from that
point in all directions.
Spotlight
A spotlight is an advanced point light that produces a well defined cone of light with falloff. This is the
only light that produces shadows.
Lighting Hierarchies
Lights normally do not pass through a Merge, since the Pass Through Lights checkbox is off by default.
This provides a mechanism for controlling which objects are lit by which lights. For example, in the
following two node trees, two shapes and an ambient light are combined with a Merge3D node, which
is then connected to another Merge3D node that’s also connected to a plane and a spotlight. At the
left, the first Merge3D node of this tree has Pass Through Lights disabled, so you can only see the two
shapes lit. At the right, Pass Through Lights has been enabled, so both the foreground shapes and the
background image plane receive lighting.
Pass Through Lights is disabled, so only the front two shapes are illuminated (left) Pass Through
Lights is enabled, so all shapes connected to both Merge3D nodes are illuminated (right)
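The gating behavior described above can be sketched as follows. The class and light names here are illustrative, not Fusion's API; the point is that a merge only forwards lights downstream when its pass-through flag is on:

```python
class Merge3D:
    def __init__(self, lights=None, upstream=None, pass_through_lights=False):
        self.lights = lights or []
        self.upstream = upstream or []   # upstream Merge3D nodes
        self.pass_through_lights = pass_through_lights

    def lights_passed_downstream(self):
        """Lights this merge forwards to nodes after it."""
        if not self.pass_through_lights:
            return []
        gathered = list(self.lights)
        for merge in self.upstream:
            gathered += merge.lights_passed_downstream()
        return gathered

    def active_lights(self):
        """Lights illuminating geometry connected to this merge."""
        gathered = list(self.lights)
        for merge in self.upstream:
            gathered += merge.lights_passed_downstream()
        return gathered

shapes = Merge3D(lights=["AmbientLight"])                 # two shapes + ambient
scene = Merge3D(lights=["SpotLight"], upstream=[shapes])  # plane + spotlight

print(scene.active_lights())   # ['SpotLight'] (ambient stays upstream)
shapes.pass_through_lights = True
print(scene.active_lights())   # ['SpotLight', 'AmbientLight']
```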
Lighting Options
Most nodes that generate geometry have additional options for lighting. These options are used to
determine how each individual object reacts to lights and shadows in the scene.
– Affected By Lights: If the Affected By Lights checkbox is enabled, lights in the scene will affect
the geometry.
– Shadow Caster: When enabled, the object will cast shadows on other objects in the scene.
– Shadow Receiver: If this checkbox is enabled, the object will receive shadows.
For more information on shadow controls, see the “Spotlight” section of Chapter 91, “3D Light Nodes”
in the DaVinci Resolve Reference Manual or Chapter 30 in the Fusion Reference Manual.
Shadow Maps
A shadow map is an internal depth map that specifies each pixel’s depth in the scene. This information
is used to assemble the shadow layer created from a spotlight. All the controls for the shadow map are
found in the Spotlight Inspector.
The quality of the shadow produced depends greatly on the size of the shadow map. Larger maps
generate better-looking shadows but will take longer to render. The wider the cone of the spotlight, or
the more falloff in the cone, the larger the shadow map will need to be to produce useful quality
results. Setting the value of the Shadow Map Size control sets the size of the depth map in pixels.
Generally, through trial and error, you’ll find a point of diminishing returns where increasing the size of
the shadow map no longer improves the quality of the shadow. It is not recommended to set the size
of the shadow maps any larger than they need to be.
The Shadow Map Proxy control is used to set a percentage by which the shadow map is scaled for fast
interactive previews, such as Autoproxy and LoQ renders. A value of .4, for example, represents a
40% proxy.
Shadow Softness
By default, the spotlight generates shadows without soft edges, but there are options for constant and
variable soft shadows. Hard-edged shadows will render significantly faster than either of the Soft
Shadow options. Shadows without softness will generally appear aliased unless the shadow map size
is large enough. In many cases, softness is used to hide the aliasing rather than increasing the shadow
map size, which preserves memory and avoids exceeding the graphics hardware's capabilities.
Setting the spotlight’s shadow softness to None will render crisp and well-defined shadows.
The Constant option generates shadows where the softness is uniform across the shadow.
Selecting the Variable option reveals the Spread, Min Softness, and Filter Size sliders. A side
effect of the method used to produce variable softness shadows is that the size of the blur applied to
the shadow map can become effectively infinite as the shadow’s distance from the geometry
increases. These controls are used to limit the shadow map by clipping the softness calculation to a
reasonable limit.
The filter size determines where this limit is applied. Increasing the filter size increases the maximum
possible softness of the shadow. Making this smaller can reduce render times but may also limit the
softness of the shadow or potentially even clip it. The value is a percentage of the shadow map size.
For more information, see “Spotlight” in Chapter 91, “3D Light Nodes” in the DaVinci Resolve
Reference Manual or Chapter 30 in the Fusion Reference Manual.
The goal is to adjust the Multiplicative Bias slider until the majority of the Z-fighting is resolved, and
then adjust the Additive Bias slider to eliminate the rest. The softer the shadow, the higher the bias will
probably have to be. You may even need to animate the bias to get a proper result for some
particularly troublesome frames.
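The reason two bias controls exist becomes clearer with a schematic version of a shadow-map depth test. This is the standard depth-bias technique, not necessarily Fusion's exact math: without bias, the depth stored in the map and the depth of the rendered surface differ only by numeric noise, so the surface shadows itself.

```python
def in_shadow(fragment_depth, map_depth, mult_bias=1.0, add_bias=0.0):
    """Compare the fragment against a biased copy of the shadow-map depth."""
    biased = map_depth * mult_bias + add_bias
    return fragment_depth > biased

# The same surface, with depths that differ only by precision error:
print(in_shadow(10.0, 9.9999))                   # True (self-shadowing acne)
print(in_shadow(10.0, 9.9999, mult_bias=1.001))  # False (bias pushes the map depth past the surface)
```

Because the multiplicative bias scales with depth while the additive bias is constant, resolving most of the acne with the former and mopping up the rest with the latter, as described above, usually gives the cleanest result.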
Material Components
All the standard illumination models share certain characteristics that must be understood.
Diffuse
The Diffuse parameters of a material control the appearance of an object where light is absorbed or
scattered. The diffuse color and texture are the base appearance of an object, before taking
reflections into account. The opacity of an object is generally set in the diffuse component of the material.
Alpha
The Alpha parameter defines how much the object is transparent to diffuse light. It does not affect
specular levels or color. However, if the value of alpha, either from the slider or a Material input from
the diffuse color, is very close to or at zero, those pixels, including the specular highlights, will be
skipped and disappear.
Opacity
The Opacity parameter fades out the entire material, including the specular highlights. This value
cannot be mapped; it is applied to the entire material.
Specular
The Specular parameters of a material control the highlight of an object where the light is reflected to
the current viewpoint. This causes a highlight that is added to the diffuse component. The more
specular a material is, the glossier it appears. Surfaces like plastics and glass tend to have white
specular highlights.
Three spheres, left to right: diffuse only, specular only, and combined
The specular exponent controls the falloff of the specular highlight. The larger the value, the sharper
the falloff and the smaller the specular component will be.
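A quick numeric illustration of how the exponent sharpens the falloff, using a generic Phong-style specular term rather than Fusion's exact shading code:

```python
def specular_term(cos_angle, exponent):
    """cos_angle: cosine of the angle away from the mirror direction."""
    return max(0.0, cos_angle) ** exponent

# 10 degrees off the mirror reflection direction (cos ~= 0.985): raising
# the exponent makes the same angle contribute far less highlight.
for exponent in (5, 50, 500):
    print(exponent, round(specular_term(0.985, exponent), 3))
```

At an exponent of 5 the highlight is still bright 10 degrees off-axis; at 500 it has all but vanished, which is the smaller, sharper highlight described above.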
Transmittance
When using the software renderer, the Transmittance parameters control how light passes through a
semi-transparent material. For example, a solid blue pitcher will cast a black shadow, but one made of
translucent blue plastic would cast a much lower density blue shadow. The transmittance parameters
are essential to creating the appearance of stained glass.
TIP: You can adjust the opacity and transmittance of a material separately. It is possible to
have a surface that is fully opaque yet transmits 100% of the light arriving upon it, so in a
sense it is actually a luminous/emissive surface.
Transmissive surfaces can be further limited using the Alpha Detail and Color Detail controls.
Alpha Detail
When the Alpha Detail slider is set to 0, the non-zero portions of the alpha channel of the diffuse color
are ignored, and the opaque portions of the object cast a shadow. If it is set to 1, the alpha channel
determines the density of the shadow the object casts.
NOTE: The OpenGL renderer ignores alpha channels for shadow rendering, resulting in a
shadow always being cast from the entire object. Only the software renderer supports alpha
in the shadow maps.
The following examples for Alpha Detail and Color Detail cast a shadow using this image. It is a
green-red gradient from left to right. The outside edges are transparent, and inside is a small semi-
transparent circle.
Alpha Detail set to 1; the alpha channel determines the density of the shadow
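The Alpha Detail behavior for a single occluding pixel can be sketched like this. The blend between the two endpoint behaviors is illustrative only, not Fusion's internal formula:

```python
def shadow_density(alpha, alpha_detail):
    """alpha: the diffuse alpha of the occluding pixel, in [0, 1].
    alpha_detail: the Alpha Detail slider, in [0, 1]."""
    # At 0, any non-zero alpha occludes fully; at 1, alpha sets the
    # density directly; in between, blend the two behaviors.
    hard = 1.0 if alpha > 0 else 0.0
    return hard + (alpha - hard) * alpha_detail

print(shadow_density(0.25, 0))  # 1.0  (semi-transparent pixel still fully occludes)
print(shadow_density(0.25, 1))  # 0.25 (density follows the alpha channel)
```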
Color Detail
Color Detail is used to color the shadow with the object’s diffuse color. Increasing the Color Detail
slider from 0 to 1 brings in more diffuse color and texture into the shadow.
Saturation
Saturation will allow the diffuse color texture to be used to define the density of the shadow without
affecting the color. This slider lets you blend between the full color and luminance only.
Illumination Models
Now that you understand the different components that make up a material or shader, we’ll look at
them more specifically. Illumination models are advanced materials for creating realistic surfaces like
plastic, wood, or metal. Each illumination model has advantages and disadvantages, which make it
appropriate for particular looks. An illumination model determines how a surface reacts to light, so
these nodes require at least one light source to affect the appearance of the object. Four different
illumination models can be found in the Nodes > 3D > Material menu.
Illumination models left to right: Standard, Blinn, Phong, Cook-Torrance, and Ward.
Standard
The Standard material provides a default Blinn material with basic control over the diffuse, specular,
and transmittance components. It only accepts a single texture map for the diffuse component with the
alpha used for opacity. The Standard Material controls are found in the Material tab of all nodes that
load or create geometry. Connecting any node that outputs a material to that node’s Material Input will
override the Standard material, and the controls in the Material tab will be hidden.
Phong
The Phong material produces the same diffuse result as Blinn, but with wider specular highlights at
grazing incidence. Phong is also able to make sharper specular highlights at high exponent levels.
Cook-Torrance
The Cook-Torrance material combines the diffuse illumination model of the Blinn material with a
combined microfacet and Fresnel specular model. The microfacets need not be present in the mesh or
bump map; they are represented by a statistical function, Roughness, which can be mapped. The
Fresnel factor attenuates the specular highlight according to the Refractive Index, which can
be mapped.
Ward
The Ward material shares the same diffuse model as the others but adds anisotropic highlights, ideal
for simulating brushed metal or woven surfaces, as the highlight can be elongated in the U or V
directions of the mapping coordinates. Both the U and V spread functions are mappable.
This material does require properly structured UV coordinates on the meshes it is applied to.
Textures
Texture maps modify the appearance of a material on a per-pixel basis. This is done by connecting an
image or other material to the inputs on the Material nodes in the Node Editor. When a 2D image is
used, the UV mapping coordinates of the geometry are used to fit the image to the geometry, and
when each pixel of the 3D scene is rendered, the material will modify the material input according to
the value of the corresponding pixel in the map.
TIP: UV Mapping is the method used to wrap a 2D image texture onto 3D geometry. Similar to
X and Y coordinates in a frame, U and V are the coordinates for textures on 3D objects.
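The UV lookup described in this tip can be sketched with simple nearest-neighbor sampling. Real renderers interpolate between texels, but the indexing idea is the same:

```python
def sample_texture(texture, u, v):
    """texture: 2D list of pixels; u, v in [0, 1], with v = 0 at the top."""
    height = len(texture)
    width = len(texture[0])
    # Map the normalized coordinate to a texel index, clamping at the edge.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

checker = [
    ["black", "white"],
    ["white", "black"],
]
print(sample_texture(checker, 0.1, 0.1))  # 'black' (top-left quadrant)
print(sample_texture(checker, 0.9, 0.1))  # 'white' (top-right quadrant)
```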
Texture maps are used to modify various material inputs, such as diffuse color, specular color, specular
exponent, specular intensity, bump map, and others. The most common use of texture maps is for the
diffuse color/opacity component.
A node that outputs a material is frequently used, instead of an image, to provide other shading
options. Materials passed between nodes are RGBA samples; they contain no other information about
the shading or textures that produced them.
Composite Materials
Building complex materials is as easy as connecting the output of a Material node to one of the
Material inputs of another Material or Texture node. When a Material input is supplied, its RGBA values
are used per pixel as a texture, just as with a 2D image. This allows for very direct compositing
of shaders.
For instance, if you want to combine an anisotropic highlight with a Blinn material, you can take the
output of the Blinn, including its specular, and use it as the diffuse color of the Ward material. Or, if you
do not want the output of the Blinn to be relit by the Ward material, you can use the Channel Boolean
material to add the Ward material’s anisotropic specular component to the Blinn material with a greater
degree of control.
To produce reflections with real-time interactive feedback at a quality level appropriate for production,
environment maps trade away some functionality compared with slower but physically accurate
raytraced rendering. Environment-mapped reflections and refractions do not provide self-reflection or
any other kind of interaction between different objects. Because environment maps assume the
reflected surroundings are infinitely distant, objects cannot interact with themselves (e.g., the
reflections on the handle of a teapot will not show the body of the teapot), and objects using the
same cube map will not inter-reflect with each other; two neighboring objects would not
reflect each other. A separate cube map must be rendered for each object.
The Reflect node outputs a material that can be applied to an object directly, but the material does not
contain an illumination model. As a result, objects textured directly by the Reflect node will not
respond to lights in the scene. For this reason, the Reflect node is usually combined with the Blinn,
Cook-Torrance, Phong, or Ward nodes.
Reflection
Reflection outputs a material making it possible to apply the reflection or refraction to other materials
either before or after the lighting model with different effects.
Refraction
Refraction occurs only where there is transparency in the background material, which is generally
controlled through the Opacity slider and/or the alpha channel of any material or texture used for the
Background Material Texture input.
Working with reflection and refraction can be tricky. Here are some techniques to make it easier:
– Typically, use a small amount of reflection, between 0.1 and 0.3 strength. Higher values are used
for surfaces like chrome.
– Bump maps can add detail to the reflections/refractions. Use the same bump map in the
Illumination model shader that you combine with Reflect.
– When detailed reflections are not required, use a relatively small cube map, such
as 128 x 128 pixels, and blur out the image.
– The alpha of refracted pixels is set to 1 even though the pixels are technically transparent.
Refracted pixels increase their alpha by the reflection intensity.
– If the refraction is not visible even when a texture is connected to the Refraction Tint Material
input, double-check the alpha/opacity values of the background material.
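The first tip describes the usual strength-weighted blend between the shaded surface and the environment sample. A hypothetical sketch of that mix:

```python
def blend_reflection(base_rgb, env_rgb, strength):
    """Mix an environment-map sample over the base shading by a strength
    factor: 0.1-0.3 for most surfaces, near 1.0 for chrome."""
    return tuple(b + (e - b) * strength for b, e in zip(base_rgb, env_rgb))

base = (0.30, 0.10, 0.10)  # shaded surface color
env = (0.90, 0.95, 1.00)   # sample looked up from the environment cube map

print(blend_reflection(base, env, 0.2))  # subtle, production-typical reflection
print(blend_reflection(base, env, 1.0))  # mirror-like chrome
```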
Bump Maps
Bump mapping helps add details and small irregularities to the surface appearance of an object. Bump
mapping does not modify the geometry of the object or change its silhouette; it only changes how
lighting interacts with the surface.
To apply a bump map, you typically connect an image containing the bump information to the
BumpMap node. The bump map is then connected to the Bump input of a Material node. There are
two ways to create a bump map for a 3D material: a height map and a bump map.
TIP: Normals are generated by 3D modeling and animation software as a way to trick the eye
into seeing smooth surfaces, even though the geometry used to create the models uses only
triangles to build the objects.
Normals are 3 float values (nx, ny, nz) whose components are in the range [–1, +1]. Because you can
store only positive values in Fusion’s integer images, the normals are packed from the range [–1, +1] to
the range [0, 1] by multiplying by 0.5 and adding 0.5. You can use Brightness Contrast or a Custom
node to do the unpacking.
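The pack/unpack math above is easy to verify. A minimal Python sketch of the round trip (the same arithmetic a Brightness Contrast or Custom node performs when unpacking):

```python
def pack_normal(n):
    """Map a normal component from [-1, +1] to [0, 1] for storage."""
    return n * 0.5 + 0.5

def unpack_normal(p):
    """Recover the [-1, +1] component from a packed [0, 1] value."""
    return (p - 0.5) * 2.0  # equivalently: p * 2.0 - 1.0

nx, ny, nz = -1.0, 0.0, 1.0
packed = [pack_normal(c) for c in (nx, ny, nz)]    # [0.0, 0.5, 1.0]
restored = [unpack_normal(p) for p in packed]      # [-1.0, 0.0, 1.0]
```

The packed range fits in an unsigned integer image; unpacking simply reverses the scale and offset.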
If you were to connect a bump map directly to the bump map input of a material, it would result in
incorrect lighting, because Fusion uses a different coordinate system for the lighting calculation;
Fusion prevents you from doing this. You must first use a BumpMap node, which expects a packed
bump map or height map and converts it so the bump mapping works correctly.
If your bump mapping doesn’t appear correct, here are a few things to look for:
– Make sure you have the nodes connected correctly. The height/bump map should connect into a
BumpMap and then, in turn, should connect into the bump map input on a material.
– Change the precision of the height map to get less banding in the normals. For low frequency
images, float32 may be needed.
– Adjust the Height scale on the BumpMap. This scales the overall effect of the bump map.
– Make sure you set the type to HeightMap or BumpMap to match the image input. Fusion cannot
detect which type of image you have.
– Check to ensure High Quality is on (right-click in the transport controls bar and choose High
Quality from the contextual menu). Some nodes like Text+ produce an anti-aliased version in High
Quality mode that will substantially improve bump map quality.
– If you are using an imported normal map image, make sure it is packed [0–1] in RGB and that it is in
tangent space. The packing can be done in Fusion, but the conversion to tangent space cannot.
Projection Mapping
Projection is a technique for texturing objects using a camera or projector node. This can be useful for
texturing objects with multiple layers, applying a texture across multiple separate objects, projecting
background shots from the camera’s viewpoint, image-based rendering techniques, and much more.
There are three ways to do projection mapping in Fusion.
Camera projection used with a Catcher node (example from an older version of Fusion)
TIP: Projected textures can be allowed to slide across an object if the object moves relative
to the Projector 3D. Alternatively, by grouping the two together with a Merge3D, they can
be moved as one and the texture will remain locked to the object.
In the following section of a much larger composition, an image (the Loader1 node) is projected into 3D
space by mapping it onto five planes (Shape3D nodes renamed ground, LeftWall, RightWall, Building,
and Background), which are positioned as necessary within a Merge3D node to apply reflections onto
a 3D car to be composited into that scene.
The output of the Merge3D node used to assemble those planes into a scene is then fed to a UV Map
node, which in conjunction with a Camera3D node correctly projects all of these planes into 3D space
so they appear as they would through that camera in the scene. Prior to this UVMap projection, you
can see the planes arranged in space at left, where each plane has the scene texture mapped to it. At
right is the image after the UVMap projection, where you can see that the scene once again looks
“normal,” with the exception of a car-shaped hole introduced to the scene.
Five planes positioning a street scene in 3D space in preparation for UV Projection (left), and the UV Map
node being used to project these planes so they appear as through a camera in the scene (right)
Geometry
There are five nodes used for creating geometry in Fusion. These nodes can be used for a variety of
purposes. For instance, the Image Plane 3D is primarily used to place image clips into a 3D scene,
while the Shape 3D node can add additional building elements to a 3D set, and Text 3D can add
three-dimensional motion graphics for title sequences and commercials. Although each node is covered in
more detail in the “3D Nodes” chapter, a summary of the 3D creation nodes is provided below.
Cube 3D
The Cube 3D creates a cube with six inputs that allow mapping of different textures to each of the
cube’s faces.
Image Plane 3D
The Image Plane 3D is the basic node used to place a 2D image into a 3D scene with an automatically
scaled plane.
Shape 3D
This node includes several basic primitive shapes for assembling a 3D scene. It can create planes,
cubes, spheres, cylinders, cones, and toruses.
Text 3D
The Text 3D is a 3D version of the Text+ node. This version supports beveling and extrusion but does
not have support for the multi-layered shading model available from Text+.
Particles
When a pRender node is connected to a 3D view, it will export its particles into the 3D environment.
The particles are then rendered using the Renderer3D instead of the Particle renderer. For more
information, see Chapter 113, “Particle Nodes” in the DaVinci Resolve Reference Manual or Chapter 52
in the Fusion Reference Manual.
Visible
If the Visibility checkbox is not selected, the object will not be visible in a viewer, nor will it be
rendered into the output image by a Renderer3D. A non-visible object does not cast shadows. This is
usually enabled by default, so objects that you create are visible in both the viewers and final renders.
Unseen by Cameras
If the Unseen by Cameras checkbox is selected, the object will be visible in the viewers but invisible
when viewing the scene through a camera, so the object will not be rendered into the output image by
a Renderer3D. Shadows cast by an Unseen object will still be visible.
FBX Exporter
You can export a 3D scene from Fusion to other 3D packages using the FBX Exporter node. On
render, it saves geometry, cameras, lights, and animation into different file formats such as .dae or .fbx.
The animation data can be included in one file, or it can be baked into sequential frames. Textures and
materials cannot be exported.
Using Text3D
The Text3D node is probably the most ubiquitous node employed by motion graphics artists looking
to create titles and graphics in Fusion. It’s a powerful node filled with enough controls to create
nearly any text effect you might need, all in three dimensions. This section seeks to get you started
quickly with what the Text3D node is capable of. For more information, see Chapter 90, “3D Nodes” in
the DaVinci Resolve Reference Manual or Chapter 29 in the Fusion Reference Manual.
TIP: If you click the Text icon in the toolbar to create a Text3D node, and then you click it
again while the Text3D node you just created is selected, a Merge3D node is automatically
created and selected to connect the two. If you keep clicking the Text icon, more Text3D
nodes will be added to the same selected Merge3D node.
Entering Text
When you select a Text3D node and open the Inspector, the Text tab shows a “Styled Text” text entry
field at the very top into which you can type the text you want to appear onscreen. Below, a set of
overall styling parameters are available to set the Font, Color, Size, Tracking, and so on. All styling you
do in this tab affects the entire set of text at once, which is why you need multiple text objects if you
want differently styled words in the same scene.
By default, all text created with the Text3D node is flat, but you can use the Extrusion Style, Extrusion
Depth, and various Bevel parameters to give your text objects thickness.
Combining Text3D nodes using Merge3D nodes doesn’t just create a scene; it also enables you to
transform your text objects either singly or in groups:
– Selecting an individual Text3D node or piece of text in the viewer lets you move that one text
object around by itself, independently of other objects in the scene.
– Selecting a Merge3D node exposes a transform control that affects all objects connected to that
Merge3D node at once, letting you transform the entire scene.
Layout Parameters
The Layout tab presents parameters you can use to choose how text is drawn: on a straight line, a
frame, a circle, or a custom spline path, along with contextual parameters that change depending on
which layout you’ve selected (all of which can be animated).
“Sub” Transforms
Another Transform tab (which the documentation has dubbed the “Sub” Transform tab) lets you apply a
separate level of transform to either characters, words, or lines of text, which lets you create even
more layout variations. For example, choosing to Transform by Words lets you change the spacing
between words, rotate each word, and so on. You can apply simultaneous transforms to characters,
words, and lines, so you can use all these capabilities at once if you really need to go for it. And, of
course, all these parameters are animatable.
Shading
The Shading tab lets you shade or texture a text object using standard Material controls.
Opaque Alpha
When the Is Matte checkbox is enabled, the Opaque Alpha checkbox is displayed. Enabling this
checkbox sets the alpha value of the matte object to 1. Otherwise the alpha, like the RGB, will be 0.
Infinite Z
When the Is Matte checkbox is enabled, the Infinite Z checkbox is displayed. Enabling this checkbox
sets the value in the Z-channel to infinite. Otherwise, the mesh will contribute normally to the
Z-channel.
Matte objects cannot be selected in the viewer unless you right-click in the viewer and choose 3D
Options > Show Matte Objects in the contextual menu. However, it’s always possible to select the
matte object by selecting its node in the node tree.
The Material ID is a value assigned to identify what material is used on an object. The Object ID is
roughly comparable to the Material ID, except it identifies objects and not materials.
Both the Object ID and Material ID are assigned automatically in numerical order, beginning with 1. It is
possible to set the IDs to the same value for multiple objects or materials even if they are different.
Override 3D offers an easy way to change the IDs for several objects. The Renderer will write the
assigned values into the frame buffers during rendering, when the output channel options for these
buffers are enabled. It is possible to use a value range from 0 to 65534. Empty pixels have an ID of 0,
so although it is possible to assign a value of 0 manually to an object or material, it is not advisable
because a value of 0 tells Fusion to set an unused ID when it renders.
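Because the IDs are written as plain integer values into the frame buffers, isolating one object or material downstream amounts to a per-pixel equality test. A minimal Python sketch of that idea, using a hypothetical mock ObjectID buffer (inside Fusion you would use nodes that read these channels rather than code):

```python
# A tiny mock ObjectID buffer; 0 marks empty pixels, IDs start at 1.
object_id = [[0, 1, 1],
             [0, 2, 1],
             [3, 2, 2]]

def id_matte(id_buffer, target_id):
    """Return a 0/1 matte selecting pixels whose ID matches target_id."""
    return [[1.0 if px == target_id else 0.0 for px in row]
            for row in id_buffer]

matte = id_matte(object_id, 2)
# matte == [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
```

This also shows why assigning 0 manually is inadvisable: a matte for ID 0 would select empty pixels along with the object.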
A World Position Pass rendering of a scene with its center at (0,0,0). The actual image is on the top left
3D Scene Input
Nodes that utilize the World Position channel are located under the Position category, and they offer a
Scene input. VolumeFog and Z to WorldPos require a camera input matching the camera that rendered
the Position channels, which can be either a Camera3D or a 3D scene containing a camera. Just as in
the Renderer3D, you can choose which camera to use if more than one is in the scene. The VolumeFog
can render without a camera input from the Node Editor if the world space Camera Position inputs are
set to the correct values. VolumeMask does not use a camera input.
There are three Position nodes that can take advantage of World Position Pass data.
– Nodes > Position > Volume Fog
– Nodes > Position > Volume Mask
– Nodes > Position > Z to World
The “Dark Box”
Empty regions of the render will have the Position channel incorrectly initialized to (0,0,0). To get the
correct Position data, add a bounding sphere or box to your scene to create distant values and allow
the Position nodes to render correctly.
Point Clouds
The Point Cloud node is designed to work with locator clouds generated from 3D tracking software.
3D camera tracking software, such as SynthEyes and PF Track, will often generate hundreds or even
thousands of tracking points. Seeing these points in the scene and referencing their position in 3D and
screen space is important to assist with lining up live action and CG, but bringing each point in as an
individual Locator3D would impact performance dramatically and clutter the node tree.
The Point Cloud node can import point clouds written into scene files from match moving or 3D
scanning software.
The entire point cloud is imported as one object, which is a significantly faster approach.
If a point that matches the name you entered is found, it will be selected in the point cloud and
highlighted yellow.
TIP: The Point Cloud Find function is a case-sensitive search. A point named “tracker15” will
not be found if the search is for “Tracker15”.
Publishing a Point
If you want to use a point’s XYZ positions for connections to other controls in the scene, you can
publish the point. This is useful for connecting objects to the motion of an individual tracker. To publish
a point, right-click it and choose Publish from the contextual menu.
3D Camera Tracking
This chapter presents an overview of using the Camera Tracker node and the workflow
it involves. Camera tracking is used to create a virtual camera in Fusion’s 3D
environment based on the movement of a live-action camera in a clip. You can then
use the virtual camera to composite 3D models, text, or 2D images into a live-action
clip that has a moving camera.
For more information on other types of tracking in Fusion, see Chapter 83, “Using the
Tracker Node” in the DaVinci Resolve Reference Manual or Chapter 22 in the Fusion
Reference Manual.
Contents
Introduction to Tracking 599
What Is 3D Camera Tracking? 599
How Camera Tracking Works 599
The Camera Tracking Workflow 600
Clips That Don’t Work Well for Camera Tracking 600
Outputting from the Camera Tracker 601
2D View 602
3D View 602
Auto-Tracking in the Camera Tracker 603
Increasing Auto-Generated Tracking Points 603
Masking Out Objects 604
Matching the Live-Action Camera 606
Running the Solver 606
How Do You Know When to Stop? 607
Using Seed Frames 608
Cleaning Up Camera Solves 608
Introduction to Tracking
Tracking is one of the most useful and essential techniques available to a compositor. It can roughly be
defined as the creation of a motion path from analyzing a specific area in a clip over time. Fusion
provides a variety of different tracking nodes that let you analyze different kinds of motion.
Each tracker type has its own chapter in this manual. This chapter covers the tracking techniques with
the Camera Tracker node.
Once you complete the camera tracking workflow, an animated camera and point cloud are exported from the Inspector
into a 3D composite. The Camera Tracker encompasses this complete workflow within one tool. Five
tabs at the top of the Inspector are roughly laid out in the order in which you’ll use them. These five
tabs are:
– Track: Used to track a clip.
– Camera: Configures the basic Camera parameters.
– Solve: Calculates the 3D placement of the 2D tracking points and reconstructs the camera.
– Export: Generates a Camera 3D node, a Point Cloud node, and a 3D scene in the node tree.
– Options: Used to customize the look of the onscreen overlays.
TIP: Some shots that cannot be tracked using Fusion’s Camera Tracker can instead be tracked in
dedicated 3D camera-tracking software like 3D Equalizer and PF Track. Camera tracking data
from these applications can then be imported into the Camera3D node in Fusion.
Note that the selection of tracks in the 2D view and their corresponding locators (in the point cloud) in
the 3D view are synchronized. There are also viewer menus available in both the 2D and 3D views to
give quick control of the functionality of this tool.
3D View
The second output of the Camera Tracker node displays a 3D scene. To view this, connect this 3D
output to a 3D Transform or Merge 3D node and view that tool.
Bi-Directional Tracking
When performing a track, you can enable the Bidirectional Tracking checkbox, which first tracks
forward from the start of the clip, and then tracks a second pass in reverse. This two-pass approach
can potentially extend the duration of any given point by re-analyzing points initially identified in the
forward pass. Bidirectional tracking takes longer, but it is usually worth it; unless you are very short
on time, there is little reason not to enable it.
Masks used to omit the moving clouds and waves from being tracked by the Camera Tracker
By doing this, the tracker ignores the waves of the water and moving clouds. Unlike drawing a mask
for an effect, the mask in this case does not have to be perfect. You are just trying to identify the rough
area to occlude from the tracking analysis.
The original image to be tracked (left), and the occlusion mask of the clouds and water (right)
TIP: If there’s a lot of motion in a shot, you can use the Tracker or Planar Tracker nodes to
make your occlusion mask follow the area you want to track. Just remember that, after using
the PlanarTracker or PlanarTransform node to transform your mask, you need to use a Bitmap
node to turn it back into a mask that can be connected to the Camera Tracker node’s Track
Mask input.
If the actual values are not known, try a best guess. The solver attempts to find a camera near these
parameters, so you help it by providing parameters as close to those of the live-action camera as
possible. The more accurate the information you provide, the more accurate the solver calculation. At a minimum, try
to at least choose the correct camera model from the Film Gate menu. If the film gate is incorrect, the
chances that the Camera Tracker correctly calculates the lens focal length become very low.
Unlike the Track and Solve tabs, the Camera tab does not include a button at the top of the Inspector
that executes the process. There is no process to perform on the Camera tab once you configure the
camera settings. After you set the camera settings to match the live-action camera, you move to the
Solve tab.
– A good balance of tracks across objects at different depths, with not too many tracks
in the distant background or sky (these do not provide any additional perspective
information to the solver).
– Tracks distributed evenly over the image and not highly clustered on a few objects or
one side of the image.
– Track starts and ends that are staggered over time, with not too many tracks ending on
the same frame.
IMPORTANT
If you are not familiar with camera tracking, it may be tempting to try to directly edit the
resulting 3D splines in the Spline Editor in order to improve a solved camera’s motion path.
This option should be used as an absolute last resort. It’s preferable, instead, to modify the 2D
tracks being fed into the solver.
Hovering the pointer over any tracking point displays a large metadata tooltip that includes the solve
error for the point. For a more visual representation of the accuracy, you can enable the display of
3D locators in the viewer by clicking the Reprojection Locators button in the viewer toolbar.
After a solve, the Camera Tracker toolbar can display Reprojection locators
When the tracking points are converted into a point cloud by the solver, it creates 3D reprojection
locators for each tracking point. These Reprojection locators appear as small X marks near the
corresponding tracking point. The more the two objects overlap, the lower the solve error.
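Conceptually, the solve error for a point is the distance between its 2D track and the 3D locator reprojected through the solved camera. The following Python sketch uses an idealized pinhole projection to illustrate the idea; it is not the Camera Tracker’s actual internal math:

```python
import math

def project(point3d, focal):
    """Idealized pinhole projection of a camera-space 3D point to 2D."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

def reprojection_error(track2d, locator3d, focal):
    """Distance between the tracked 2D point and the reprojected locator."""
    px, py = project(locator3d, focal)
    tx, ty = track2d
    return math.hypot(px - tx, py - ty)

# A locator that reprojects close to its 2D track has a low solve error,
# which is why overlapping marks and locators indicate a good solve.
err = reprojection_error((0.102, 0.05), (1.0, 0.5, 10.0), 1.0)
```

Averaging this distance over all tracks and frames gives a figure comparable to the Average Solve Error shown in the Inspector.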
The goal when filtering the trackers is to remove all red tracker marks and keep all the green marks.
Whether you decide to keep both the yellow and orange or just the yellow is more a question of how
many marks you have in the clip. You produce a better solve if you retain only the yellow marks;
however, if you do not have enough marks to calculate the 3D scene, you will have to keep some of
the better orange marks as well.
– Keep all tracks with motion that’s completely determined by the motion of the
live‑action camera.
– Delete tracks on moving objects or people and tracks that have parallax issues.
– Delete tracks that are reflected in windows or water.
– Delete tracks of highlights that move over a surface.
– Delete tracks that do not do a good job of following a feature.
– Delete tracks that follow false corners created by the superposition of
foreground and background layers.
– Consider deleting tracks that correspond to locators that the solver has reconstructed
at an incorrect Z-depth.
Deleting Tracks
You can manually delete tracks in the viewer or use filters to select groups of tracks. When deleting
tracks in the viewer, it is best to modify the viewer just a bit to see the tracks more clearly. From the
Camera Tracker toolbar above the viewer, clicking the Track Trails button hides the trails of the
tracking points. This cleans up the viewer to show points only, making it easier to make selections.
At the right end of the toolbar, clicking the Darken Image button slightly darkens the image, again
making the points stand out a bit more in the viewer.
To begin deleting poor-quality tracks, you can drag a selection box around a group of tracks you want
to remove and then either click the Delete Tracks button in the Camera Tracker toolbar or press
Command-Delete.
You can hold down the Command key to select discontiguous tracking marks that are not near each
other. If you accidentally select tracks you want to keep, continue holding the Command key and drag
over the selected tracks to deselect them.
When deleting tracks, take note of the current Average Solve Error at the top of the Inspector and then
rerun the solver. It is better to delete small groups of tracks and then rerun the solver than to delete
one or two large sections. As mentioned previously, deleting too many tracks can have adverse
effects and increase the Average Solve Error.
For instance, it is generally best to run the solver using tracks with longer durations. Since shorter
tracks tend to be less accurate when calculating the camera, you can remove them using the Filter
section in the Inspector.
Increasing the Minimum Track Length parameter sets a threshold that each tracker must meet. Tracks
falling below the threshold appear red. You can then click the Select Tracks Satisfying Filters button to
select the shorter tracks and click Delete from the Options section in the Inspector.
TIP: In some cases, the clip you are tracking may not have the ground in the frame.
If necessary, you can set the Selection menu to XY, which indicates you are selecting points
on a wall.
To work with the 3D scene, you can select the Merge 3D and load it into one of the viewers, and then
select the Camera Tracker Renderer and load that into a second viewer.
When the Merge 3D is selected, a toolbar above the viewer can add 3D test geometry like an
image plane or cube to verify the precision of the 3D scene and camera. You can then connect
actual 3D elements into the Merge 3D as you would any manually created 3D scene. The point
cloud can help align and guide the placement of objects, and the CameraTracker Renderer is a
Renderer 3D node with all the same controls.
Use the point cloud to accurately place different elements into a 3D scene
At this point, there is no need for the Camera Tracker node unless you find that you need to rerun
the solver. Otherwise, you can save some memory by deleting the Camera Tracker node.
Particle Systems
This chapter is designed to give you a brief introduction to the creation of fully
3D particle systems, one of Fusion’s most powerful features. Once you understand
these basics, for more Information on each particle system node that’s available,
see Chapter 113, “Particle Nodes” in the DaVinci Resolve Reference Manual or
Chapter 52 in the Fusion Reference Manual.
Contents
Introduction to Particle Systems 616
Anatomy of a Simple Particle System 617
Particle System Distribution 619
Particle Nodes Explained by Type 620
Emitters 620
Forces 621
Compositing 621
Rendering 621
Example Particle Systems 622
The three most fundamental nodes required for creating particle systems are found on the toolbar.
As with the 3D nodes to the right, these are arranged, from left to right, in the order in which they must
be connected to work, so even if you can’t remember how to hook up a simple particle system, all you
need to do is click the three particle system nodes from left to right to create a functional
particle system.
However, these three nodes are only the tip of the iceberg. Opening the Particle category in the
Effects Library reveals many, many particle nodes designed to work together to create increasingly
complex particle interactions.
If your needs are more complicated, you can combine two or more pEmitter nodes using a pMerge
node (the particle system version of a Merge node), to create compound particle systems where
multiple types of particles combine with one another to create a result.
If you’re trying to create particle systems with more natural effects, you can add “forces” to each
emitter. These forces are essentially physics or behavioral simulations that automatically cause the
particles affected by them to be animated with different kinds of motion, or to be otherwise affected
by different objects within scenes.
Customizing the effect of pEmitter nodes using different forces to add complexity to the particle animation
You can also attach the following types of nodes to a pEmitter node to deeply customize a
particle system:
– Attach a 2D image to a pEmitter node to create highly customized particle shapes. Make sure your
image has an appropriate alpha channel.
– Attach a Shape3D or other 3D geometry node to a pEmitter node to create a more specific region
of emission (by setting Region to Mesh in the Region tab).
The above examples assume that you’ll output 2D renders to combine into the rest of a
2D composition. However, because particle systems are fully 3D, you also have the option of
outputting your particle system in such a way as to be used from within other 3D scenes
in your composition.
The Output Mode of the pRender node, at the very top of the controls exposed in the Inspector, can
be set to either 2D or 3D, depending on whether you want to combine the result of the particle system
with 2D layers or with objects in a 3D scene.
If you connect a pRender node to a Merge3D node, the Output Mode is locked to 3D, meaning that 3D
geometry is output by the pRender node for use within the Merge3D node’s scene. This means that
the particles can be lit, they can cast shadows, and they can interact with 3D objects within that scene.
NOTE: Once you set the pRender node to either 2D or 3D and make any change to the
nodes in the Inspector, you cannot change the output mode.
Particle systems can be positioned and rotated by loading the pEmitter nodes that generate particles
into a viewer and using the onscreen 3D position and Rotation controls provided to move the particle
system around.
Alternatively, you can use the controls of the pEmitter’s Region tab in the Inspector to adjust
Translation, Rotation, and Pivot. All these controls can be animated.
Emitters
pEmitter nodes are the source of all particles. Each pEmitter node can be set up to generate a single
type of particle with enough customization so that you’ll never create the same type of particle twice.
Along with the pRender node, this is the only other node that’s absolutely required to create a
particle system.
Forces
Many of the particle nodes found in the Particles bin of the Effects Library are “forces” that enhance a
particle simulation by simulating the effect of various forces acting upon the particles generated by
an emitter.
Some forces, including pDirectionalForce, pFlock, pFriction, pTurbulence, and pVortex, are rules that
act upon particles without the need for any other input. These are simply “acts of nature” that cause
particles to behave in different ways.
Other forces, such as pAvoid, pBounce, pFollow, and pKill, work in conjunction with 3D geometry in a
scene such as shapes or planes to cause things to happen when a particle interacts or comes near
that geometry. Note that some of the forces described previously can also use geometry to direct
their actions, so these two categories of forces are not always that clear-cut.
Compositing
The pMerge node is a simple way to combine multiple emitters so that different types of particles work
together to create a sophisticated result. The pMerge node has no parameters; you simply connect
emitters to it, and they’re automatically combined.
Rendering
The pRender node is required whether you’re connecting a particle system’s output to a 2D Merge
node or to a Merge3D node for integration into a 3D scene. Along with the pEmitter node, this is the
only other node that’s absolutely required to create a particle system.
– Controls: The main controls that let you choose whether to output 2D or 3D image data, and
whether to add blur or glow effects to the particle systems, along with a host of other details
controlling how particles will be rendered.
– Scene: These controls let you transform the overall particle scene all at once.
Different particle system presets in the Templates category of the Bins window in Fusion Studio
Simply drag and drop any of the particle presets into the Node Editor, load the last node into the
viewer, and you’ll see how things are put together.
Advanced Compositing Techniques
Chapter 28
Optical Flow and Stereoscopic Nodes
This chapter covers the numerous stereoscopic and optical flow-based nodes
available in Fusion and their related workflows.
Contents
Overview 625
Stereoscopic Overview 625
Optical Flow Overview 626
Toolset Overview 626
Working with Aux Deep Channels 627
Optical Flow Workflows 628
OpticalFlow 628
TimeSpeed, TimeStretcher 628
SmoothMotion 628
Repair Frame, Tween 628
Advanced Optical Flow Processing 628
Stereoscopic Workflows 629
Stereo Camera 629
Stereo Materials 630
Disparity 631
NewEye, StereoAlign 631
DisparityToZ, ZToDisparity 631
Separate vs. Stack 631
Overview
Fusion includes 3D stereoscopic and optical flow-based nodes, which can work together or
independently of each other to create, repair, and enhance 3D stereoscopic shots.
Stereoscopic Overview
All stereoscopic features are fully integrated into Fusion’s 3D environment. Stereoscopic images can
be created using a single camera, which supports eye separation and convergence distance, and a
Renderer 3D for the virtual left and right eye. It is also possible to combine two different cameras for a
stereo camera rig.
Stereoscopic nodes can be used to solve 3D stereoscopic shooting issues, like 3D rig misalignment,
image mirror polarization differences, camera timing sync issues, color alignment, convergence, and
eye separation issues. The stereo nodes can also be used for creating depth maps.
NOTE: The stereoscopic nodes in the Fusion page work independently of the stereoscopic
tools in the other DaVinci Resolve pages.
Toolset Overview
Here is an overview of the available nodes.
Stereoscopic Nodes
– Stereo > Anaglyph: Combines stereo images to create a single anaglyph image for viewing.
– Stereo > Combiner: Stacks two separate stereo images into a single stacked pair,
so they can be processed together.
– Stereo > Disparity: Generates disparity between left/right images.
– Stereo > DisparityToZ: Converts disparity to Z-depth.
– Stereo > Global Align: Shifts each stereo eye manually to do basic alignment of stereo images.
– Stereo > NewEye: Replaces left and/or right eye with interpolated eyes.
– Stereo > Splitter: Separates a stacked stereo image into left and right images.
– Stereo > StereoAlign: Adjusts vertical alignment, convergence, and eye separation.
– Stereo > ZToDisparity: Converts Z-depth to disparity.
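DisparityToZ and ZToDisparity are inverse conversions built on standard stereo geometry: for an idealized parallel rig, depth is proportional to focal length times eye separation divided by disparity. A simplified Python sketch of that relationship (the actual nodes expose more parameters and handle more cases than this):

```python
def disparity_to_z(disparity, focal_length, eye_separation):
    """Depth from disparity for an idealized parallel stereo rig."""
    return focal_length * eye_separation / disparity

def z_to_disparity(z, focal_length, eye_separation):
    """Inverse conversion: disparity from depth."""
    return focal_length * eye_separation / z

# Round trip: converting a disparity to Z and back recovers the disparity.
d = 0.05
z = disparity_to_z(d, focal_length=35.0, eye_separation=6.5)
assert abs(z_to_disparity(z, 35.0, 6.5) - d) < 1e-9
```

The reciprocal relationship explains why small disparities correspond to distant objects and why near-zero disparity values make depth estimates unstable.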
There are several ways to retrieve or generate those extra channels within Fusion.
For example:
– The Renderer3D node is capable of generating most of these channels.
– The OpticalFlow node generates the Vector and BackVector channels, and then TimeStretcher
and TimeSpeed can make use of these channels.
– The Disparity node generates the Disparity channels, and then DisparityToZ, NewEye, and
StereoAlign nodes can make use of the Disparity channels.
– The OpenEXR format can be used to import or export aux channels into Fusion by specifying a
mapping from EXR attributes to Fusion Aux channels using CopyAux.
OpticalFlow
The Optical Flow node generates the Vector and BackVector data. Typically, for optimal performance,
you connect the Optical Flow output to a Saver to save the image as OpenEXR files with the motion
vectors stored in an aux channel.
TimeSpeed, TimeStretcher
You can create smooth constant or variable slow-motion effects using the TimeSpeed or
TimeStretcher nodes. When Optical Flow motion vectors are available in the aux channel of an image,
enabling Flow mode in the TimeSpeed or TimeStretcher Interpolation settings will take advantage of
the Vector and BackVector channels. For the Flow mode to work, there must be either an upstream
OpticalFlow node generating the hidden channels or an OpenEXR Loader bringing these channels in.
These nodes use the Vector/BackVector data to do interpolation on the motion channel and then
destroy the data on output since the input Vector/BackVector channels are invalid. For more detail on
TimeSpeed or TimeStretcher, see Chapter 110, “Miscellaneous Nodes” in the DaVinci Resolve
Reference Manual and Chapter 49 in the Fusion Reference Manual.
SmoothMotion
SmoothMotion can be used to smooth the Vector and BackVector channels or smooth the disparity in
a stereo 3D clip. This node passes through, modifies, or generates new aux channels, but does not
destroy them.
The workflow is to load a left and right stereo image pair and process those in the Disparity node.
Once the Disparity map is generated, other nodes can process the images.
TIP: When connecting stereo pairs in the node tree, make sure that the left and right images
are connected to the left and right inputs of the Disparity node.
Disparity generation, like Optical Flow, is computationally expensive, so the general idea is that you
can pre-generate these channels, either overnight or on a render farm, and save them into an
EXR sequence.
The toolset is designed around this philosophy.
Stereo Camera
There are two ways to set up a stereoscopic camera. The common way is to simply add a Camera 3D
and adjust the eye separation and convergence distance parameters.
The other way is to connect another camera to the RightStereoCamera input port of the Camera 3D.
When viewing the scene through the original camera or rendering, the connected camera is used for
creating the right-eye content.
Stereo Materials
Using the Stereo Mix material node, it is possible to assign different textures per eye.
NewEye, StereoAlign
NewEye and StereoAlign use and destroy the Disparity channel to do interpolation on the
color channel.
The hidden channels are destroyed in the process because, after the nodes have been applied, the
original Disparity channels would be invalid.
For these nodes to work, there must be either an upstream Disparity node generating the hidden
channels or an OpenEXR Loader bringing these channels in.
DisparityToZ, ZToDisparity
These nodes pass through, modify, or generate new aux channels, but do not destroy any.
TIP: If the colors between shots are different, use Color Corrector or Color Curves to do a
global alignment first before calculating the Disparity map. Feed the image you will change
into the orange input and the reference into the green input. In the Histogram section of the
Color Corrector, select Match, and also select Snapshot Match Time. In the Color Curves’
Reference section, select Match Reference.
In the above example, the workflow on the right takes the left and right eye, generates the disparity,
and then NewEye is used to generate a new eye for the image right away.
The example on the left renders the frames with disparity to intermediate EXR images. These images
are then loaded back into Stereo nodes and used to create the NewEye images.
By using Render nodes to compute the disparity first, the later processing of the creative operations
can be a much faster and interactive experience.
Although not shown in the above diagram, it is usually a good idea to color correct the right eye to be
similar to the left eye before disparity generation, as this helps with the disparity-tracking algorithm.
The color matching does not need to be perfect—for example, it can be accomplished using the
“Match” option in a Color Corrector’s histogram options.
You would expect for non-occluded pixels that Dleft = -Dright, although, due to the disparity
generation algorithm, this is only an approximate equality.
NOTE: Disparity stores both X and Y values because rarely are left/right images perfectly
registered in Y, even when taken through a carefully set up camera rig.
Both Disparity and Optical Flow values are stored as un-normalized pixel shifts. In particular, note that
this breaks from Fusion’s resolution-independent convention. After much consideration, this
convention was chosen so the user wouldn’t have to worry about rescaling the Disparity/Flow values
when cropping an image or working out scale factors when importing/exporting these channels to
other applications. Because the Flow and Disparity channels store things in pixel shifts, this can cause
problems with Proxy and AutoProxy. Fusion follows the convention that, for proxied images, these
channels store unscaled pixel shifts valid for the full-sized image. So if you wish to access the Disparity
values in a script or via a probe, you need to remember to always scale them by
(image.Width/image.OriginalWidth, image.Height/image.OriginalHeight).
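That scaling step can be sketched in plain Python. This is an illustration only, not Fusion's scripting API; the parameter names simply mirror the image.Width and image.OriginalWidth attributes mentioned above.

```python
# Illustrative only: convert a disparity sample, which is stored as
# full-resolution pixel shifts, into shifts valid for a proxied image.
def scale_disparity(dx, dy, width, height, original_width, original_height):
    return (dx * width / original_width, dy * height / original_height)

# On a half-resolution proxy, a stored shift of (12, -4) full-size pixels
# corresponds to (6, -2) proxy pixels.
dx, dy = scale_disparity(12.0, -4.0, 960, 540, 1920, 1080)
```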
TIP: Although you can use the Channel Booleans node to copy any aux channel into RGBA, it
involves a few additional clicks when compared to CopyAux.
One thing to be aware of is that aux channels tend to consume a lot of memory. A float-32 1080p
image containing just RGBA uses about 32 MB of memory, but with all the aux channels enabled it
consumes around 200 MB of memory.
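Those figures are easy to sanity-check: a float-32 channel stores 4 bytes per pixel, so memory use scales linearly with the channel count. The 25-channel figure below is an assumption chosen to match the roughly 200 MB quoted above.

```python
# Back-of-the-envelope memory use for one uncompressed float-32 frame.
def frame_megabytes(width, height, channels, bytes_per_channel=4):
    return width * height * channels * bytes_per_channel / 2**20

rgba_mb = frame_megabytes(1920, 1080, 4)    # RGBA only: about 32 MB
full_mb = frame_megabytes(1920, 1080, 25)   # ~25 float channels: ~200 MB
```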
Semi-Transparent Objects
The Optical Flow and Disparity generation algorithms Fusion uses assume there is only one layer per
pixel when tracking pixels from frame to frame. In particular, transparent objects and motion blur will
cause problems. For example, a shot flying through the clouds with the semi-transparent clouds in the
foreground and a distant landscape background will confuse the Optical Flow/Stereo algorithms, as
they do not recognize overlapping objects with different motions. Usually the optical flow will end up
tracking regions of one object or the other. If the transparent object and the background are near the
same depth and consequently have the same disparity, then it is not a problem.
Motion Blur
Motion blur is also a serious problem for the reason explained above. The Disparity and
Optical Flow algorithms are unsure whether to assign a pixel in the motion blur to the moving object or
the background pixel. Because the algorithms used are global in nature, not only will the vectors on the
motion blur be wrong, but the errors will also affect regions close to the motion blur.
Depth of Field
Depth of field is another problem related to the two above. The problem occurs when
you have a defocused foreground object over a background object that is moving (Optical Flow case)
or that shifts between L/R (Stereo Disparity case). The blurred edges confuse the tracking because
the algorithms cannot determine that the edges are actually two separate objects.
For example, the Disparity channel maps each pixel to its counterpart in the other eye:
(xleft, yleft) + (Dleft.x, Dleft.y) -> (xright, yright)
(xright, yright) + (Dright.x, Dright.y) -> (xleft, yleft)
When using the Vector and BackVector aux channels, remember that all nodes expect these channels
to be filled with the flow between sequential frames.
When working with these channels, it is the user's responsibility to follow this rule (or to knowingly
break it). If the channels contain anything else, nodes like TimeStretcher will not function correctly,
since they still expect the channels to contain flow forward/back by one frame.
Fusion
Page Effects
Chapter 29
3D Nodes
This chapter covers, in great detail, the nodes used for creating 3D composites.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Alembic Mesh 3D [ABC] 639
Bender 3D [3BN] 642
Camera 3D [3CM] 644
Cube 3D [3CB] 653
Custom Vertex 3D [3CV] 655
Displace 3D [3DI] 660
Duplicate 3D [3DP] 662
FBX Exporter 3D [FBX] 667
FBX Mesh 3D [FBX] 670
Fog 3D [3FO] 672
Image Plane 3D [3IM] 675
Locator 3D [3LO] 677
Merge 3D [3MG] 679
Renderer 3D [3RN] 691
Replace Material 3D [3RPL] 700
Replace Normals 3D [3RPN] 702
Replicate 3D [3REP] 704
Ribbon 3D [3RI] 710
Shape 3D [3SH] 712
Soft Clip [3SC] 715
Spherical Camera [3SC] 717
Text 3D [3TXT] 720
Transform 3D [3XF] 729
Triangulate 3D [3TRI] 732
UV Map 3D [3UV] 733
The first method is the preferred method; both Alembic and FBX nodes by themselves import the
entire model as one object. However, the Import menu breaks down the model, lights, camera, and
animation into a string of individual nodes. This makes it easy to edit, modify, and use subsections
of the imported Alembic mesh. Also, transforms in the file are read into Fusion splines and into the
Transform 3D nodes, which get saved with the comp. Later, when reloading the comp, the transforms
are loaded from the comp and not the Alembic file. Fusion handles the meshes differently, always
reloading them from the Alembic file.
Arbitrary user data varies depending on the software creating the Alembic file, and therefore this type
of metadata is mostly ignored.
Animation
This section includes one option for the Resampling rate. When exporting an Alembic animation, it is
saved to disk using frames per second (fps). When importing Alembic data into Fusion, the fps are
detected and entered into the Resample Rate field unless you have changed it previously in the
current comp. Ideally, you should maintain the exported frame rate as the resample rate, so your
samples match up with the original. The Detected Sampling Rates information at the top of the dialog
can give an idea of what to pick if you are unsure. However, using this field, you can change the frame
rate to create effects like slow motion.
Not all objects and properties in a 3D scene have an agreed-upon universal convention in the Alembic
file format. That being the case, Lights, Materials, Curves, Multiple UVs, and Velocities are not currently
supported when you import Alembic files.
Since the FBX file format does support materials and lights, we recommend the use of FBX for lights,
cameras, and materials. Use Alembic for meshes only.
Inputs
The AlembicMesh3D node has two inputs in the Node Editor. Both are optional since the node is
designed to use the imported mesh.
– SceneInput: The orange input can be used to connect an additional 3D scene or model. The
imported Alembic objects combine with the other 3D geometry.
– MaterialInput: The optional green input is used to apply a material to the geometry by
connecting a 2D bitmap image. It applies the connected image to the surface of the geometry
in the scene.
Inspector
Controls Tab
The first tab in the Inspector is the Controls tab. It includes a series of unique controls specific to the
Alembic Mesh 3D node as well as six groupings of controls that are common to most 3D nodes. The
“Common Controls” section at the end of this chapter includes detailed descriptions of the
common controls.
Below are descriptions of the Alembic Mesh 3D specific controls.
Filename
The complete file path of the imported Alembic file is displayed here. This field allows you to change
or update the file linked to this node.
Wireframe
Enabling this option causes the mesh to display only the wireframe for the object in the viewer. When
enabled, there is a second option for wireframe anti-aliasing. You can also render these wireframes
out to a file if the Renderer 3D node has the OpenGL render type selected.
Common Controls
Controls, Materials, Transform, and Settings Tabs
The controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID in the
Controls tab are common in many 3D nodes. The Materials tab, Transforms tab and Settings tab in the
Inspector are also duplicated in other 3D nodes. These common controls are described in detail at the
end of this chapter in “The Common Controls” section.
Bender 3D [3BN]
Bender 3D Introduction
The Bender 3D node is used to bend, taper, twist, or shear 3D geometry based on the geometry’s
bounding box. It works by connecting any 3D scene or object to the orange input on the Bender 3D
node, and then adjusting the controls in the Inspector. Only the geometry in the scene is modified.
Any lights, cameras, or materials are passed through unaffected.
The Bender node does not produce new vertices in the geometry; it only alters existing vertices in the
geometry. So, when applying the Bender 3D node to primitives like the Shape 3D or Text 3D nodes,
increase the Subdivision setting in the primitive’s node to get a higher-quality result.
Inputs
The following inputs appear on the Bender 3D node in the Node Editor.
– SceneInput: The orange scene input is the required input for the Bender 3D node. You use this
input to connect another node that creates or contains a 3D scene or object.
Inspector
Bender 3D controls
Controls Tab
The first tab in the Inspector is the Controls tab. It includes all the controls for the Bender 3D node.
Bender Type
The Bender Type menu is used to select the type of deformation to apply to the geometry. There are
four modes available: Bend, Taper, Twist, and Shear.
Amount
Adjusting the Amount slider changes the strength of the deformation.
Axis
The Axis control determines the axis along which the deformation is applied. It has a different meaning
depending on the type of deformation. For example, when bending, this selects the elbow in
conjunction with the Angle control. In other cases, the deform is applied around the specified axis.
Angle
The Angle thumbwheel control determines what direction about the axis a bend or shear is applied.
It is not visible for taper or twist deformations.
Group Objects
If the input of the Bender 3D node contains multiple 3D objects, either through a Merge 3D or strung
together, the Group Objects checkbox treats all the objects in the input scene as a single object, and
the common center is used to deform the objects, instead of deforming each component object
individually.
Common Controls
Settings
The Settings tab in the Inspector is common to all 3D nodes. This common tab is described in detail at
the end of this chapter in “The Common Controls” section.
Camera 3D [3CM]
Camera Projection
The Camera 3D node can also be used to perform Camera Projection by projecting a 2D image
through the camera into 3D space. Projecting a 2D image can be done as a simple Image Plane
aligned with the camera, or as an actual projection, similar to the behavior of the Projector 3D node,
with the added advantage of being aligned precisely with the camera. The Image Plane, Projection,
and Materials tabs do not appear until you connect a 2D image to the magenta image input on the
Camera 3D node in the Node Editor.
Stereoscopic
The Camera node has built-in stereoscopic features. They offer control over eye separation and
convergence distance. The camera for the right eye can be replaced using a separate camera node
connected to the green left/right stereo camera input. Additionally, the plane of focus control for depth
of field rendering is also available here.
If you add a camera by dragging the camera icon from the toolbar onto the 3D view, it automatically
connects to the Merge 3D you are viewing. Also, the current viewer is set to look through the
new camera.
Alternatively, it is possible to copy the current viewer to a camera (or spotlight or any other object) by
selecting the Copy PoV To option in the viewer’s contextual menu, under the Camera submenu.
Displaying a camera node directly in the viewer shows only an empty scene; there is nothing for the
camera to see. To view the scene through the camera, view the Merge 3D node where the camera is
connected, or any node downstream of that Merge 3D. Then right-click on the viewer and select
Camera > [Camera name] from the contextual menu. Right-clicking on the axis label found in the lower
corner of each 3D viewer also displays the Camera submenu.
The aspect of the viewer may be different from the aspect of the camera, so the camera view may not
match the actual boundaries of the image rendered by the Renderer 3D node. Guides can be enabled
to represent the portion of the view that the camera sees and assist you in framing the shot. Right-click
on the viewer and select an option from the Guides > Frame Aspect submenu. The default option uses
the format enabled in the Composition > Frame Format preferences. To toggle the guides on or off,
select Guides > Show Guides from the viewers’ contextual menu, or use the Command-G (macOS) or
Ctrl-G (Windows) keyboard shortcut when the viewer is active.
Camera 3D controls
Controls Tab
The Camera3D Inspector includes six tabs along the top. The first tab, called the Controls tab, contains
some of the most fundamental camera settings, including the camera’s clipping planes, field of view,
focal length, and stereoscopic properties. Some tabs are not displayed until a required connection is
made to the Camera 3D node.
Projection Type
The Projection Type menu is used to select between Perspective and Orthographic cameras.
Generally, real-world cameras are perspective cameras. An orthographic camera uses parallel
orthographic projection, a technique where the view plane is perpendicular to the viewing direction.
This produces a parallel camera output that is undistorted by perspective.
Orthographic cameras present controls only for the near and far clipping planes, and a control to set
the viewing scale.
NOTE: A smaller range between the near and far clipping planes allows greater accuracy in
all depth calculations. If a scene begins to render strange artifacts on distant objects, try
increasing the distance for the Near Clip plane.
Angle of View
Angle of View defines the area of the scene that can be viewed through the camera. Generally, the
human eye can see more of a scene than a camera, and various lenses record different degrees of the
total image. A large value produces a wider angle of view, and a smaller value produces a narrower, or
more tightly focused, angle of view.
Just as in a real-world camera, the angle of view and focal length controls are directly related. Smaller
focal lengths produce a wider angle of view, so changing one control automatically changes the
other to match.
Focal Length
In the real world, a lens’ Focal Length is the distance from the center of the lens to the film plane. The
shorter the focal length, the closer the focal plane is to the back of the lens. The focal length is
measured in millimeters. The angle of view and focal length controls are directly related. Smaller focal
lengths produce a wider angle of view, so changing one control automatically changes the
other to match.
The relationship between focal length and angle of view is angle = 2 * arctan[aperture / 2 /
focal_length].
Use the vertical aperture size to get the vertical angle of view and the horizontal aperture size to get
the horizontal angle of view.
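The relation above can be checked numerically. Note that the aperture and focal length must be in the same unit; the Inspector uses inches for aperture and millimeters for focal length, so convert one of them first (1 inch = 25.4 mm). The 36 mm / 50 mm values below are just example numbers.

```python
import math

# angle = 2 * arctan(aperture / 2 / focal_length), expressed in degrees
def angle_of_view_deg(aperture_mm, focal_length_mm):
    return math.degrees(2 * math.atan(aperture_mm / 2 / focal_length_mm))

# The inverse: recover focal length from aperture and angle of view.
def focal_from_angle_mm(aperture_mm, angle_deg):
    return aperture_mm / 2 / math.tan(math.radians(angle_deg) / 2)

# A 36 mm horizontal aperture at a 50 mm focal length: roughly 39.6 degrees.
h_aov = angle_of_view_deg(36.0, 50.0)
```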
Stereo
The Stereo section includes options for setting up 3D stereoscopic cameras. 3D stereoscopic
composites work by capturing two slightly different views, displayed separately to the left and right
eyes. The mode menu determines if the current camera is a stereoscopic setup or a mono camera.
When set to the default mono setting, the camera views the scene as a traditional 2D film camera.
Three other options in the mode menu determine the method used for 3D stereoscopic cameras.
Toe In
In a toe-in setup, both cameras rotate inward toward a single focal point. Though the result is
stereoscopic, the vertical parallax introduced by this method can cause discomfort for the audience.
Toe-in stereoscopic works for convergence around the center of the images but exhibits keystoning,
or image separation, at the left and right edges. This setup can be used when the focus point and
the convergence point need to be the same. It is also used in cases where it is the only way to match a
live-action camera rig.
Off Axis
Regarded as the correct way to create stereo pairs, this is the default method in Fusion. Off Axis
introduces no vertical parallax, thus creating stereo images with less eye strain. Sometimes called a
skewed-frustum setup, this is akin to a lens shift in the real world. Instead of rotating the two cameras
inward as in a toe-in setup, Off Axis shifts the lenses inward.
Rig Attached To
This drop-down menu allows you to control which camera is used to transform the stereoscopic setup.
Based on this menu, transform controls appear in the viewer either on the right camera, left camera, or
between the two cameras. The ability to switch the transform controls through rigging can assist in
matching the animation path to a camera crane or other live-action camera motion. The Center option
places the transform controls between the two cameras and moves each evenly as the separation and
convergence are adjusted. Left puts the transform controls on the left camera, and the right camera
moves as the separation and convergence are adjusted. Right puts the transform controls on the right
camera, and the left camera moves as adjustments are made to separation and convergence.
Eye Separation
Eye Separation defines the distance between both stereo cameras. Setting Eye Separation to a value
larger than 0 shows controls for each camera in the viewer when this node is selected. Note that there
is no Convergence Distance control in Parallel mode.
Convergence Distance
This control sets the stereoscopic convergence distance, defined as a point located along the Z-axis
of the camera that determines where both left- and right-eye cameras converge. The Convergence
Distance controls are only available when setting the Mode menu to Toe-In or Off Axis.
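As a rough mental model of how these two controls interact, a common simplification for off-axis rigs (an assumption for illustration, not a description of Fusion's internal math) gives the horizontal parallax of a point at depth Z as e × (Z − C) / Z, where e is the eye separation and C the convergence distance:

```python
# Simplified off-axis parallax model (illustrative assumption, not Fusion's
# internal computation). Points at the convergence distance have zero
# parallax; nearer points go negative (appear in front of the screen).
def parallax(eye_separation, convergence, depth):
    return eye_separation * (depth - convergence) / depth

on_screen = parallax(0.065, 5.0, 5.0)    # at convergence: 0.0
behind    = parallax(0.065, 5.0, 20.0)   # positive: behind the screen
in_front  = parallax(0.065, 5.0, 2.5)    # negative: in front of the screen
```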
Film Back
Film Gate
The size of the film gate represents the dimensions of the aperture. Instead of setting the aperture’s
width and height, you can choose it using the list of preset camera types in the Film Gate menu.
Selecting one of the options automatically sets the aperture width and aperture height to match.
Aperture Width/Height
The Aperture Width and Height sliders control the dimensions of the camera’s aperture or the portion
of the camera that lets light in on a real-world camera. In video and film cameras, the aperture is the
mask opening that defines the area of each frame exposed. The Aperture control uses inches as its
unit of measurement.
– Inside: The image source defined by the film gate is scaled uniformly until one of its dimensions
(X or Y) fits the inside dimensions of the resolution gate mask. Depending on the relative
dimensions of image source and mask background, either the image source’s width or height may
be cropped to fit the dimension of the mask.
– Width: The image source defined by the film gate is scaled uniformly until its width (X) fits the
width of the resolution gate mask. Depending on the relative dimensions of image source and
mask, the image source’s Y-dimension might not fit the mask’s Y-dimension, resulting in either
cropping of the image source in Y or the image source not covering the mask’s height entirely.
– Height: The image source defined by the film gate is scaled uniformly until its height (Y) fits the
height of the resolution gate mask. Depending on the relative dimensions of image source and
mask, the image source’s X-dimension might not fit the mask’s X-dimension, resulting in either
cropping of the image source in X or the image source not covering the mask’s width entirely.
– Outside: The image source defined by the film gate is scaled uniformly until one of its dimensions
(X or Y) fits the outside dimensions of the resolution gate mask. Depending on the relative
dimensions of image source and mask, either the image source’s width or height may be cropped
or not fit the dimension of the mask.
– Stretch: The image source defined by the film gate is stretched in X and Y to accommodate the
full dimensions of the generated resolution gate mask. This might lead to visible distortions of the
image source.
Control Visibility
This section allows you to selectively activate the onscreen controls that are displayed along with
the camera.
– Show View Controls: Displays or hides all camera onscreen controls in the viewers.
– Frustum: Displays the actual viewing cone of the camera.
– View Vector: Displays a white line inside the viewing cone, which can be used to determine the
shift when in Parallel mode.
– Near Clip: The Near clipping plane. This plane can be subdivided for better visibility.
– Far Clip: The Far clipping plane. This plane can be subdivided for better visibility.
– Focal Plane: The plane based on the Plane of Focus slider explained in the Controls tab above.
This plane can be subdivided for better visibility.
– Convergence Distance: The point of convergence when using Stereo mode. This plane can be
subdivided for better visibility.
Import Camera
The Import Camera button displays a dialog to import a camera from another application.
It supports the following file types:
– dotXSI: .xsi
Image Tab
When a 2D image is connected to the magenta image input on the Camera3D node, an Image tab is
created at the top of the inspector. The connected image is always oriented so it fills the camera’s
field of view.
Except for the controls listed below, the options in this tab are identical to those commonly found in
other 3D nodes. For more detail on visibility, lighting, matte, blend mode, normals/tangents, and
Object ID, see “The Common Controls” section at the end of this chapter.
Fill Method
This menu configures how to scale the image plane if the connected image and the camera have different aspect ratios.
– Inside: The image plane is scaled uniformly until one of its dimensions (X or Y) fits the inside
dimensions of the resolution gate mask. Depending on the relative dimensions of image source
and mask background, either the image source’s width or height may be cropped to fit the
dimensions of the mask.
– Width: The image plane is scaled uniformly until its width (X) fits the width of the mask. Depending
on the relative dimensions of image source and the resolution gate mask, the image source’s
Y-dimension might not fit the mask’s Y-dimension, resulting in either cropping of the image source
in Y or the image source not covering the mask’s height entirely.
– Height: The image plane is scaled uniformly until its height (Y) fits the height of the mask.
Depending on the relative dimensions of image source and the resolution gate mask, the image
source’s X-dimension might not fit the mask’s X-dimension, resulting in either cropping of the
image source in X or the image source not covering the mask’s width entirely.
– Outside: The image plane is scaled uniformly until one of its dimensions (X or Y) fits the outside
dimensions of the resolution gate mask. Depending on the relative dimensions of image source
and mask, either the image source’s width or height may be cropped or not fit the respective
dimension of the mask.
Depth
The Depth slider controls the image plane’s distance from the camera.
Materials Tab
The options presented in the Materials tab are identical to those commonly found in other 3D nodes.
For more detail on Diffuse, Specular, Transmittance, and Material ID controls, see the “Common
Controls” section at the end of this chapter.
Projection Tab
When a 2D image is connected to the camera node, a fourth projection tab is displayed at the top of
the Inspector. Using this Projection tab, it is possible to project the image into the scene. A projection
is different from an image plane in that the projection falls onto the geometry in the scene exactly as if
there were a physical projector present in the scene. The image is projected as light, which means the
Renderer 3D node must be set to enable lighting for the projection to be visible.
Projection Mode
– Light: Defines the projection as a spotlight.
– Ambient Light: Defines the projection as an ambient light.
– Texture: Allows a projection that can be relit using other lights. Using this setting requires a
Catcher node connected to the applicable inputs of the specific material.
Camera Projection: When importing a camera from a 3D application that is also used as a
projector, make sure that the Fit Resolution Gate options on the Controls tab and the
Projection tab are in sync. Only the first is automatically set to match what the 3D application
was using; the latter may have to be adjusted manually.
Image Plane: The camera’s image plane isn’t just a virtual guide for you in the viewers.
It’s actual geometry that you can also project onto. To use a different image on the image
plane, you need to insert a Replace Material node after your Camera node.
Parallel Stereo: There are three ways you can achieve real Parallel Stereo mode:
– Connect an additional external (right) camera to the green Right Stereo Camera
input of your camera.
– Create separate left and right cameras.
– When using Toe-In or Off Axis, set the Convergence Distance slider to a very large
value of 999999999.
Rendering Overscan: If you want to render an image with overscan, you also must modify
your scene’s Camera3D. Since overscan settings aren’t exported along with camera data from
3D applications, this is also necessary for cameras you’ve imported via .fbx or .ma files. The
solution is to increase the film back’s width and height by the factor necessary to account for
extra pixels on each side.
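That scaling can be sketched as follows. The film back dimensions and the 96-pixel overscan value are example numbers only, not Fusion defaults.

```python
# Scale the film back so a render with `overscan_px` extra pixels on every
# side keeps the same field of view per pixel (illustrative, not Fusion API).
def overscan_film_back(aperture_w, aperture_h, render_w, render_h, overscan_px):
    return (aperture_w * (render_w + 2 * overscan_px) / render_w,
            aperture_h * (render_h + 2 * overscan_px) / render_h)

# 1920x1080 render with 96 px of overscan per side: width factor is 1.1.
w, h = overscan_film_back(0.825, 0.446, 1920, 1080, 96)
```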
Cube 3D [3CB]
Inputs
The following are optional inputs that appear on the Cube3D node in the Node Editor:
Cube 3D node connected to a 3D scene exported from the Camera Tracker node
Inspector
Cube 3D controls
Lock Width/Height/Depth
This checkbox locks the Width, Height, and Depth dimensions of the cube together. When selected,
only a Size control is displayed; otherwise, separate Width, Height, and Depth sliders are shown.
Size or Width/Height/Depth
If the Lock checkbox is selected, then only the Size slider is shown; otherwise, separate sliders are
displayed for Width, Height, and Depth. The Size and Width sliders are the same control renamed, so
any animation applied to Size is also applied to Width when the controls are unlocked.
Subdivision Level
Use the Subdivision Level slider to set the number of subdivisions used when creating the
image plane.
The 3D viewers and renderer use vertex lighting, meaning all lighting is calculated at the vertices on
the 3D geometry and then interpolated from there. Therefore, the more subdivisions in the mesh, the
more vertices are available to represent the lighting. For example, make a sphere and set the
subdivisions to be small so it looks chunky. With lighting on, the object looks like a sphere but has
some amount of fracturing resulting from the large distance between vertices. When the subdivisions
are high, the vertices are closer and the lighting becomes more even. So, increasing subdivisions can
be useful when working interactively with lights.
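As a rough numeric sketch of why this helps (illustrative only, not Fusion code): on a unit-sized plane, the number of lighting samples grows with the square of the subdivision count, while the spacing between samples shrinks linearly:

```python
# Sketch: how subdivisions affect vertex-lighting sample density on a
# unit-sized plane. Lighting is computed only at the vertices, so the
# spacing between lighting samples is what causes visible faceting.

def vertex_grid(subdivisions):
    verts_per_side = subdivisions + 1
    sample_spacing = 1.0 / subdivisions
    return verts_per_side ** 2, sample_spacing

for s in (2, 10, 40):
    count, spacing = vertex_grid(s)
    print(s, count, spacing)  # vertex count rises as the square
```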
Cube Mapping
Enabling the Cube Mapping checkbox causes the cube to wrap its first texture across all six faces using
a standard cubic mapping technique. This approach expects a texture laid out in the shape of a cross.
Wireframe
Enabling this checkbox causes the mesh to render only the wireframe for the object when rendering
with the OpenGL renderer in the Renderer 3D node.
Common Controls
Controls, Materials, Transform, and Settings Tabs
The remaining controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID are
common to many 3D nodes. The same is true of the Materials, Transform, and Settings tabs. Their
descriptions can be found in “The Common Controls” section at the end of this chapter.
NOTE: Modifying the X, Y, and Z positions of a 3D object does not modify the normals/
tangents. You can use a ReplaceNormals node afterward to recompute the normals/tangents.
TIP: Not all geometry has every attribute. For example, most Fusion geometry does not have
vertex colors, with the exception of particles and some imported FBX/Alembic meshes. No
geometry currently has environment coordinates, and only particles have velocities. If an
attribute is not present on the input geometry, it is assumed to have a default value.
Inputs
The Custom Vertex 3D node includes four inputs. The orange scene input is the only one of the four
that is required.
– SceneInput: The orange scene input takes 3D geometry or a 3D scene from a 3D node output.
This is the 3D scene or geometry that is manipulated by the calculations in the Custom Vertex
3D node.
– ImageInput1, ImageInput2, ImageInput3: The three image inputs using green, magenta, and
teal colors are optional inputs that can be used for compositing.
NOTE: Missing attributes on the input geometry are created if the expression for an attribute
is nontrivial. Their values default as described in the tip above. For example, if the
input geometry does not have normals, then the value of (nx, ny, nz) is always (0,0,1). To
change this, you could use a ReplaceNormals node beforehand to generate them.
Vertex Tab
Using the fields in the Vertex tab, vertex calculations can be performed on the Position, Normals,
Vertex Color, Texture Coordinates, Environment Coordinates, UV Tangents, and Velocity attributes.
Each vertex is defined by an XYZ Position in world space, available as px, py, pz. Normals,
which define as a vector the direction the vertex is pointing, are available as nx, ny, nz.
Vertex color is the Red, Green, Blue, and Alpha color of the point, available as vcr, vcg, vcb, vca.
Numbers Tab
Numbers 1-8
Numbers are variables with a dial control that can be animated or connected to modifiers exactly as
any other control might. The numbers can be used in equations on vertices at the current time: n1, n2,
n3, n4,… or at any time: n1_at(float t), n2_at(float t), n3_at(float t), n4_at(float t), where t is the time you
want. The values of these controls are available to expressions in the Setup and Intermediate tabs.
They can be renamed and hidden from the viewer using the Config tab.
Points 1-8
The point controls represent points in the Custom Vertex 3D tool, not the vertices. These eight point
controls include 3D X,Y,Z position controls for positioning points at the current time: (p1x, p1y, p1z, p2x,
p2y, p2z) or at any time: p1x_at(float t), p1y_at(float t), p1z_at(float t), p2x_at(float t), p2y_at(float t), p2z_
at(float t), where t is the time you want. For example, you can use a point to define a position in 3D
space to rotate the vertices around. They can be renamed and hidden from the viewer using the
Config tab. They are normal positional controls and can be animated or connected to modifiers as any
other node might.
LUT Tab
LUTs 1-4
The Custom Vertex 3D node provides four LUT splines. A LUT is a lookup table that will return a value
from the height of the LUT spline. For example, getlut1(float x), getlut2(float x),...
where x = 0 … 1 accesses the LUT values.
The values of these controls are available to expressions in the Setup and Intermediate tabs using the
getlut# function. For example, setting the R, G, B, and A expressions to getlut1(r1), getlut2(g1),
getlut3(b1), and getlut4(a1) respectively, would cause the Custom Vertex 3D node to mimic the Color
Curves node.
These controls can be renamed using the options in the Config tab to make their meanings more
apparent, but expressions still reference them by number, as in getlut1 through getlut4.
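The behavior of a getlut#() call can be approximated with a simple curve lookup. This sketch interpolates linearly between control points, whereas Fusion’s LUT splines are true splines; the names here are illustrative:

```python
# Sketch of a LUT lookup in the spirit of getlut1(x): sample the
# height of a curve at x in 0..1. A linear-interpolation
# simplification of Fusion's spline-based LUTs.

def make_lut(points):
    pts = sorted(points)
    def getlut(x):
        x = min(max(x, pts[0][0]), pts[-1][0])  # clamp into the range
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
    return getlut

# A brightening curve: identity endpoints with a raised midpoint.
getlut1 = make_lut([(0.0, 0.0), (0.5, 0.7), (1.0, 1.0)])
print(getlut1(0.25))  # 0.35
```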
Setups 1-8
Up to eight separate expressions can be calculated in the Setup tab of the Custom Vertex 3D node.
The Setup expressions are evaluated once per frame, before any other calculations are performed.
The results are then made available to the other expressions in the node as variables s1
through s8.
Think of them as global setup scripts that can be referenced by the intermediate and channel scripts
for each vertex.
For example, Setup scripts can be used to transform vertices from model space to world space.
NOTE: Because these expressions are evaluated once per frame only and not for each pixel,
it makes no sense to use per-pixel variables like X and Y or channel variables like r1, g1, b1,
and so on. Allowable values include constants, variables like n1…n8, time, W and H, and so on,
and functions like sin() or getr1d().
Intermediate Tab
Intermediates 1-8
An additional eight expressions can be calculated in the Intermediate tab. The Intermediate
expressions are evaluated once per vertex, after the Setup expressions are evaluated. Results are
available as variables i1, i2, i3, i4, i5, i6, i7, i8, which can be referenced by channel scripts. Think of
them as “per vertex setup” scripts.
For example, you can run the script to produce the new vertex (i.e., new position, normal, tangent,
UVs, etc.) or transform from world space back to model space.
Random Seed
Use this to set the seed for the rand() and rands() functions. Click the Reseed button to set the seed to
a random value. This control may be needed if multiple Custom Vertex 3D nodes are required with
different random results for each.
Number Controls
There are eight sets of Number controls, corresponding to the eight sliders in the Numbers tab.
Disable the Show Number checkbox to hide the corresponding Number slider, or edit the Name for
Number text field to change its name.
Point Controls
There are eight sets of Point controls, corresponding to the eight controls in the Points tab. Disable the
Show Point checkbox to hide the corresponding Point control and its crosshair in the viewer. Similarly,
edit the Name for Point text field to change the control’s name.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in “The
Common Controls” section at the end of this chapter.
Displace 3D [3DI]
TIP: Passing a particle system through a Displace 3D node disables the Always Face Camera
option set in the pEmitter. Particles are not treated as point-like objects; each of the four
particle vertices is individually displaced, which may or may not be the preferred outcome.
Inputs
The following two inputs appear on the Displace 3D node in the Node Editor:
– SceneInput: The orange scene input is the required input for the Displace 3D node. You use
this input to connect another node that creates or contains a 3D scene or object.
– Input: This green input is used to connect a 2D image that is used to displace the object
connected to the Scene input. If no image is provided, this node effectively passes the scene
straight through to its output. So, although not technically a required input, there isn’t much use
for adding this node unless you connect this input correctly.
Inspector
Displace 3D controls
Channel
Determines which channel of the connected input image is used to displace the geometry.
Camera Displacement
– Point to Camera: When the Point to Camera checkbox is enabled, each vertex is displaced toward
the camera instead of along its normal. One possible use of this option is for displacing a camera’s
image plane. The displaced camera image plane would appear unchanged when viewed through
the camera but is deformed in 3D space, allowing one to comp-in other 3D layers that correctly
interact in Z.
– Camera: This menu is used to select which camera in the scene is used to determine the camera
displacement when the Point to Camera option is selected.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in “The
Common Controls” section at the end of this chapter.
Duplicate 3D [3DP]
Inputs
The Duplicate 3D node has a single input by default where you connect a 3D scene. An optional Mesh
input appears based on the settings of the node.
– SceneInput: The orange Scene Input is a required input. The scene or object you connect to
this input is duplicated based on the settings in the Control tab of the Inspector.
– MeshInput: An optional green mesh input appears when the Region menu in the Region tab
is set to Mesh. The mesh can be any 3D model, either generated in Fusion or imported.
A Cube 3D is duplicated
Inspector
Duplicate 3D controls
Controls Tab
The Controls tab includes all the parameters you can use to create, offset, and scale copies of the
object connected to the scene input on the node.
Time Offset
Use the Time Offset slider to offset any animations that are applied to the source geometry by a set
amount per copy. For example, set the value to -1.0 and use a cube set to rotate on the Y-axis as the
source. The first copy shows the animation from a frame earlier; the second copy shows animation
from a frame before that, etc. This can be used with great effect on textured planes—for example,
where successive frames of a clip can be shown.
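The per-copy frame arithmetic can be sketched as follows (illustrative, not Fusion’s implementation):

```python
# Sketch: which animation frame each duplicate samples for a given
# Time Offset. Copy 0 is the original; copy i is offset by i * offset.

def sample_frame(current_frame, copy_index, time_offset):
    return current_frame + copy_index * time_offset

# At frame 10 with Time Offset -1.0, successive copies step backward
# through the animation one frame at a time:
print([sample_frame(10, i, -1.0) for i in range(4)])  # [10.0, 9.0, 8.0, 7.0]
```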
Transform Method
– Linear: When set to Linear, transforms are multiplied by the number of the copy, and the total
scale, rotation, and translation are applied in turn, independent of the other copies.
– Accumulated: When set to Accumulated, each object copy starts at the position of the previous
object and is transformed from there. The result is transformed again for the next copy.
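The difference between the two methods can be sketched in 2D with a per-copy step of “rotate 90°, then translate (1, 0)”; everything here is illustrative, not Fusion’s implementation:

```python
# Sketch contrasting the two Transform Methods for a per-copy step.
import math

def step(point, copies, angle_deg, tx, accumulated):
    pts = []
    x, y = point
    for i in range(copies):
        if accumulated:
            # Accumulated: each copy starts from the previous result.
            a = math.radians(angle_deg)
            c, s = math.cos(a), math.sin(a)
            x, y = x*c - y*s + tx, x*s + y*c
            pts.append((round(x, 3), round(y, 3)))
        else:
            # Linear: the per-copy step is multiplied by the copy
            # number and applied to the original point.
            a = math.radians(angle_deg * (i + 1))
            c, s = math.cos(a), math.sin(a)
            px, py = point
            pts.append((round(px*c - py*s + tx*(i + 1), 3),
                        round(px*s + py*c, 3)))
    return pts

print(step((1, 0), 2, 90, 1, accumulated=False))  # [(1.0, 1.0), (1.0, 0.0)]
print(step((1, 0), 2, 90, 1, accumulated=True))   # [(1.0, 1.0), (0.0, 1.0)]
```

The second copy lands in a different place under each method, which is exactly the behavior the two menu options describe.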
Transform Order
With this menu, the order in which the transforms are calculated can be set. It defaults to Scale-
Rotation-Transform (SRT).
Using different orders results in different positions of your final objects.
Translation
The X, Y, and Z Offset sliders set the offset position applied to each copy. An X offset of 1 would offset
each copy 1 unit along the X-axis from the last copy.
Rotation
The buttons along the top of this group of rotation controls set the order in which rotations are applied
to the geometry. Setting the rotation order to XYZ would apply the rotation on the X-axis first, followed
by the Y-axis rotation, then the Z-axis rotation.
The three Rotation sliders set the amount of rotation applied to each copy.
Scale
– Lock: When the Lock XYZ checkbox is selected, any adjustment to the duplicate scale is applied
to all three axes simultaneously. If this checkbox is disabled, the Scale slider is replaced with
individual sliders for the X, Y, and Z scales.
– Scale: The Scale controls tell Duplicate how much scaling to apply to each copy.
Jitter Tab
The options in the Jitter tab allow you to randomize the position, rotation, and size of all the copies
created in the Controls tab.
Random Seed
The Random Seed slider is used to generate a random starting point for the amount of jitter applied to
the duplicated objects. Two Duplicate nodes with identical settings but different random seeds
produce two completely different results.
Randomize
Click the Randomize button to auto generate a random seed value.
Jitter Probability
Adjusting this slider determines the percentage of copies that are affected by the jitter. A value of 1.0
means 100% of the copies are affected, while a value of 0.5 means 50% are affected.
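A minimal sketch of how a seed plus a probability gate might behave (illustrative, not the actual Fusion implementation):

```python
# Sketch: seeded jitter with a probability gate, in the spirit of the
# Random Seed and Jitter Probability controls.
import random

def jitter_offsets(copies, seed, probability, amount):
    rng = random.Random(seed)           # same seed -> same jitter
    offsets = []
    for _ in range(copies):
        if rng.random() < probability:  # this copy is affected
            offsets.append(rng.uniform(-amount, amount))
        else:
            offsets.append(0.0)
    return offsets

a = jitter_offsets(5, seed=42, probability=0.5, amount=1.0)
b = jitter_offsets(5, seed=42, probability=0.5, amount=1.0)
print(a == b)  # True: identical seeds reproduce identical jitter
```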
Time Offset
Use the Time Offset slider to offset any animations that are applied to the source geometry by a set
amount per copy. For example, set the value to –1.0 and use a cube set to rotate on the Y-axis as the
source. The first copy shows the animation from a frame earlier; the second copy shows animation
from a frame before that, etc. This can be used with great effect on textured planes—for example,
where successive frames of a clip can be shown.
Rotation Jitter
Use these three controls to adjust the amount of variation in the X, Y, and Z rotation of the
duplicated objects.
Pivot Jitter
Use these three controls to adjust the amount of variation in the rotational pivot center of the
duplicated objects. This affects only the additional jitter rotation, not the rotation produced by the
Rotation settings in the Controls tab.
Scale Jitter
Use this control to adjust the amount of variation in the scale of the duplicated objects. Disable the
Lock XYZ checkbox to adjust the scale variation independently on all three axes.
Region Tab
The options in the Region tab allow you to define an area in the viewer where the copies can appear
or are prevented from appearing. Like most parameters in Fusion, this area can be animated to cause
the copied object to pop on and off the screen based on the region’s shape and setting.
Region
The Region section includes two settings for controlling the shape of the region and the effect the
region has on the duplicated objects.
– Region Mode: There are three options in the Region Mode menu. The default, labeled “Ignore
region,” bypasses the node entirely and causes no change to the copies of objects from how they
are set in the Controls and Jitter tabs. The menu option labeled “When inside region” causes the
copied objects to appear only when their position falls inside the region defined in this tab. The
last menu option, “When not Inside region” causes the copied objects to appear only when their
position falls outside the region defined in this tab.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inspector
Controls Tab
The Controls tab includes all the parameters you use to decide how the FBX file is created and what
elements in the scene get exported.
Filename
This Filename field is used to display the location and file that is output by the node. You can click the
Browse button to open a file browser dialog and change the location where the file is saved.
Version
The Version menu is used to select the available versions for the chosen format. The menu’s contents
change dynamically to reflect the available versions for that format. If the selected format provides
only a single option, this menu is hidden.
Choosing Default for the FBX formats uses FBX2011.
Frame Rate
This menu sets the frame rate stored in the FBX scene.
Scale Units By
This slider changes the working units in the exported FBX file. Changing this can simplify workflows
where the destination 3D software uses a different unit scale.
Geometry/Lights/Cameras
These three checkboxes determine whether the node attempts to export the named scene element.
For example, deselecting Geometry and Lights but leaving Cameras selected would output only the
cameras currently in the scene.
Render Range
Enabling this checkbox saves the Render Range information in the export file, so other applications
know the time range of the FBX scene.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in “The
Common Controls” section at the end of this chapter.
Inputs
– SceneInput: The orange scene input is an optional connection if you wish to combine other 3D
geometry nodes with the imported FBX file.
– Material Input: The green input is the material input that accepts either a 2D image or a 3D
material. If a 2D image is provided, it is used as a diffuse texture map for the basic material tab
in the node. If a 3D material is connected, then the basic material tab is disabled.
Controls Tab
Most of the Controls tab is taken up by common controls. The FBX-specific controls included on this
tab are primarily information and not adjustments.
Size
The Size slider controls the size of the FBX geometry that is imported. FBX meshes have a tendency to
be much larger than Fusion’s default unit scale, so this control is useful for scaling the imported
geometry to match the Fusion environment.
FBX File
This field displays the filename and file path of the currently loaded FBX mesh. Click the Browse
button to open a file browser that can be used to locate a new FBX file. Despite the node’s name, this
node is also able to load a variety of other formats.
Object Name
This input shows the name of the mesh from the FBX file that is being imported. If this field is blank,
then the contents of the FBX geometry are imported as a single mesh. You cannot edit this field; it is
set by Fusion when using the File > Import > FBX Scene menu.
Take Name
FBX files can contain multiple instances of an animation, called Takes. This field shows the name of the
animation take to use from the FBX file. If this field is blank, then no animation is imported. You cannot
edit this field; it is set by Fusion when using the File > Import > FBX Scene menu.
Wireframe
Enabling this checkbox causes the mesh to render only the wireframe for the object. Only the OpenGL
renderer in the Renderer 3D node supports wireframe rendering.
Common Controls
Controls, Materials, Transform, and Settings Tabs
The remaining controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID
are common to many 3D nodes. The same is true of the Materials, Transform, and Settings tabs.
Their descriptions can be found in “The Common Controls” section at the end of this chapter.
Fog 3D [3FO]
Inputs
The Fog 3D node has two inputs in the Node Editor, only one of which is required for the Fog 3D to
be applied to a 3D scene.
Inspector
Controls Tab
The Controls tab includes all the parameters you use to decide how the Fog looks and projects onto
the geometry in the scene.
Color
This control can be used to set the color of the fog. The color is also multiplied by the density texture
image, if one is connected to the green input on the node.
Radial
By default, the fog is created based on the perpendicular distance to a plane (parallel with the near
plane) passing through the eye point. When the Radial option is checked, the radial distance to the eye
point is used instead of the perpendicular distance. The problem with perpendicular distance fog is
that when you move the camera about, as objects on the left or right side of the frustum move into the
center, they become less fogged although they remain the same distance from the eye. Radial fog
fixes this. Radial fog is not always desirable, however. For example, if you are fogging an object close
to the camera, like an image plane, the center of the image plane could be unfogged while the edges
could be fully fogged.
Type
This control is used to determine the type of falloff applied to the fog.
– Linear: Defines a linear falloff for the fog.
– Exp: Creates an exponential nonlinear falloff.
– Exp2: Creates a stronger exponential falloff.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inputs
Of the two inputs on this node, the material input is the primary connection you use to add an image to
the planar geometry created in this node.
– SceneInput: This orange input expects a 3D scene. As this node creates flat, planar geometry,
this input is not required.
– MaterialInput: The green-colored material input accepts either a 2D image or a 3D material. It
provides the texture and aspect ratio for the rectangle based on the connected source such as
a Loader node in Fusion Studio or a MediaIn node in DaVinci Resolve. The 2D image is used as
a diffuse texture map for the basic material tab in the Inspector. If a 3D material is connected,
then the basic material tab is disabled.
Controls Tab
Most of the Controls tab is taken up by common controls. The Image Plane specific controls at the top
of the Inspector allow minor adjustments.
Lock Width/Height
When checked, the subdivision of the plane is applied evenly in X and Y. When unchecked, there are
two sliders for individual control of the subdivisions in X and Y. This defaults to on.
Subdivision Level
Use the Subdivision Level slider to set the number of subdivisions used when creating the image
plane. If the OpenGL viewer and renderer are set to Vertex lighting, the more subdivisions in the
mesh, the more vertices are available to represent the lighting. So, high subdivisions can be useful
when working interactively with lights.
Wireframe
Enabling this checkbox causes the mesh to render only the wireframe for the object when using the
OpenGL renderer.
Common Controls
Controls, Materials, Transform, and Settings Tabs
The remaining controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID
are common to many 3D nodes. The same is true of the Materials, Transform, and Settings tabs.
Their descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
Two inputs accept 3D scenes as sources. The orange scene input is required, while the green Target
input is optional.
– SceneInput: The required orange scene input accepts the output of a 3D scene. This scene
should contain the object or point in 3D space that you want to convert to 2D coordinates.
– Target: The optional green target input accepts the output of a 3D scene. When provided, the
transform center of the scene is used to set the position of the Locator. The transformation
controls for the Locator become offsets from this position.
Locator 3D controls
Controls Tab
Most of the controls for the Locator 3D are cosmetic, dealing with how the locator appears and whether
it is rendered in the final output. However, the Camera Settings are critical to getting the results you’re
looking for.
Size
The Size slider is used to set the size of the Locator’s onscreen crosshair.
Color
A basic Color control is used to set the color of the Locator’s onscreen crosshair.
Matte
Enabling the Is Matte option applies a special texture to this object, causing this object to not only
become invisible to the camera, but also making everything that appears directly behind it
invisible as well. This option overrides all textures. For more information, see Chapter 86,
“3D Compositing Basics” in the DaVinci Resolve Reference Manual or Chapter 25 in the Fusion
Reference Manual.
– Is Matte: When activated, objects whose pixels fall behind the matte object’s
pixels in Z do not get rendered.
– Opaque Alpha: Sets the Alpha value of the matte object to 1. This checkbox is visible only when
the Is Matte option is enabled.
– Infinite Z: Sets the value in the Z-channel to infinity. This checkbox is visible only when the Is
Matte option is enabled.
Make Renderable
Defines whether the Locator is rendered as a visible object by the OpenGL renderer. The software
renderer is not currently capable of rendering lines and hence ignores this option.
Unseen by Camera
This checkbox control appears when the Make Renderable option is selected. If the Unseen by
Camera checkbox is selected, the Locator is visible in the viewers but not rendered into the output
image by the Renderer 3D node.
Camera
This drop-down control is used to select the Camera in the scene that defines the screen space used
for 3D to 2D coordinate transformation.
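The 3D-to-2D conversion a Locator performs can be sketched as a simple pinhole projection. Fusion uses the selected camera’s full film back and lens settings, so the function and units here are illustrative assumptions:

```python
# Sketch of projecting a camera-space 3D point to normalized screen
# coordinates: (0, 0) at the center, +/-0.5 at the aperture edges.
# The camera looks down -Z, so visible points have Z < 0.

def project(point, focal_length, aperture):
    x, y, z = point
    if z >= 0:
        raise ValueError("point is behind the camera")
    sx = focal_length * x / (-z * aperture)
    sy = focal_length * y / (-z * aperture)
    return sx, sy

# A point 10 units in front of a camera with equal focal length and
# aperture (a 90-degree square frustum):
print(project((-2.0, 1.0, -10.0), 25.0, 25.0))  # (-0.2, 0.1)
```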
Common Controls
Transform and Settings tabs
The remaining Transform and Settings tabs are common to many 3D nodes. Their descriptions can be
found in “The Common Controls” section at the end of this chapter.
Merge 3D [3MG]
Merge 3D Introduction
The Merge 3D node is the primary node in Fusion that you use to combine separate 3D elements into
the same 3D environment.
For example, in a scene created with an image plane, a camera, and a light, the camera would not be
able to see the image plane and the light would not affect the image plane until all three objects are
introduced into the same environment using the Merge 3D node.
The Merge provides the standard transformation controls found on most nodes in Fusion’s 3D suite.
Unlike those nodes, changes made to the translation, rotation, or scale of the Merge affect all the
objects connected to the Merge. This behavior forms the basis for all parenting in
Fusion’s 3D environment.
Merge 3D with a connected Image Plane, FBX Mesh object, SpotLight, and camera
Inspector
Merge 3D controls
Controls Tab
The Controls tab is used only to pass through any lights connected to the Merge 3D node.
Common Controls
Transform and Settings Tabs
The remaining controls for the Transform and Settings tabs are common to most 3D nodes. Their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
– SceneInput: The orange Scene input accepts the output of a Merge 3D node or any node
creating a 3D scene.
Override 3D controls
Controls Tab
The function of the controls found in the Controls tab is straightforward. First, you select the option to
override using the Do [Option] checkbox. That reveals a control that can be used to set the value of
the option itself. The individual options are not documented here; a full description of each can be
found in any geometry creation node in this chapter, such as the Image Plane, Cube, or Shape nodes.
Do [Option]
Enables the override for this option.
[Option]
If the Do [Option] checkbox is enabled, then the control for the property itself becomes visible. The
control values of the properties for all upstream objects are overridden by the new value.
Common Controls
Settings Tabs
The Settings tab includes controls common to most 3D nodes. Their descriptions can be found in “The
Common Controls” section at the end of this chapter.
NOTE: A null object is an invisible 3D object that has all the same transform properties of a
visible 3D object.
Inputs
The Point Cloud has only a single input for a 3D scene.
– SceneInput: This orange input accepts a 3D scene.
Controls Tab
The Controls tab is where you can import the point cloud from a file and control its appearance in
the viewer.
Style
The Style menu allows you to display the point cloud as crosshairs or points in the viewer.
Lock X/Y/Z
Deselect this checkbox to provide individual control over the size of the X, Y, and Z arms of the points
in the cloud.
Size X/Y/Z
These sliders can be used to increase the size of the onscreen crosshairs used to represent
each point.
Density
This slider defines the probability of displaying a specific point. If the value is 1, then all points are
displayed. A value of 0.2 shows only every fifth point.
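The probability behavior can be sketched as follows (illustrative, not the actual implementation):

```python
# Sketch: Density as a per-point display probability. Each point is
# independently shown with probability `density`.
import random

def visible_points(points, density, seed=0):
    rng = random.Random(seed)  # fixed seed for a repeatable result
    return [p for p in points if rng.random() < density]

pts = list(range(1000))
shown = visible_points(pts, density=0.2)
print(0.1 < len(shown) / len(pts) < 0.3)  # True: roughly one in five
```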
Color
Use the standard Color control to set the color of onscreen crosshair controls.
Make Renderable
Determines whether the point cloud is visible in the OpenGL viewer and in final renderings made by
the OpenGL renderer. The software renderer does not currently support rendering of visible
crosshairs for this node.
Unseen by Camera
This checkbox control appears when the Make Renderable option is selected. If the Unseen by
Cameras checkbox is selected, the point cloud is visible in the viewers but not rendered into the
output image by the Renderer 3D node.
Common Controls
Transform and Settings Tabs
The remaining Transform and Settings tabs are common to many 3D nodes. Their descriptions can be
found in “The Common Controls” section at the end of this chapter.
Frequently, one or more of the points in an imported point cloud is manually assigned to track the
position of a specific feature. These points usually have names that distinguish them from the rest of
the points in the cloud. To see the current name for a point, hover the mouse pointer directly over a
point, and after a moment a small tooltip appears with the name of the point.
When the Point Cloud 3D node is selected, a submenu is added to the viewer’s contextual menu with
several options that make it simple to locate, rename, and separate these points from the rest of the
point cloud.
To project re-lightable textures or textures for non-diffuse color channels (like Specular Intensity or
Bump), use the Texture projection mode instead:
– Projections in Texture mode only strike objects that use the output of the Catcher node for all or
part of the material applied to that object.
– Texture mode projections clip the geometry according to the Alpha channel of the
projected image.
See the section for the Catcher node for additional details.
Inputs
The Projector 3D has two inputs: one for the scene you are projecting on to and another for the
projected image.
– SceneInput: The orange scene input accepts a 3D scene. If a scene is connected to this input,
then transformations applied to the spotlight also affect the rest of the scene.
– ProjectiveImage: The white input expects a 2D image to be used for the projection. This
connection is required.
Inspector
Projector 3D controls
Color
The input image is multiplied by this color before being projected into the scene.
Intensity
Use this slider to set the Intensity of the projection when the Light and Ambient Light projection modes
are used. In Texture mode, this option scales the Color values of the texture after multiplication by
the color.
Decay Type
A projector defaults to No Falloff, meaning that its light has equal intensity on geometry regardless of
the distance from the projector to the geometry. To cause the intensity to fall off with distance, set the
Decay type to either the Linear or Quadratic mode.
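The three falloff behaviors can be sketched as simple distance formulas (illustrative; the exact curves Fusion uses may differ):

```python
# Sketch of the Decay types: how projected intensity varies with
# distance d from the projector. "None" stands in for No Falloff.

def decay(intensity, d, decay_type):
    if decay_type == "None":        # No Falloff: constant intensity
        return intensity
    if decay_type == "Linear":      # falls off as 1/d
        return intensity / d
    if decay_type == "Quadratic":   # falls off as 1/d^2
        return intensity / (d * d)
    raise ValueError(decay_type)

for mode in ("None", "Linear", "Quadratic"):
    print(mode, decay(1.0, 4.0, mode))
# None 1.0 / Linear 0.25 / Quadratic 0.0625
```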
Angle
The Cone Angle of the node refers to the width of the cone where the projector emits its full intensity.
The larger the angle, the wider the cone angle, up to a limit of 90 degrees.
Fit Method
The Fit Method determines how the projection is fitted within the projection cone.
The first thing to know is that although this documentation may call it a “cone,” the Projector 3D and
Camera 3D nodes do not project an actual cone; it’s more of a pyramid of light with its apex at the
camera/projector. The Projector 3D node always projects a square pyramid of light—i.e., its X and Y
angles of view are the same. The pyramid of light projected by the Camera 3D node can be non-
square depending on what the Film Back is set to in the camera. The aspect of the image connected
into the Projector 3D/Camera 3D does not affect the X/Y angles of the pyramid, but rather the image is
scaled to fit into the pyramid based upon the fit options.
When both the aspect of the pyramid (AovY/AovX) and the aspect of the image (height * pixelAspectY)/
(width * pixelAspectX) are the same, there is no need for the fit options, and in this case the fit options
all do the same thing. However, when the aspect of the image and the pyramid (as determined by the
Film Back settings in Camera 3D) are different, the fit options become important.
For example, Fit by Width fits the width of the image across the width of the Camera 3D pyramid. In
this case, if the image has a greater aspect ratio than the aspect of the pyramid, some of the projection
extends vertically outside of the pyramid.
The options include:
– Inside: The image is uniformly scaled so that its largest dimension fits inside the cone. Another
way to think about this is that it scales the image as big as possible subject to the restriction that
the image is fully contained within the pyramid of the light. This means, for example, that nothing
outside the pyramid of light ever receives any projected light.
– Width: The image is uniformly scaled so that its width fits inside the cone. Note that the image
could still extend outside the cone in its height direction.
– Height: The image is uniformly scaled so that its height fits inside the cone. Note that the image
could still extend outside the cone in its width direction.
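As a rough illustration of how these fit modes scale an image relative to the projection pyramid, here is a sketch; the function name and the normalized-scale model are assumptions for illustration, not Fusion's internals:

```python
def fit_image(image_aspect, pyramid_aspect, method):
    """Return (sx, sy): the fraction of the pyramid's width/height the image spans.
    image_aspect = (height * pixelAspectY) / (width * pixelAspectX);
    pyramid_aspect = AovY / AovX. Illustrative model only."""
    if method == "Width":
        # Width spans the full pyramid; height may overflow vertically.
        return 1.0, image_aspect / pyramid_aspect
    if method == "Height":
        # Height spans the full pyramid; width may overflow horizontally.
        return pyramid_aspect / image_aspect, 1.0
    if method == "Inside":
        # Largest uniform scale with the image fully contained in the pyramid.
        sx, sy = 1.0, image_aspect / pyramid_aspect
        if sy > 1.0:              # width-fit would overflow vertically: fit height instead
            sx, sy = 1.0 / sy, 1.0
        return sx, sy
    raise ValueError(method)

# A 16:9 image (aspect 9/16) in the square pyramid of a Projector 3D:
print(fit_image(9 / 16, 1.0, "Inside"))   # (1.0, 0.5625): same as a width fit here
print(fit_image(9 / 16, 1.0, "Height"))   # (~1.778, 1.0): overflows horizontally
```

Note that when `image_aspect` equals `pyramid_aspect`, all three modes return the same result, matching the observation above that the fit options only matter when the aspects differ.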
Projection Mode
– Light: Projects the texture as a diffuse/specular light.
– Ambient Light: Uses an ambient light for the projection.
– Texture: When used in conjunction with the Catcher node, this mode allows re-lightable texture
projections. The projection strikes only objects that use the catcher material as part of their
material shaders.
One useful trick is to connect a Catcher node to the Specular Texture input on a 3D Material node
(such as a Blinn). This causes any object using the Blinn material to receive the projection as part
of the specular highlight. This technique can be used in any material input that uses texture maps,
such as the Specular and Reflection maps.
Shadows
Since the projector is based on a spotlight, it is also capable of casting shadows using shadow maps.
The controls under this reveal are used to define the size and behavior of the shadow map.
– Enable Shadows: The Enable Shadows checkbox should be selected if the light is to produce
shadows. This defaults to selected.
– Shadow Color: Use this standard Color control to set the color of the shadow.
This defaults to black (0, 0, 0).
– Density: The Shadow Density determines the transparency of the shadow. A density of 1.0
produces a completely opaque shadow, whereas lower values make the shadow increasingly transparent.
– Shadow Map Size: The Shadow Map Size control determines the size of the bitmap used to create
the shadow map. Larger values produce more detailed shadow maps at the expense of memory
and performance.
– Shadow Map Proxy: The Shadow Map Proxy determines the size of the shadow map used for
proxy and auto proxy calculations. A value of 0.5 would use a 50% shadow map.
– Multiplicative/Additive Bias: Shadows are essentially textures applied to objects in the scene,
so there is occasionally Z-fighting, where the portions of the object that should be receiving the
shadows render over the top of the shadow instead. Bias works by adding a small depth offset to
move the shadow away from the surface it is shadowing, eliminating the Z-fighting. Too little bias
and the objects can self-shadow themselves. Too much bias and the shadow can become separated
from the surface. Adjust the multiplicative bias first, then fine tune the result using the additive
bias control.
– Force All Materials Non-Transmissive: Normally, an RGBAZ shadow map is used when rendering
shadows. By enabling this option, you are forcing the renderer to use a Z-only shadow map.
This can lead to significantly faster shadow rendering while using one-fifth as much memory. The
disadvantage is that you can no longer cast “stained-glass”-like shadows.
– Shadow Map Sampling: Sets the quality for sampling of the shadow map.
– Softness Falloff: This slider appears when Softness is set to Variable. It controls how fast the
softness of shadow edges grows with distance; more precisely, how fast the shadow map filter size
grows based on the distance between shadow caster and receiver. Its effect is mediated by the
values of the Min and Max Softness sliders.
– Min Softness: This slider appears when Softness is set to Variable. It controls the minimum
softness of the shadow. The closer the shadow is to the object casting it, the sharper it is, up to the
limit set by this slider.
– Max Softness: This slider appears when Softness is set to Variable. It controls the maximum
softness of the shadow. The farther the shadow is from the object casting it, the softer it is, up to
the limit set by this slider.
Common Controls
Transform and Settings Tabs
The remaining Transform and Settings tabs are common to many 3D nodes. Their descriptions can be
found in “The Common Controls” section at the end of this chapter.
Renderer 3D [3RN]
NOTE: The OpenGL renderer respects the Color Depth option in the Image tab of the
Renderer 3D node. This can cause slowdowns on certain graphics cards when rendering to
int16 or float32.
Inputs
The Renderer 3D node has two inputs on the node. The main scene input takes in the Merge 3D or
other 3D nodes that need to be converted to 2D. The effect mask limits the Renderer 3D output.
– SceneInput: The orange scene input is a required input that accepts a 3D scene that you want
to convert to 2D.
– EffectMask: The blue effects mask input uses a 2D image to mask the output of the node.
Renderer 3D connected directly after a Merge 3D, rendering the 3D scene to a 2D image
Render 3D controls
Controls Tab
Camera
The Camera menu is used to select which camera from the scene is used when rendering. The Default
setting uses the first camera found in the scene. If no camera is located, the default perspective view
is used instead.
Eye
The Eye menu is used to configure rendering of stereoscopic projects. The Mono option ignores the
stereoscopic settings in the camera. The Left and Right options translate the camera using the stereo
Separation and Convergence options defined in the camera to produce either left- or right-eye
outputs. The Stacked option places the two images one on top of the other instead of side by side.
Reporting
The first two checkboxes in this section can be used to determine whether the node prints warnings
and errors produced while rendering to the console. The second set of checkboxes tells the node
whether it should abort rendering when a warning or error is encountered. The default for this node
enables all four checkboxes.
Renderer Type
This menu lists the available render engines. Fusion provides three: the software renderer, OpenGL
renderer, and the OpenGL UV render engine. Additional renderers can be added via third-
party plug-ins.
Software Controls
Output Channels
Besides the usual Red, Green, Blue, and Alpha channels, the software renderer can also embed the
following channels into the image. Enabling additional channels consumes additional memory and
processing time, so these should be used only when required.
– RGBA: This option tells the renderer to produce the Red, Green, Blue, and Alpha color channels of
the image. These channels are required, and they cannot be disabled.
– Z: This option enables rendering of the Z-channel. The pixels in the Z-channel contain a value that
represents the distance of each pixel from the camera. Note that the Z-channel values cannot
include anti-aliasing. In pixels where multiple depths overlap, the frontmost depth value is used for
this pixel.
– Coverage: This option enables rendering of the Coverage channel. The Coverage channel
contains information about which pixels in the Z-buffer provide coverage (are overlapping with
other objects). This helps nodes that use the Z-buffer to provide a small degree of anti-aliasing.
The value of the pixels in this channel indicates, as a percentage, how much of the pixel is
composed of the foreground object.
– BgColor: This option enables rendering of the BgColor channel. This channel contains the color
values from objects behind the pixels described in the Coverage channel.
– Normal: This option enables rendering of the X, Y, and Z Normals channels. These three channels
contain pixel values that indicate the orientation (direction) of each pixel in the 3D space. A color
channel containing values in a range from [–1,1] represents each axis.
– TexCoord: This option enables rendering of the U and V mapping coordinate channels. The pixels
in these channels contain the texture coordinates of the pixel. Although texture coordinates are
processed internally within the 3D system as three-component UVW, Fusion images store only UV
components. These components are mapped into the Red and Green color channel.
– ObjectID: This option enables rendering of the ObjectID channel. Each object in the 3D
environment can be assigned a numeric identifier when it is created. The pixels in this floating-
point image channel contain the values assigned to the objects that produced the pixel. Empty
pixels have an ID of 0, and the channel supports values as high as 65534. Multiple objects can
share a single Object ID. This buffer is useful for extracting mattes based on the shapes of objects
in the scene.
– MaterialID: This option enables rendering of the Material ID channel. Each material in the 3D
environment can be assigned a numeric identifier when it is created. The pixels in this floating-
point image channel contain the values assigned to the materials that produced the pixel. Empty
pixels have an ID of 0, and the channel supports values as high as 65534. Multiple materials
can share a single Material ID. This buffer is useful for extracting mattes based on a texture; for
example, a mask containing all the pixels that comprise a brick texture.
Lighting
– Enable Lighting: When the Enable Lighting checkbox is selected, objects are lit by any lights in
the scene. If no lights are present, all objects are black.
– Enable Shadows: When the Enable Shadows checkbox is selected, the renderer produces
shadows, at the cost of some speed.
OpenGL Controls
Output Channels
In addition to the usual Red, Green, Blue, and Alpha channels, the OpenGL render engine can also
embed the following channels into the image. Enabling additional channels consumes additional
memory and processing time, so these should be used only when required.
– RGBA: This option tells the renderer to produce the Red, Green, Blue, and Alpha color channels of
the image. These channels are required, and they cannot be disabled.
– Z: This option enables rendering of the Z-channel. The pixels in the Z-channel contain a value that
represents the distance of each pixel from the camera. Note that the Z-channel values cannot
include anti-aliasing. In pixels where multiple depths overlap, the frontmost depth value is used for
this pixel.
– Normal: This option enables rendering of the X, Y, and Z Normals channels. These three channels
contain pixel values that indicate the orientation (direction) of each pixel in the 3D space. A color
channel containing values in a range from [–1,1] represents each axis.
Anti-Aliasing
Anti-aliasing can be enabled for each channel through the Channel menu. It produces an output image
with higher quality anti-aliasing by brute force, rendering a much larger image, and then rescaling it
down to the target resolution. Rendering a larger image in the first place, and then using a Resize node
to bring the image to the desired resolution can achieve the exact same results. Using the
supersampling built into the renderer offers two distinct advantages over this method.
The rendering is not restricted by memory or image size limitations. For example, consider the steps to
create a float-16 1920 x 1080 image with 16x supersampling. Using the traditional Resize node would
require first rendering the image with a resolution of 30720 x 17280, and then using a Resize to scale
this image back down to 1920 x 1080. Simply producing the image would require nearly 4 GB of
memory. When anti-aliasing is performed on the GPU, the OpenGL renderer can use tile rendering to
significantly reduce memory usage.
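The memory figure quoted above can be checked with quick back-of-envelope arithmetic, assuming an RGBA buffer at float16 and the 16x factor applied per axis:

```python
width, height, factor = 1920, 1080, 16          # 16x supersampling per axis
channels, bytes_per_channel = 4, 2              # RGBA at float16 (2 bytes/channel)
ss_w, ss_h = width * factor, height * factor    # the supersized intermediate image
total_bytes = ss_w * ss_h * channels * bytes_per_channel
print(ss_w, ss_h)                               # 30720 17280
print(round(total_bytes / 2**30, 2), "GiB")     # ~3.96 GiB, i.e., "nearly 4 GB"
```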
The GL renderer can perform the rescaling of the image directly on the GPU more quickly than the
CPU can manage it. Generally, the more GPU memory the graphics card has, the faster the operation
is performed.
Interactively, Fusion skips the anti-aliasing stage unless the HiQ button is selected in the Time Ruler.
Final quality renders always include supersampling, if it is enabled.
Because of hardware limitations, point geometry (particles) and lines (locators) are always rendered at
their original size, independent of supersampling. This means that these elements are scaled down
from their original sizes, and likely appear much thinner than expected.
TIP: In some cases, a supersampled Z-buffer improves quality, but in others, such as when
using the Merge node’s PerformDepthMerge option, it may make things worse.
Enable (LowQ/HiQ)
These two checkboxes are used to enable anti-aliasing of the rendered image.
Filter Type
When downsampling the supersized image, the surrounding pixels around a given pixel are often used
to give a more realistic result. There are various filters available for combining these pixels. More
complex filters can give better results but are usually slower to calculate. The best filter for the job
often depends on the amount of scaling and on the contents of the image itself.
The available filters are:
– Bi-Linear (triangle): A simplistic filter that produces relatively clean and fast results.
– Bi-Spline (cubic): Produces better results with continuous tone images but is slower than
Quadratic. If the images have fine detail in them, the results may be blurrier than desired.
– Catmull-Rom: Produces good results with continuous tone images that are scaled down,
producing sharp results with finely detailed images.
– Mitchell: Similar to Catmull-Rom but produces better results with finely detailed images.
It is slower than Catmull-Rom.
– Sinc: An advanced filter that produces very sharp, detailed results; however, it may produce
visible “ringing” in some situations.
– Bessel: Similar to the Sinc filter but may be slightly faster.
Window Method
The Window Method menu appears only when the reconstruction filter is set to Sinc or Bessel.
Accumulation Effects
Accumulation effects are used for creating depth of field effects. Enable both the Enable Accumulation
Effects and Depth of Field checkboxes, and then adjust the quality and Amount sliders.
The blurrier you want the out-of-focus areas to be, the higher the quality setting you need.
A low amount setting causes more of the scene to be in focus.
The accumulation effects work in conjunction with the Focal plane setting located in the Camera 3D
node. Set the Focal Plane to the same distance from the camera as the subject you want to be in
focus. Animating the Focal Plane setting creates rack focus effects.
Lighting
– Enable Lighting: When the Enable Lighting checkbox is selected, any lights in the scene light
objects. If no lights are present, all objects are black.
– Enable Shadows: When the Enable Shadows checkbox is selected, the renderer produces
shadows, at the cost of some speed.
Texturing
– Texture Depth: Lets you specify the bit depth of texture maps.
– Warn about unsupported texture depths: Enables a warning if texture maps are in an
unsupported bit depth that Fusion can’t process.
Lighting Mode
The Per-vertex lighting model calculates lighting at each vertex of the scene’s geometry. This
produces a fast approximation of the scene’s lighting but tends to produce blocky lighting on poorly
tessellated objects. The Per-pixel method uses a different approach that does not rely on the detail in
the scene’s geometry for lighting, so it generally produces superior results.
Although the per-pixel lighting with the OpenGL renderer produces results closer to that produced by
the more accurate software renderer, it still has some disadvantages. The OpenGL renderer is less
capable of dealing correctly with semi-transparency, soft shadows, and colored shadows, even with
per-pixel lighting. The color depth of the rendering is limited by the capabilities of the graphics card in
the system.
Shading Model
Use this menu to select a shading model to use for materials in the scene. Smooth is the shading
model employed in the viewers, and Flat produces a simpler and faster shading model.
Wireframe
Renders the whole scene as wireframe. This shows the edges and polygons of the objects. The edges
are still shaded by the material of the objects.
Wireframe Anti-Aliasing
Enables anti-aliasing for the Wireframe render.
OpenGL UV Renderer
The OpenGL UV renderer is a special case render engine. It is used to take a model with existing
textures and render it out to produce an unwound flattened 2D version of the model. Optionally,
lighting can be baked in. This is typically done so you can then paint on the texture and reapply it.
Baked-in lighting: After you have baked lighting into a model’s texture, you need to be careful
to turn lighting off on the object later when you render it with the baked-in lighting texture.
Single textures/multiple destinations: Beware of cases where a single area of the texture map
is used on multiple areas of the model. This is often done to save texture memory and decrease
modeling time. An example is the texture for a person where the artist mirrored the left side
mesh/uvs/texture to produce the right side. Trying to bake in lighting in this case won’t work.
Unwrapping more than one mesh: Unwrapping more than one mesh at once can cause
problems. The reason is that most models are authored so they make maximum usage of (u,v)
in [0,1] x [0,1], so that in general models overlap each other in UV space.
Seams: When the UV gutter size is left at 0, this produces seams when the model is retextured
with the unwrapped texture.
UV Gutter Size: Increase this value to hide seams between faces.
Common Controls
Image and Settings Tabs
The remaining controls for the Image and Settings tabs are common to many 3D nodes.
Their descriptions can be found in “The Common Controls” section at the end of this chapter.
Replace Material 3D [3RPL]
Inputs
The Replace Material node has two inputs: one for the 3D scene, object, or 3D text that contains the
original material, and a material input for the new replacement material.
– SceneInput: The orange scene input accepts the 3D scene or 3D text whose material you want
to replace.
– MaterialInput: The green material input accepts either a 2D image or a 3D material. If a 2D
image is provided, it is used as a diffuse texture map for the basic material built into the node. If
a 3D material is connected, then the basic material is disabled.
Inspector
Controls Tab
Enable
This checkbox enables the material replacement. This is not the same as the red switch in the upper-
left corner of the Inspector. The red switch disables the tool altogether and passes the image on
without any modification. The enable checkbox is limited to the effect part of the tool. Other parts, like
scripts in the Settings tab, still process as normal.
Replace Mode
The Replace Mode section offers four methods of replacing each RGBA channel:
– Keep: Prevents the channel from being replaced by the input material.
– Replace: Replaces the material for the corresponding color channel.
– Blend: Blends the materials together.
– Multiply: Multiplies the channels of both inputs.
Replace Normals 3D [3RPN]
Inputs
The Replace Normals node has a single input for the 3D scene or incoming geometry.
– SceneInput: The orange scene input accepts a 3D scene or 3D geometry that contains the
normal coordinates you want to modify.
Controls Tab
The options in the Controls tab deal with repairing 3D geometry and then recomputing
normals/tangents.
Recompute
Controls when normals/tangents are recomputed.
– Always: The normals on the mesh are always recomputed.
– If Not Present: The normals on the mesh are recomputed only if they are not present.
– Never: The normals are never computed. This option is useful when animating.
Smoothing Angle
Adjacent faces with angles (in degrees) smaller than this value have their adjoining edges smoothed
across. A typical value for the Smoothing Angle is between 20 and 60 degrees.
There is special case code for 0.0f and 360.0f (f stands for floating-point value). When set to 0.0f,
faceted normals are produced; this is useful for artistic effect.
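The Smoothing Angle rule can be sketched as a comparison between the normals of adjacent faces; the helper below is hypothetical (it assumes unit-length normals) and is not Fusion's actual code:

```python
import math

def edge_is_smoothed(n1, n2, smoothing_angle_deg):
    """Hypothetical check mirroring the Smoothing Angle rule: adjacent faces
    whose (unit) face normals differ by less than the threshold angle have
    their shared edge smoothed; otherwise the edge stays faceted."""
    dot = sum(a * b for a, b in zip(n1, n2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle < smoothing_angle_deg

# Faces meeting at 30 degrees: smoothed at a 40-degree threshold, faceted at 20.
n_a = (0.0, 0.0, 1.0)
n_b = (0.0, math.sin(math.radians(30)), math.cos(math.radians(30)))
print(edge_is_smoothed(n_a, n_b, 40))  # True
print(edge_is_smoothed(n_a, n_b, 20))  # False
```

A threshold of 0 makes every edge faceted, which matches the special-case behavior described above.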
Flip Normals
Flipping of tangents can sometimes be confusing. Flip has an effect if the mesh has tangent vectors.
Most meshes in Fusion don’t have tangent vectors until they reach a Renderer 3D, though. Also, when
viewing tangent vectors in the viewers, the tangent vectors are created if they don’t exist. The
confusing thing is if you view a Cube 3D that has no tangent vectors and press the FlipU/FlipV button,
nothing happens. This is a result of there being no tangent vectors to create, but later the GL renderer
can create some (unflipped) tangent vectors.
#1 The FBX importer recomputes the normals if they don’t exist, but you can get a higher-
quality result from the Replace Normals node.
#2 Bump maps can sometimes depend on the model’s normals. Specifically, when you
simplify a complex high polygon model to a low polygon model + bump map, the normals and
bump map can become “linked.” Recomputing the normals in this case can make the model
look funny. The bump map was intended to be used with the original normals.
#3 Most primitives in Fusion are not generated with tangents; when needed, they are
generated on the fly by a Renderer 3D and cached.
#4 Tangents currently are only needed for bump mapping. If a material needs bump mapping,
then tangents are created. These tangents are created with some default settings (e.g.,
Smoothing Angle, and so on). If you don’t want Fusion automatically creating tangents, you
can use the Replace Normals node to create them manually.
#5 All computations are done in the local coordinates of the geometries instead of in the
coordinate system of the Replace Normals 3D node. This can cause problems when there is a
non-uniform scale applied to the geometry before Replace Normals 3D is applied.
Common Controls
Settings Tab
The Settings tab is common to many 3D nodes. The description of these controls can be found in
“The Common Controls” section at the end of this chapter.
Replicate 3D [3REP]
Inputs
There are two inputs on the Replicate 3D node: one for the destination geometry that contains the
vertices, and one for the 3D geometry you want to replicate.
– Destination: The orange destination input accepts a 3D scene or geometry with vertex
positions, either from the mesh or 3D particle animations.
– Input[#]: The input accepts the 3D scene or geometry for replicating. Once this input is
connected, a new input for alternating 3D geometry is created.
Inspector
Controls Tab
Step
Defines how many positions are skipped. For example, a step of 3 means that only every third vertex
of the destination mesh is used, while a step of 1 means that all positions are used.
The Step setting helps to keep reasonable performance for big destination meshes. On parametric
geometry like a torus, it can be used to isolate certain parts of the mesh.
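The Step behavior amounts to a simple stride over the destination vertices, as in this small sketch (vertex indices here are stand-ins, not Fusion data):

```python
# A step of 3 uses only every third vertex of the destination mesh;
# a step of 1 would use them all.
dest_vertices = list(range(12))          # stand-in vertex indices
step = 3
replicate_positions = dest_vertices[::step]
print(replicate_positions)               # [0, 3, 6, 9]
```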
Input Mode
This menu defines in which order multiple input scenes are replicated at the destination. No matter
which setting you choose, if only one input scene is supplied this setting has no effect.
– When set to Loop, the inputs are used successively. The first input is at the first position, the
second input at the second position, and so on. If there are more positions in the destination
present than inputs, the sequence is looped.
– When set to Random, a definite but random input for each position is used based on the seed in
the Jitter tab. This input mode can be used to simulate variety with few input scenes.
– The Death of Particles setting causes the input geometries’ IDs to change; therefore, their copy
order may change.
Time Offset
Use the Time Offset slider to offset any animations that are applied to the input geometry by a set
amount per copy. For example, set the value to –1.0 and use a cube set to rotate on the Y-axis as the
source. The first copy shows the animation from a frame earlier; the second copy shows animation
from a frame before that, etc.
This can be used with great effect on textured planes—for example, where successive frames of a
video clip can be shown.
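The per-copy offset described above can be modeled as a simple linear rule; this is an illustrative sketch, not Fusion's internals:

```python
def copy_frame(current_frame, time_offset, copy_index):
    # With a Time Offset of -1.0, copy 1 shows the animation one frame
    # earlier, copy 2 two frames earlier, and so on.
    return current_frame + time_offset * copy_index

print([copy_frame(10, -1.0, i) for i in range(4)])  # [10.0, 9.0, 8.0, 7.0]
```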
Alignment
Alignment specifies how to align the copies with respect to the destination mesh normal or
particle rotation.
– Not Aligned: Does not align the copy. It stays rotated in the same direction as its input mesh.
– Aligned TBN: This mode results in a more accurate and stable alignment based on the tangent,
binormal, and normal of the destination point. This works best for particles and geometric shapes.
On unwelded meshes, two copies of multiple unwelded points at the same position may lead to
different alignments because of their individual normals.
Color
Affects the diffuse color or shader of each copy based on the input’s particle color.
– Use Object Color: Does not use the color of the destination particle.
– Combine Particle Color: Uses the shader of any input mesh and modifies the diffuse color to
match the color from the destination particle.
– Use Particle Color: Replaces the complete shader of any input mesh with a default shader. Its
diffuse color is taken from the destination particle.
Translation
These three sliders tell the node how much offset to apply to each copy. An X Offset of 1 would offset
each copy one unit along the X-axis from the last copy.
Rotation Order
These buttons can be used to set the order in which rotations are applied to the geometry. Setting the
rotation order to XYZ would apply the rotation on the X-axis first, followed by the Y-axis rotation, and
then the Z-axis rotation.
XYZ Rotation
These three rotation sliders tell the node how much rotation to apply to each copy.
XYZ Pivot
The pivot controls determine the position of the pivot point used when rotating each copy.
Lock XYZ
When the Lock XYZ checkbox is selected, any adjustment to the scale is applied to all three axes
simultaneously.
If this checkbox is disabled, the Scale slider is replaced with individual sliders for the X, Y,
and Z scales.
Scale
The Scale control sets how much scaling to apply to each copy.
Jitter Tab
The Jitter tab can be used to introduce randomness to various parameters.
Random Seed/Randomize
The Random Seed is used to generate the jitter applied to the replicated objects. Two Replicate nodes
with identical settings but different random seeds will produce two completely different results. Click
the Randomize button to assign a Random Seed value.
Time Offset
Use the Time Offset slider to offset any animations that are applied to the source geometry. Unlike
Time Offset on the Controls tab, Jitter Time Offset is random, based on the Random Seed setting.
Ribbon 3D [3RI]
Inputs
There are two inputs on the Ribbon 3D node: one for the destination geometry that contains the
vertices, and one for the 3D geometry you want to replicate.
– 3D Scene: The orange input accepts a 3D scene or geometry.
– Material: The input accepts the 2D texture for the ribbon.
Ribbon 3D controls
Controls Tab
The Controls tab determines the number of ribbon strands, their size, length, and spacing.
Number of Lines
The number of parallel lines drawn between the start point and end point.
Line Thickness
The user interface allows the line thickness to take on a floating-point value, but some graphics
cards allow only integer values. Some cards may only allow lines equal to or thicker than one, or may
max out at a certain value.
Subdivision Level
The number of vertices on each line between the start and end points. The higher the number, the
more precise and smooth the 3D displacement appears.
Ribbon Width
Determines how far the lines are apart from each other.
Start
XYZ control to set the start point of the ribbon.
End
XYZ control to set the end point of the ribbon.
Ribbon Rotation
Allows rotation of the ribbon around the virtual axis defined by the start and end points.
Anti-Aliasing
Allows you to apply anti-aliasing to the rendered lines. Using anti-aliasing isn’t necessarily
recommended. When activated, there may be gaps between the line segments. This is especially
noticeable with high values of line thickness. Again, the way lines are drawn is completely up to the
graphics card, which means that these artifacts can vary from card to card.
Shape 3D [3SH]
Inputs
There are two optional inputs on the Shape 3D. The scene input can be used to combine additional
geometry with the Shape 3D, while the material input can be used to texture map the Shape
3D object.
– SceneInput: Although the Shape 3D creates its own 3D geometry, you can use the orange
scene input to combine an additional 3D scene or geometry.
– MaterialInput: The green input accepts either a 2D image or a 3D material. If a 2D image is
provided, it is used as a diffuse texture map for the basic material built into the node. If a 3D
material is connected, then the basic material is disabled.
Shape 3D controls
Controls Tab
The Controls tab allows you to select a shape and modify its geometry. Different controls appear
based on the specific shape that you choose to create.
Shape
This menu allows you to select the primitive geometry produced by the Shape 3D node. The
remaining controls in the Inspector change to match the selected shape.
– Lock Width/Height/Depth: [plane, cube] If this checkbox is selected, the width, height, and depth
controls are locked together as a single size slider. Otherwise, individual controls over the size of
the shape along each axis are provided.
– Size Width/Height/Depth: [plane, cube] Used to control the size of the shape.
Cube Mapping
When Cube is selected in the shape menu, the Cube uses cube mapping to apply the Shape node’s
texture (a 2D image connected to the material input on the node).
Radius
When a Sphere, Cylinder, Cone, or Torus is selected in the shape menu, this control sets the radius of
the selected shape.
Top Radius
When a cone is selected in the Shape menu, this control is used to define a radius for the top of a
cone, making it possible to create truncated cones.
Start/End Angle
When the Sphere, Cylinder, Cone, or Torus shape is selected in the Shape menu, this range control
determines how much of the shape is drawn. A start angle of 180° and end angle of 360° would only
draw half of the shape.
Start/End Latitude
When a Sphere or Torus is selected in the Shape menu, this range control is used to crop or slice the
object by defining a latitudinal subsection of the object.
Section
When the Torus is selected in the Shape menu, Section controls the thickness of the tube making up
the torus.
Subdivision Level/Base/Height
The Subdivision controls are used to determine the tessellation of the mesh on all shapes. The higher
the subdivision, the more vertices each shape has.
Wireframe
Enabling this checkbox causes the mesh to render only the wireframe for the object.
Common Controls
Controls, Materials, Transform and Settings Tabs
The controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID in the
Controls tab are common in many 3D nodes. The Materials tab, Transforms tab, and Settings tab in the
Inspector are also duplicated in other 3D nodes. These common controls are described in detail at the
end of this chapter in “The Common Controls” section.
NOTE: If you pipe the texture directly into the sphere, it is also mirrored horizontally. You can
change this by using a Transform node first.
Soft Clip [3SC]
Inputs
The Soft Clip node includes only a single input, for a 3D scene with a camera connected to it.
– SceneInput: The orange scene input is a required connection. It accepts a 3D scene input that
includes a Camera 3D node.
Controls Tab
The Controls tab determines how an object transitions between opaque and transparent as it moves
closer to the camera.
Enable
This checkbox can be used to enable or disable the node. This is not the same as the red switch in the
upper-left corner of the Inspector. The red switch disables the tool altogether and passes the image
on without any modification. The Enable checkbox disables only the effect of the tool; other parts, like
scripts in the Settings tab, still process as normal.
Smooth Transition
By default, an object coming closer and closer to the camera slowly fades out with a linear
progression. With the Smooth Transition checkbox enabled, the transition changes to a nonlinear
curve, arguably a more natural-looking transition.
Radial
By default, the soft clipping is done based on the perpendicular distance to a plane (parallel with the
near plane) passing through the eye point. When the Radial option is checked, the Radial distance to
the eye point is used instead of the Perpendicular distance. The problem with Perpendicular distance
soft clipping is that when you move the camera about, as objects on the left or right side of the frustum
move into the center, they become less clipped, although they remain the same distance from the eye.
Radial soft clip fixes this. Sometimes Radial soft clipping is not desirable.
For example, if you apply soft clip to an object that is close to the camera, like an image plane, the
center of the image plane could be unclipped while the edges could be fully clipped because they are
farther from the eye point.
Transparent/Opaque Distance
Defines the range of the soft clip. The objects begin to fade in from an opacity of 0 at the Transparent
distance and are fully visible at the Opaque distance. All units are expressed as distance from the
camera along the Z-axis.
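The combined behavior of the distance metric (Perpendicular vs. Radial), the Smooth Transition option, and the Transparent/Opaque distances can be summarized numerically. The following Python sketch is purely illustrative and is not Fusion's actual implementation; all function and parameter names are hypothetical.

```python
import math

def soft_clip_opacity(point, eye, forward, transparent, opaque,
                      radial=False, smooth=False):
    """Return a 0..1 opacity for a vertex at `point`, per the controls above."""
    offset = [p - e for p, e in zip(point, eye)]
    if radial:
        # Radial: straight-line distance to the eye point.
        dist = math.sqrt(sum(c * c for c in offset))
    else:
        # Perpendicular: distance along the view direction, i.e., to a plane
        # parallel with the near plane passing through the eye point.
        dist = sum(c * f for c, f in zip(offset, forward))
    # Linear fade from 0 at the Transparent distance to 1 at the Opaque distance.
    t = (dist - transparent) / (opaque - transparent)
    t = min(max(t, 0.0), 1.0)
    if smooth:
        # Smooth Transition: replace the linear ramp with a smoothstep curve.
        t = t * t * (3.0 - 2.0 * t)
    return t
```

With transparent=1 and opaque=3, a point two units in front of the eye is at half opacity; a radially distant point past the Opaque distance is fully visible.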
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Spherical Camera node has two inputs.
– Image: This orange image input requires an image in a spherical layout, which can be any of
LatLong (2:1 equirectangular), Horizontal/Vertical Cross, or Horizontal/Vertical Strip.
– Stereo Input: The green input for a right stereo camera if you are working in stereo VR.
Controls Tab
Layout
– VCross and HCross: VCross and HCross are the six square faces of a cube laid out in a cross,
vertical or horizontal, with the forward view in the center of the cross, in a 3:4 or 4:3 image.
– VStrip and HStrip: VStrip and HStrip are the six square faces of a cube laid vertically or
horizontally in a line, ordered as Left, Right, Up, Down, Back, Front (+X, -X, +Y, -Y, +Z, -Z), in a 1:6 or
6:1 image.
– LatLong: LatLong is a single 2:1 image in equirectangular mapping.
Near/Far Clip
The clipping plane is used to limit what geometry in a scene is rendered based on the object’s
distance from the camera’s focal point. This is useful for ensuring that objects that are extremely close
to the camera are not rendered and for optimizing a render to exclude objects that are too far away to
be useful in the final rendering.
The default perspective camera ignores this setting unless the Adaptively Adjust Near/Far Clip
checkbox control below is disabled.
The values are expressed in units, so a far clipping plane of 20 means that any objects more than 20
units from the camera are invisible to the camera. A near clipping plane of 0.1 means that any objects
closer than 0.1 units are also invisible.
NOTE: A smaller range between the near and far clipping planes allows greater accuracy in all
depth calculations. If a scene begins to render strange artifacts on distant objects, try
increasing the distance for the near clip plane.
Use the vertical aperture size to get the vertical angle of view and the horizontal aperture size
to get the horizontal angle of view.
Stereo Method
This control allows you to adjust your stereoscopic method to your preferred working model.
Toe In
Both cameras point at a single focal point. Though the result is stereoscopic, the vertical parallax
introduced by this method can cause discomfort for the audience.
Off Axis
Often regarded as the correct way to create stereo pairs, this is the default method in Fusion. Off Axis
introduces no vertical parallax, thus creating less stressful stereo images.
Parallel
The cameras are shifted parallel to each other. Since this is a purely parallel shift, there is no
Convergence Distance control. Parallel introduces no vertical parallax, thus creating less stressful
stereo images.
Eye Separation
Defines the distance between both stereo cameras. If the Eye Separation is set to a value larger
than 0, controls for each camera are shown in the viewer when this node is selected. There is no
Convergence Distance control in Parallel mode.
Convergence Distance
This control sets the stereoscopic convergence distance, defined as a point located along the Z-axis
of the camera that determines where both left and right eye cameras converge.
Control Visibility
Allows you to selectively activate the onscreen controls that are displayed along with the camera.
– Frustum: Displays the actual viewing cone of the camera.
– View Vector: Displays a white line inside the viewing cone, which can be used to determine the
shift when in Parallel mode.
– Near Clip: The Near clipping plane. This plane can be subdivided for better visibility.
– Far Clip: The Far clipping plane. This plane can be subdivided for better visibility.
– Plane of Focus: The camera focal point according to the Plane of Focus slider explained above.
This plane can be subdivided for better visibility.
– Convergence Distance: The point of convergence when using Stereo mode. This plane can be
subdivided for better visibility.
Text 3D [3TXT]
Inputs
– SceneInput: The orange scene input accepts a 3D scene that can be combined with the 3D
text created in the node.
– ColorImage: The green color image input accepts a 2D image and wraps it around the text as
a texture. This input is visible only when Image is selected in the Material Type menu located in
the Shading tab.
– BevelTexture: The magenta bevel texture input accepts a 2D image and wraps it around the
bevel as a texture. This input is visible only when one Material is disabled in the Shader tab and
Image is selected in the Bevel Type menu.
Inspector
Text 3D controls
Text Tab
The Text 3D text tab in the Inspector is divided into three sections: Text, Extrusion, and Advanced
Controls. The Text section includes parameters that are familiar to anyone who has used a word
processor. It includes commonly used text formatting options. The Extrusion section includes controls
to extrude the text and create beveled edges for the text. The Advanced controls are used for
kerning options.
Styled Text
The Edit box in this tab is where the text to be created is entered. Any common character can be
typed into this box. The common OS clipboard shortcuts (Command-C or Ctrl-C to copy, Command-X
or Ctrl-X to cut, Command-V or Ctrl-V to paste) also work; however, right-clicking on the Edit box
displays a custom contextual menu with several modifiers you can add for more animation and
formatting options.
Color
This control sets the basic tint color of the text. This is the same Color control displayed in the Material
type section of the Shader tab.
Size
This control is used to increase or decrease the size of the text. This is not like selecting a point size in
a word processor. The size is relative to the width of the image.
Tracking
The Tracking parameter adjusts the uniform spacing between each character of text.
Line Spacing
Line Spacing adjusts the distance between each line of text. This is sometimes called leading in
word-processing applications.
V Anchor
The Vertical Anchor controls consist of three buttons and a slider. The three buttons are used to align
the text vertically to the top, middle, or bottom baseline of the text. The slider can be used to
customize the alignment. Setting the Vertical Anchor affects not only how the text is rotated but also
the location for line spacing adjustments. This control is most often used when the Layout type is set to
Frame in the Layout tab.
V Justify
The Vertical Justify slider allows you to customize the vertical alignment of the text from the V Anchor
setting to full justification so it is aligned evenly along the top and bottom edges. This control is most
often used when the Layout type is set to Frame in the Layout tab.
H Anchor
The Horizontal Anchor controls consist of three buttons and a slider. The three buttons justify the text
alignment to the left edge, middle, or right edge of the text. The slider can be used to customize the
justification. Setting the Horizontal Anchor affects not only how the text is rotated but also the location
for tracking (horizontal spacing) adjustments. This control is most often used when the Layout type is set to
Frame in the Layout tab.
H Justify
The Horizontal Justify slider allows you to customize the justification of the text from the H Anchor
setting to full justification so it is aligned evenly along the left and right edges. This control is most
often used when the Layout type is set to Frame in the Layout tab.
Direction
This menu provides options for determining the direction in which the text is to be written.
Line Direction
These menu options are used to determine the text flow from top to bottom, bottom to top, left to right,
or right to left.
Write On
This range control is used to quickly apply simple Write On and Write Off animation to the text. To
create a Write On effect, animate the End portion of the control from 0 to 1 over the length of time
required. To create a Write Off effect, animate the Start portion of the range control from 0 to 1.
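How such a Start/End range maps to visible characters can be sketched as follows. This is a hypothetical illustration, not Fusion's internals; the function name is invented.

```python
def visible_characters(text, start, end):
    """Return the slice of `text` inside the Start/End range (each 0..1).

    Animating End from 0 to 1 progressively reveals text (a Write On);
    animating Start from 0 to 1 progressively hides it (a Write Off).
    """
    n = len(text)
    first = round(start * n)   # index of the first visible character
    last = round(end * n)      # index just past the last visible character
    return text[first:last]
```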
Bevel Depth
Increase the value of the Bevel Depth slider to bevel the text. The text must have extrusion before this
control has any effect.
Bevel Width
Use the Bevel Width control to increase the width of the bevel.
Smoothing Angle
Use this control to adjust the smoothing angle applied to the edges of the bevel.
Front/Back Bevel
Use these checkboxes to enable beveling for the front and back faces of the text separately.
Custom Extrusion
In Custom mode, the Smoothing Angle controls the smoothing of normals around the edges of a text
character. The spline itself controls the smoothing along the extrusion profile. If a spline segment is
smoothed, for example by using the shortcut Shift-S, the normals are smoothed as well. If the control
point is linear, the shading edge is sharp. The first and last control points on the spline define the
extent of the text.
TIP: Splines can also be edited from within the Spline Editor panel. It provides a larger
working space for working with any spline including the Custom Extrusion.
Extrusion profile spline control: Do not try to go to zero size at the Front/Back face, as
self-intersecting faces cause Z-fighting. To avoid this problem, make sure the first and last
control points have their profiles set to 0.
Force Monospaced
This slider control can be used to override the kerning (spacing between characters) that is defined in
the font. Setting this slider to zero (the default value) causes Fusion to rely entirely on the kerning
defined with each character. A value of one causes the spacing between characters to be completely
even, or monospaced.
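In other words, the slider blends linearly between the font's own kerned advance and a fully even advance. The sketch below is an assumption about the blend, not Fusion's actual code; the names are illustrative.

```python
def advance(font_advance, em_width, force_monospaced):
    """Width allotted to a character.

    At 0, the font's kerned advance is used unchanged; at 1, every
    character gets the same even (monospaced) width; values in between
    interpolate linearly.
    """
    t = force_monospaced
    return (1.0 - t) * font_advance + t * em_width
```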
Layout Tab
The Layout Tab is used to position the text in one of four different layout types.
Text 3D Layout tab for changing the layout of the text block
Layout Type
This menu selects the layout type for the text.
– Point: Point layout is the simplest of the layout modes. Text is arranged around an adjustable
center point.
– Frame: Frame layout allows you to define a rectangular frame used to align the text. The alignment
controls are used to justify the text vertically and horizontally within the boundaries of the frame.
– Circle: Circle layout places the text around the curve of a circle or oval. Control is offered over
the diameter and width of the circular shape. When the layout is set to this mode, the Alignment
controls determine whether the text is positioned along the inside or outside of the circle’s edge,
and how multiple lines of text are justified.
– Path: Path layout allows you to shape your text along the edges of a path. The path can be
used simply to add style to the text, or it can be animated using the Position on Path control that
appears when this mode is selected.
Size
This slider is used to control the scale of the layout element. For instance, increasing size when the
layout is set to Frame increases the frame size the text is within.
Rotation Order
These buttons allow you to select the order in which 3D rotations are applied to the text.
X, Y, and Z
These angle controls can be used to adjust the angle of the Layout element along any axis.
Fit Characters
This menu control is visible only when the Layout type is set to Circle. This menu is used to select how
the characters are spaced to fit along the circumference.
Position on Path
The Position on Path control is used to control the position of the text along the path. Values less than
zero or greater than one cause the text to move beyond the ends of the path, continuing in the
direction set by the last two points on the path.
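The extrapolation past the ends of the path can be pictured with a simple polyline evaluator. This is an illustrative sketch assuming a uniform parameterization across segments, not Fusion's path model.

```python
def position_on_path(points, u):
    """Evaluate a 2D polyline at parameter u (0..1 across all segments).

    Values outside 0..1 continue linearly in the direction set by the
    path's first (or last) two points, as the manual describes.
    """
    n = len(points) - 1                     # number of segments
    if u >= 1.0:
        (x0, y0), (x1, y1) = points[-2], points[-1]
        t = (u - 1.0) * n                   # distance past the end, in segments
        return (x1 + (x1 - x0) * t, y1 + (y1 - y0) * t)
    if u <= 0.0:
        (x0, y0), (x1, y1) = points[0], points[1]
        t = u * n                           # negative: before the start
        return (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
    s = u * n
    i = min(int(s), n - 1)                  # segment index
    t = s - i                               # position within the segment
    (x0, y0), (x1, y1) = points[i], points[i + 1]
    return (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
```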
Transform Tab
There are actually two Transform tabs in the Text 3D Inspector. The first Transform tab is unique to the
Text 3D tool, while the second is the common Transform tab found on many 3D nodes. The Text
3D-specific Transform tab is described below since it contains some unique controls for this node.
Transform
This menu determines the portion of the text affected by the transformations applied in this tab.
Transformations can be applied at the line, word, and character levels simultaneously. This menu is
used only to keep the number of visible controls manageable.
Spacing
The Spacing slider is used to adjust the amount of space between each line, word, or character.
Values less than one usually cause the characters to begin overlapping.
Pivot X, Y, and Z
This provides control over the exact position of the axis. By default, the axis is positioned at the
calculated center of the line, word, or character. The pivot control works as an offset, such that a value
of 0.1, 0.1 in this control would cause the axis to be shifted downward and to the right for each of the
text elements. Positive values in the Z-axis slider move the axis further along the axis (away from the
viewer). Negative values bring the axis of rotation closer.
Rotation Order
These buttons are used to determine the order in which transforms are applied. X, Y, and Z would
mean that the rotation is applied to X, then Y, and then Z.
X, Y, and Z
These controls can be used to adjust the angle of the text elements in any of the three dimensions.
Shear X and Y
Adjust these sliders to modify the slanting of the text elements along the X- and Y-axis.
Size X and Y
Adjust these sliders to modify the size of the text elements along the X- and Y-axis.
Shading
The Shading tab for the Text 3D node controls the overall appearance of the text and how lights affect
its surface.
Type
To use a solid color texture, select the Solid mode. Selecting the Image mode reveals a new external
input on the node that can be connected to another 2D image.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular a
material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that tend to inherit their color
from the material color. The basic shader material does not provide an input for textures to control the
specularity of the object. Use nodes from the 3D Material category when more precise control is
required over the specular appearance.
Specular Intensity
Specular Intensity controls the strength of the specular highlight. If the specular intensity texture port
has a valid input, then this value is multiplied by the Alpha value of the input.
Specular Exponent
Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper the
falloff, and the smoother and glossier the material appears. The basic shader material does not
provide an input for textures to control the specular exponent of the object. Use nodes from the 3D
Material category when more precise control is required over the specular exponent.
Image Source
This control determines the source of the texture applied to the material. If the option is set to Tool,
then an input appears on the node that can be used to apply the output of a 2D node as the texture.
Selecting Clip opens a file browser that can be used to select an image or image sequence from disk.
The Brush option provides a list of clips found in the Fusion\brushes folder.
Bevel Material
This option appears only when the Use One Material checkbox control is selected. The controls under
this option are an exact copy of the Material controls above but are applied only to the beveled edge
of the text.
Uncapped 3D Text
To hide the front face of extruded text, uncheck Use One Material on the Shading tab and
reduce the first material’s color to black, including its Alpha value.
Text 3D Modifiers
Right-clicking within the Styled Text box displays a menu with the following text modifiers. Only one
modifier can be applied to a Text 3D Styled Text box. Below is a brief list of the text-specific modifiers,
but for more information see Chapter 122, “Modifiers” in the DaVinci Resolve Reference Manual or
Chapter 61 in the Fusion Reference Manual.
Animate
Use the Animate command to set a keyframe on the entered text and animate the content over time.
Comp Name
Comp Name puts the name of the composition in the Styled Text box and is generally used as a quick
way to create slates.
Follower
Follower is a text modifier that can be used to ripple animation applied to the text across each
character in the text. See “Text Modifiers” at the end of this chapter.
Publish
Publish the text for connection to other text nodes.
Text Scramble
A text modifier used to randomize the characters in the text. See “Text Modifiers” at the end of
this chapter.
Text Timer
A text modifier is used to count down from a specified time or to output the current date and time. See
“Text Modifiers” at the end of this chapter.
Time Code
A text modifier is used to output Time Code for the current frame. See “Text Modifiers” at the end of
this chapter.
Connect To
Use this option to connect the text generated by this Text node to the published output of
another node.
Inputs
The Transform 3D node has a single required input for a 3D scene or 3D object.
– Scene Input: The orange scene input is connected to a 3D scene or 3D object to apply a
second set of transformation controls.
Transform 3D controls
Controls Tab
The Controls tab is the primary tab for the Transform 3D node. It includes controls to translate, rotate,
or scale all elements within a scene without requiring a Merge 3D node.
Translation
– X, Y, Z Offset: Controls are used to position the 3D element in 3D space.
Rotation
– Rotation Order: Use these buttons to select the order used to apply the rotation along each axis
of the object. For example, XYZ would apply the rotation to the X-axis first, followed by the Y-axis,
and then the Z-axis.
– X, Y, Z Rotation: Use these controls to rotate the object around its pivot point. If the Use Target
checkbox is selected, then the rotation is relative to the position of the target; otherwise, the
global axis is used.
Pivot Controls
– X, Y, Z Pivot: A pivot point is the point around which an object rotates. Normally, an object rotates
around its own center, which is considered to be a pivot of 0,0,0. These controls can be used to
offset the pivot from the center.
Scale
– X, Y, Z Scale: If the Lock X/Y/Z checkbox is checked, a single scale slider is shown. This adjusts
the overall size of the object. If the Lock checkbox is unchecked, individual X, Y, and Z sliders are
displayed to allow scaling in any dimension.
NOTE: If the Lock checkbox is checked, scaling of individual dimensions is not possible, even
when dragging specific axes of the Transformation widget in Scale mode.
Import Transform
Opens a file browser where you can select a scene file saved or exported by your 3D application.
It supports the following file types:
dotXSI .xsi
The Import Transform button imports only transformation data. For 3D geometry, lights and cameras,
consider using the File > FBX Import option from the menus.
Transform 3D onscreen
transformation controls
Inputs
The Triangulate 3D node has a single required input for a 3D scene or 3D object.
– Scene Input: The orange scene input is connected to the 3D scene or 3D object you
want to triangulate.
Inspector
Triangulate 3D controls
Controls Tab
There are no controls for this node.
UV Map 3D [3UV]
NOTE: The UV Map 3D node does not put a texture or material on the mesh; it only modifies
the texture coordinates that the materials use. This may be confusing because the material
usually sits upstream, as seen in the Basic Node Setup example below.
UV Map 3D is placed after the Merge 3D, with a camera connected to line up the texture
Inspector
UV Map 3D controls
Controls Tab
The UV Map 3D Controls tab allows you to select Planar, Cylindrical, Spherical, XYZ, and Cubic
mapping modes, which can be applied to basic Fusion primitives as well as imported geometry. The
position, rotation, and scale of the texture coordinates can be adjusted to allow fine control over the
texture mapping.
Map Mode
The Map mode menu is used to define how the texture coordinates are created. You can think of this
menu as a way to select the virtual geometry that projects the UV space on the object.
– Planar: Creates the UV coordinates using a plane.
– Cylindrical: Creates the UV coordinates using a cylindrical-shaped object.
– Spherical: The UVs are created using a sphere.
– XYZ to UVW: The position coordinates of the vertices are converted to UVW coordinates directly.
This is used for working with procedural textures.
– CubeMap: The UVs are created using a cube.
– Camera: Enables the Camera input on the node. After connecting a camera to the node, the
texture coordinates are created based on camera projection.
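To make the idea of "virtual geometry projecting UV space" concrete, the sketch below shows how two of these modes might derive (u, v) from a vertex position. Axis conventions and the 0.5 offsets are assumptions for illustration, not Fusion's exact mapping.

```python
import math

def spherical_uv(x, y, z):
    """Spherical mode: longitude around Y becomes U, latitude becomes V."""
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)
    v = 0.5 + math.asin(y / math.sqrt(x * x + y * y + z * z)) / math.pi
    return u, v

def planar_uv(x, y, z):
    """Planar mode: project straight down one axis; here X and Y become
    U and V, and Z is ignored."""
    return 0.5 + x, 0.5 + y
```

XYZ to UVW, by contrast, would simply copy the position (x, y, z) into (u, v, w), which is why it suits procedural textures.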
Orientation X/Y/Z
Defines the reference axis for aligning the Map mode.
Fit
Clicking this button fits the Map mode to the bounding box of the input scene.
Center
Clicking this button moves the center of the Map mode to the bounding box center of the input scene.
Size X/Y/Z
Defines the size of the projection object.
Center X/Y/Z
Defines the position of the projection object.
Rotation/Rotation Order
Use these buttons to select which order is used to apply the rotation along each axis of the object. For
example, XYZ would apply the rotation to the X-axis first, followed by the Y-axis, and then the Z-axis.
Rotation X/Y/Z
Sets the orientation of the projection object for each axis, independent from the rotation order.
Tile U/V/W
Defines how often a texture fits into the projected UV space on the applicable axis. Note that the UVW
coordinates are transformed, not the texture itself. This works best when used in conjunction with the
Create Texture node.
Flip U/V/W
Mirrors the texture coordinates around the applicable axis.
NOTE: To utilize the full capabilities of the UV Map 3D node, it helps to have a basic
understanding of how 2D images are mapped onto 3D geometry. When a 2D image is
applied to a 3D surface, it is converted into a texture map that uses UV coordinates to
determine how the image translates to the object. Each vertex on a mesh has a (U, V) texture
coordinate pair that describes the appearance the object takes when it is unwrapped and
flattened. Different mapping modes use different methods for working out how the vertices
transform into a flat 2D texture. When using the UV Map 3D node to modify the texture
coordinates on a mesh, it’s best to do so using the default coordinate system of the mesh or
primitive. So the typical workflow would look like Shape 3D > UV Map 3D > Transform 3D. The
Transformation tab on the Shape node would be left to its default values, and the Transform
3D node following the UV Map 3D does any adjustments needed to place the node in the
scene. Modifying/animating the transform of the Shape node causes the texture to slide
across the shape, which is generally undesirable. The UV Map 3D node modifies texture
coordinates per vertex and not per pixel. If the geometry the UV map is applied to is poorly
tessellated, then undesirable artifacts may appear.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Weld 3D [3WE]
Instead of round-tripping back to your 3D modeling application to fix the “duplicated” vertices, the
Weld 3D node allows you to do this in Fusion. Weld 3D welds together vertices with the same or
nearly the same positions, which can be used to fix cracking issues when vertices are displaced.
Inputs
The Weld 3D node has a single input for a 3D scene or 3D object you want to repair.
– Scene Input: The orange scene input is connected to the 3D scene or 3D object
you want to fix.
Inspector
Weld 3D controls
Controls Tab
The Controls tab for the Weld 3D node includes a simple Weld Mode menu. You can choose between
welding vertices or fracturing them.
Fracture
Fracturing is the opposite of welding, so all vertices are unwelded. This means that all polygon
adjacency information is lost. For example, an Image Plane 3D normally consists of connected quads
that share vertices. Fracturing the image plane causes it to become a bunch of unconnected quads.
Tolerance
In auto mode, the Tolerance value is automatically detected. This should work in most cases.
It can also be adjusted manually if needed.
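The basic operation, merging vertices that agree within a tolerance, can be sketched with a quantized-grid approach. This is a simplified stand-in for Weld 3D, not its actual algorithm; a robust implementation would also check neighboring grid cells so points straddling a cell boundary still merge.

```python
def weld(vertices, tolerance):
    """Merge vertices whose coordinates agree within `tolerance`.

    Returns the deduplicated vertex list and a remap table from old
    vertex indices to new ones (useful for rewriting face indices).
    """
    merged = []      # unique positions, in first-seen order
    index_of = {}    # quantized grid cell -> index into merged
    remap = []       # old vertex index -> new vertex index
    for v in vertices:
        # Snap each coordinate to a grid cell of size `tolerance`.
        key = tuple(round(c / tolerance) for c in v)
        if key not in index_of:
            index_of[key] = len(merged)
            merged.append(v)
        remap.append(index_of[key])
    return merged, remap
```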
Use Weld 3D when issues occur with the geometry. Don’t use it everywhere just because it’s
there, as it influences render time.
Weld 3D is intended to be used as a mesh robustness tool and not as a mesh editing tool to
merge vertices. If you can see the gap between the vertices you want to weld in the 3D view,
you are probably misusing Weld 3D. Unexpected things may happen when you do this; do so
at your own peril.
Limitations
Setting the tolerance too large can cause edges/faces to collapse to points.
If your model has detail distributed over several orders of scale, picking a tolerance value can
be hard or impossible.
For example, suppose you have a model of the International Space Station and there are lots
of big polygons and lots of really tiny polygons. If you set the tolerance too large, small
polygons that shouldn’t merge do; if you set the tolerance too small, some large polygons
won’t be merged.
Vertices that are far from the origin can fail to merge correctly. This is because bignumber +
epsilon can exactly equal bignumber in float math. This is one reason it may be best to merge
in local coordinates and not in world coordinates.
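The "bignumber + epsilon" effect is easy to demonstrate. Single-precision floats have about 7 significant decimal digits, so at a coordinate of 10^8 one ULP is 8.0 and any offset below half that vanishes. The sketch below uses Python's struct module to round values through 32-bit floats.

```python
import struct

def f32(x):
    """Round a Python float through single precision (as 3D pipelines
    commonly store vertex positions)."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Near the origin, a small offset survives the addition...
near_origin = f32(0.001) + f32(0.0005)

# ...but far from the origin the same kind of offset is absorbed entirely,
# because 0.5 is below one ULP (8.0) at 1e8 in float32.
far_from_origin = f32(f32(1.0e8) + f32(0.5))
```

This is why a tolerance comparison that works near the origin can silently fail for distant vertices, and why merging in local coordinates is preferable.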
Sometimes Weld 3D-ing a mesh can make things worse. Take Fusion’s cone, for instance. The
top vertex of the cone is currently duplicated for each adjoining face, and they all have
different normals. If you weld the cone, the top vertices merge and only have one normal,
making the lighting look weird.
Weld 3D is not multithreaded.
Warning
Do not misuse Weld 3D to simplify (reduce the polygon count of) meshes. It is designed to
efficiently weld vertices that differ by only very small values, like a 0.001 distance.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Target Object
This control should be connected to the 3D node that produces the original coordinates to be
transformed. To connect a node, drag and drop a node from the node tree into the Text Edit control, or
right-click the control and select the node from the contextual menu. It is also possible to type the
node’s name directly into the control.
Sub ID
The Sub ID slider can be used to target an individual sub-element of certain types of geometry, such
as an individual character produced by a Text 3D node or a specific copy created by a
Duplicate 3D node.
Scene Input
This control should be connected to the 3D node that outputs the scene containing the object at the
new location. To connect a node, drag and drop a node from the node tree into the Text Edit control,
or right-click the control and select an object from the Connect To submenu.
These controls are often displayed in the lower half of the Controls tab. They appear in nodes that
create or contain 3D geometry.
Visibility
– Visible: If this option is enabled, the object is visible in the viewers and in final renders. When
disabled, the object is not visible in the viewers nor is it rendered into the output image by the
Renderer 3D node. Also, a non-visible object does not cast shadows.
– Unseen by Cameras: When the Unseen by Cameras checkbox is enabled, the object is visible
in the viewers (unless the Visible checkbox is disabled), except when viewed through a camera.
Also, the object is not rendered into the output image by the Renderer 3D node. However,
shadows cast by an unseen object are still visible when rendered by the software renderer in the
Renderer 3D node, though not by the OpenGL renderer.
– Cull Front Face/Back Face: Use these options to eliminate rendering and display of certain
polygons in the geometry. If Cull Back Face is selected, polygons facing away from the camera are
not rendered and do not cast shadows. If Cull Front Face is selected, polygons facing toward the
camera are not rendered and do not cast shadows. Enabling both options has the same effect as
disabling the Visible checkbox.
– Suppress Aux Channels for Transparent Pixels: In previous versions of Fusion, transparent pixels
were excluded by the software and OpenGL render options in the Renderer 3D node. To be
more specific, the software renderer excluded pixels with R,G,B,A set to 0, and the GL renderer
excluded pixels with A set to 0. This is now optional. The reason you might want to do this is to
get aux channels (e.g., Normals, Z, UVs) for the transparent areas, for example, when replacing
the texture on a 3D element that is transparent in certain areas.
Lighting
– Affected by Lights: Disabling this checkbox causes lights in the scene to not affect the object.
The object does not receive nor cast shadows, and it is shown at the full brightness of its color,
texture, or material.
– Shadow Caster: Disabling this checkbox causes the object not to cast shadows on other objects
in the scene.
– Shadow Receiver: Disabling this checkbox causes the object not to receive shadows cast by
other objects in the scene.
Matte
Enabling the Is Matte option applies a special texture, causing the object to not only become invisible
to the camera, but also making everything that appears directly behind it invisible as well.
This option overrides all textures. For more information on Fog 3D and Soft Clipping, see Chapter 86,
“3D Compositing Basics” in the DaVinci Resolve Reference Manual or Chapter 25 in the Fusion
Reference Manual.
– Is Matte: When activated, objects whose pixels fall behind the matte object’s pixels in Z do not get
rendered. Two additional options are displayed when the Is Matte checkbox is activated.
– Opaque Alpha: When the Is Matte checkbox is enabled, the Opaque Alpha checkbox sets the
Alpha value of the matte object to 1.
– Infinite Z: This option sets the value in the Z-channel to infinite. This checkbox is visible only when
the Is Matte option is enabled.
Blend Mode
A Blend mode specifies which method is used by the renderer when combining this object with the
rest of the scene. The blend modes are essentially identical to those listed in the section for the 2D
Merge node. For a detailed explanation of each mode, see the section for that node.
The blending modes were originally designed for use with 2D images. Using them in a lit 3D
environment can produce undesirable results. For best results, use the Apply modes in unlit 3D
scenes rendered using the Software option in the Renderer 3D node.
– OpenGL Blend Mode: Use this menu to select the blending mode that is used when the geometry
is processed by the OpenGL renderer in the Renderer 3D node. This is also the mode used when
viewing the object in the viewers. Currently the OpenGL renderer supports a limited number of
blending modes.
– Software Blend Mode: Use this menu to select the blending mode that is used when the
geometry is processed by the software renderer. Currently, the software renderer supports all the
modes described in the Merge node documentation, except for the Dissolve mode.
Normal/Tangents
Normals are imaginary lines perpendicular to each point on the surface of an object. They are used to
illustrate the exact direction and orientation of every polygon on 3D geometry. Knowing the direction
and orientation determines how the object gets shaded. Tangents are lines that exist along the
surface’s plane. These lines are tangent to a point on the surface. The tangent lines are used to
describe the direction of textures you apply to the surface of 3D geometry.
– Scale: This slider increases or decreases the length of the vectors for both normals and tangents.
Object ID
Use this slider to select which ID is used to create a mask from an object in the image. Use the
Sample button in the same way as the Color Picker to grab IDs from the image displayed in the viewer.
The image or sequence must have been rendered from a 3D software package with those
channels included.
The controls in the Materials tab are used to determine the appearance of the 3D object when lit. Most
of these controls directly affect how the object interacts with light using a basic shader. For more
advanced control over the object’s appearance, you can use tools from the 3D Materials category of
the Effects Library. These tools can be used to assemble a more finely detailed and precise shader.
When a shader is constructed using the 3D Material tools and connected to the 3D Object’s material
input, the controls in this tab are replaced by a label that indicates that an external material is
currently in use.
Diffuse Color
The Diffuse Color determines the basic color of an object when the surface of that object is either lit
indirectly or lit by an ambient light. If a valid image is provided to the tool’s diffuse texture input, then
the RGB values provided here are also multiplied by the color values of the pixels in the diffuse
texture. The Alpha channel of the diffuse material can be used to control the transparency of
the surface.
Alpha
This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally, and
affects the Alpha value of the material in the rendered output. If the tool’s diffuse texture input is used,
then the Alpha value provided here is multiplied by the Alpha channel of the pixels in the image.
Opacity
Reducing the material’s Opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent and allowing hidden objects to be seen through the
material.
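The three controls above combine multiplicatively. The following is a minimal sketch of how Diffuse Color, Alpha, and Opacity interact with a connected diffuse texture; it is an illustrative model, not Fusion’s actual shader code.

```python
# Illustrative model of the basic shader's diffuse stage (an assumption,
# not Fusion's actual implementation).

def basic_diffuse(mat_rgb, mat_alpha, opacity, tex_rgba=None):
    r, g, b = mat_rgb
    a = mat_alpha
    if tex_rgba is not None:
        # A connected diffuse texture is multiplied by the RGB values here,
        # and its Alpha channel is multiplied by the Alpha slider.
        tr, tg, tb, ta = tex_rgba
        r, g, b, a = r * tr, g * tg, b * tb, a * ta
    # Opacity decreases color and Alpha equally, making the material
    # transparent so hidden objects show through.
    return (r * opacity, g * opacity, b * opacity, a * opacity)

print(basic_diffuse((1.0, 0.5, 0.0), 1.0, 0.5))
# (0.5, 0.25, 0.0, 0.5)
```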
Specular
The Specular section provides controls for determining the characteristics of light that reflects toward
the viewer. These controls affect the appearance of the specular highlight that appears on the surface
of the object.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular a
material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that tend to inherit their color
from the material color. The basic shader material does not provide an input for textures to control the
specularity of the object. Use tools from the 3D Material category when more precise control is
required over the specular appearance.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture input
has a valid connection, then this value is multiplied by the Alpha value of the input.
Specular Exponent
Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper the
falloff, and the smoother and glossier the material appears. The basic shader material does not
provide an input for textures to control the specular exponent of the object. Use tools from the 3D
Material category when more precise control is required over the specular exponent.
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere casts
a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate Opacity option. Opacity determines how transparent the actual surface is when it is
rendered. Fusion allows adjusting both opacity and transmittance separately. At first, this might be a bit
counterintuitive to artists who are unfamiliar with 3D software. It is possible to have a surface
that is fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Alpha Detail
When the Alpha Detail slider is set to 0, the Alpha channel of the object is ignored and the entire
object casts a shadow. If it is set to 1, the Alpha channel determines what portions of the object cast
a shadow.
Color Detail
The Color Detail slider modulates light passing through the surface by the diffuse color + texture
colors. Use this to throw a shadow that contains color details of the texture applied to the object.
Increasing the slider from 0 to 1 brings more of the diffuse color + texture color into the shadow. Note
that the Alpha and opacity of the object are ignored when transmitting color, allowing an object with a
solid Alpha to still transmit its color to the shadow.
Saturation
The Saturation slider controls the saturation of the color component transmitted to the shadow. Setting
this to 0.0 results in monochrome shadows.
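Taken together, Alpha Detail, Color Detail, and Saturation shape the transmitted shadow. The sketch below is an assumed model of that behavior for one sample point; the exact curves Fusion uses are not documented here.

```python
# Assumed model of how Alpha Detail, Color Detail, and Saturation shape
# a transmitted shadow (illustrative, not Fusion's exact math).

def lerp(a, b, t):
    return a + (b - a) * t

def shadow_sample(surface_rgba, alpha_detail, color_detail, saturation):
    r, g, b, a = surface_rgba
    # Alpha Detail: 0 -> the whole object casts a shadow;
    # 1 -> the Alpha channel decides which portions do.
    occlusion = lerp(1.0, a, alpha_detail)
    # Color Detail: blend from a neutral shadow toward the diffuse color +
    # texture color (Alpha/opacity are ignored when transmitting color).
    sr = lerp(0.0, r, color_detail)
    sg = lerp(0.0, g, color_detail)
    sb = lerp(0.0, b, color_detail)
    # Saturation: 0.0 results in monochrome shadows.
    luma = (sr + sg + sb) / 3.0
    sr = lerp(luma, sr, saturation)
    sg = lerp(luma, sg, saturation)
    sb = lerp(luma, sb, saturation)
    return (sr, sg, sb, occlusion)
```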
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene. If
turned off, the object is always fully lit and/or unshadowed.
Two-Sided Lighting
This makes the surface effectively two-sided by adding a second set of normals facing the opposite
direction on the back side of the surface. This is normally off, to increase rendering speed, but can be
turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior
surfaces to be visible as well.
Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so
that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the
plane would become invisible. Making a plane two sided in a 3D application is equivalent to adding
another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite
direction on the backside. Thus, when you revolve around the back, you see the second image plane
that has its normals facing the opposite way.
Fusion does exactly the same thing as 3D applications when you make a surface two sided. The
confusion about what two-sided lighting does arises because Fusion does not cull backfacing
polygons by default. If you revolve around a one-sided plane in Fusion, you still see it from the
backside (but you are seeing the frontside bits duplicated through to the backside as if it were
transparent). Making the plane two sided effectively adds a second set of normals to the backside of
the plane.
Note that this can become rather confusing once you make the surface transparent, as the same rules
still apply and produce a result that is counterintuitive. If you view from the frontside a transparent
two-sided surface illuminated from the backside, it looks unlit.
Material ID
This control is used to set the numeric identifier assigned to this material. The Material ID is an integer
number that is rendered into the MatID auxiliary channel of the rendered image when the Material ID
option is enabled in the Renderer 3D tool. For more information, see Chapter 86, “3D Compositing
Basics” in the DaVinci Resolve Reference Manual or Chapter 25 in the Fusion Reference Manual.
Many tools in the 3D category include a Transform tab used to position, rotate, and scale the object
in 3D space.
Translation
X, Y, Z Offset
These controls can be used to position the 3D element.
Rotation
Rotation Order
Use these buttons to select which order is used to apply rotation along each axis of the object. For
example, XYZ would apply the rotation to the X axis first, followed by the Y axis and then finally
the Z axis.
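Rotation order matters because axis rotations do not commute. The composition can be sketched with plain 3×3 matrices; this is illustrative, and Fusion’s internal conventions may differ.

```python
# Sketch of rotation-order composition using 3x3 matrices and column
# vectors (illustrative; an assumption about Fusion's conventions).
import math

def rot_x(d):
    c, s = math.cos(math.radians(d)), math.sin(math.radians(d))
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(d):
    c, s = math.cos(math.radians(d)), math.sin(math.radians(d))
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(d):
    c, s = math.cos(math.radians(d)), math.sin(math.radians(d))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(order, x, y, z):
    # "XYZ" applies X first, then Y, then Z. With column vectors, the
    # first-applied rotation ends up rightmost in the matrix product,
    # so XYZ composes as Rz @ Ry @ Rx.
    axis = {"X": rot_x(x), "Y": rot_y(y), "Z": rot_z(z)}
    m = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for ch in order:
        m = matmul(axis[ch], m)  # left-multiply each later rotation
    return m
```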
X, Y, Z Rotation
Use these controls to rotate the object around its pivot point. If the Use Target checkbox is selected,
then the rotation is relative to the position of the target; otherwise, the global axis is used.
Pivot
X, Y, Z Pivot
A Pivot point is the point around which an object rotates. Normally, an object rotates around its own
center, which is considered to be a pivot of 0,0,0. These controls can be used to offset the pivot from
the center.
Scale
X, Y, Z Scale
If the Lock X/Y/Z checkbox is checked, a single Scale slider is shown. This adjusts the overall size of
the object. If the Lock checkbox is unchecked, individual X, Y, and Z sliders are displayed to allow
individual scaling in each dimension. Note: If the Lock checkbox is checked, scaling of individual
dimensions is not possible, even when dragging specific axes of the Transformation Widget in
scale mode.
Import Transform
Opens a file browser where you can select a scene file saved or exported by your 3D application. It
supports the following file types:
dotXSI .xsi
The Import Transform button imports only transformation data. For 3D geometry, lights, and cameras,
consider using the File > FBX Import option.
Most of the controls in the Transform tab are represented in the viewer with onscreen controls for
transformation, rotation, and scaling. To change the mode of the onscreen controls, select one of the
three buttons in the toolbar in the upper left of the viewer. The modes can also be toggled using the
keyboard shortcut Q for translation, W for rotation, and E for scaling. In all three modes, individual axes
of the control may be dragged to affect just that axis, or the center of the control may be dragged to
affect all three axes.
The scale sliders for most 3D tools default to locked, which causes uniform scaling of all three axes.
Uncheck the Lock X/Y/Z checkbox to scale an object on a single axis only.
Settings Tab
Comment Tab
The Comment tab contains a single text control that is used to add comments and notes to the tool.
When a note is added to a tool, a small red dot icon appears next to the Settings tab icon and a text
bubble appears on the node. To see the note in the Node Editor, hold the mouse pointer over the
node for a moment. The contents of the Comments tab can be animated over time, if required.
Scripting Tab
The Scripting tab is present on every tool in Fusion. It contains several edit boxes used to add scripts
that process when the tool is rendering. For more details on the contents of this tab, please consult the
scripting documentation.
3D Light Nodes
This chapter details the 3D Light nodes available when creating 3D composites in
Fusion. The abbreviations next to each node name can be used in the Select Tool
dialog when searching for tools and in scripting references.
Contents
Ambient Light [3AL] 749
Directional Light [3DL] 750
Point Light [3PL] 752
Spot Light [3SL] 754
The Common Controls 758
Inputs
The Ambient Light node includes a single optional orange input for a 3D scene or 3D geometry.
– SceneInput: The orange input is an optional input that accepts a 3D scene. If a scene is
provided, the Transform controls in this node apply to the entire scene provided.
Controls Tab
The Controls tab is used to set the color and brightness of the ambient light.
Enabled
When the Enabled checkbox is turned on, the ambient light affects the scene. When the checkbox is
turned off, the light is turned off. This checkbox performs the same function as the red switch to the
left of the node’s name in the Inspector.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the Intensity of the ambient light. A value of 0.2 indicates 20% light. A
perfectly white texture lit only with a 0.2 ambient light would render at 20% gray (.2, .2, .2).
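The 20% gray figure above follows from a one-line calculation: in a simple shading model, the ambient contribution is just the texture color multiplied by the light color and Intensity. This sketch is illustrative, not Fusion’s renderer code.

```python
# Worked example of the 20% gray figure above (illustrative model).

def ambient_shade(texture_rgb, light_rgb, intensity):
    # Ambient contribution: texture color * light color * intensity.
    return tuple(t * l * intensity for t, l in zip(texture_rgb, light_rgb))

# A perfectly white texture lit only by a 0.2 ambient light:
print(ambient_shade((1.0, 1.0, 1.0), (1.0, 1.0, 1.0), 0.2))
# (0.2, 0.2, 0.2)
```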
Common Controls
Transform and Settings Tabs
The options presented in the Transform and Settings tabs are commonly found in other lighting nodes.
For more detail on the controls found in these tabs, see “The Common Controls” section at the end of
this chapter.
Inputs
The Directional Light node includes a single optional orange input for a 3D scene or 3D geometry.
– SceneInput: The orange input is an optional input that accepts a 3D scene. If a scene is
provided, the Transform controls in this node apply to the entire scene provided.
Inspector
Enabled
When the Enabled checkbox is turned on, the directional light affects the scene. When the checkbox
is turned off, the light is turned off. This checkbox performs the same function as the red switch to the
left of the node’s name in the Inspector.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the Intensity of the directional light. A value of 0.2 indicates 20% light.
Common Controls
Transform and Settings Tabs
The options presented in the Transform and Settings tabs are commonly found in other lighting nodes.
For more detail on the controls found in these tabs, see “The Common Controls” section at the end of
this chapter.
Inputs
The Point Light node includes a single optional orange input for a 3D scene or 3D geometry.
– SceneInput: The orange input is an optional input that accepts a 3D scene. If a scene is
provided, the Transform controls in this node apply to the entire scene provided.
Inspector
Controls Tab
The Controls tab is used to set the color and brightness of the point light. The position and distance of
the light source are controlled in the Transform tab.
Enabled
When the Enabled checkbox is turned on, the point light affects the scene. When the checkbox is
turned off, the light is turned off. This checkbox performs the same function as the red switch to the left
of the node’s name in the Inspector.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the Intensity of the point light. A value of 0.2 indicates 20% light.
Decay Type
A point light defaults to No Decay, meaning that its light has equal intensity at all points in the scene.
To cause the intensity to fall off with distance, set the Decay Type to either Linear or Quadratic modes.
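The three decay types can be sketched as follows. The 1/d and 1/d² curves are the standard meanings of Linear and Quadratic decay; any exact constants in Fusion’s falloff are an assumption here.

```python
# Sketch of point light decay types (the 1/d and 1/d^2 shapes are the
# conventional meanings; exact constants are an assumption).

def point_light_intensity(intensity, distance, decay="No Decay"):
    if decay == "No Decay":
        return intensity                    # equal intensity everywhere
    if decay == "Linear":
        return intensity / distance         # falls off with distance
    if decay == "Quadratic":
        return intensity / (distance ** 2)  # physically based falloff
    raise ValueError(decay)
```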
Inputs
The Spot Light node includes a single optional orange input for a 3D scene or 3D geometry.
– SceneInput: The orange input is an optional input that accepts a 3D scene. If a scene is
provided, the Transform controls in this node apply to the entire scene provided.
Controls Tab
The Controls tab is used to set the color and brightness of the spotlight. The position, rotation, and
distance of the light source are controlled in the Transform tab.
Enabled
When the Enabled checkbox is turned on, the spotlight affects the scene. When the checkbox is
turned off, the light is turned off. This checkbox performs the same function as the red switch to the left
of the node’s name in the Inspector.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the Intensity of the spotlight. A value of 0.2 indicates 20% light.
Decay Type
A spotlight defaults to No Falloff, meaning that its light has equal intensity on geometry regardless of
the distance from the light to the geometry. To cause the intensity to fall off with distance, set the
Decay Type to either the Linear or Quadratic mode.
Cone Angle
The Cone Angle of the light refers to the width of the cone where the light emits its full intensity.
The larger the value, the wider the cone, up to a limit of 90 degrees.
Penumbra Angle
The Penumbra Angle determines the area beyond the cone angle where the light’s intensity falls off
toward 0. A larger penumbra angle defines a larger falloff area, while a value of 0 generates a
hard-edged light.
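The relationship between the two angles can be sketched as a falloff function of the angle between a sample point and the light’s axis. The linear ramp across the penumbra is an assumption; Fusion’s actual falloff curve may differ.

```python
# Sketch of spotlight cone/penumbra falloff for a point at `angle`
# degrees off the light's axis (linear ramp is an assumption).

def spot_falloff(angle, cone_angle, penumbra_angle):
    if angle <= cone_angle:
        return 1.0              # full intensity inside the cone
    outer = cone_angle + penumbra_angle
    if angle >= outer:
        return 0.0              # fully outside the light
    # A Penumbra Angle of 0 never reaches this branch: hard edge.
    return (outer - angle) / penumbra_angle
```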
Shadows
This section provides several controls used to define the shadow map used when this spotlight
creates shadows. For more information, see Chapter 25, “3D Compositing Basics” in the Fusion
Reference Manual or Chapter 86 in the DaVinci Resolve Reference Manual.
Enable Shadows
The Enable Shadows checkbox should be selected if the light is to produce shadows. This defaults
to selected.
Shadow Color
Use this standard Color control to set the color of the shadow. This defaults to black (0, 0, 0).
Density
The shadow density determines the transparency of the shadow. A density of 1.0 produces a
completely opaque shadow, whereas lower values make the shadow more transparent.
Multiplicative/Additive Bias
Shadows are essentially textures applied to objects in the scene, so there is occasionally Z-fighting,
where the portions of the object that should be receiving the shadows render over the top of the
shadow. Biasing works by adding a small depth offset to move the shadow away from the surface it is
shadowing, eliminating the Z-fighting. Too little bias and objects can self-shadow. Too
much bias and the shadow can become separated from the surface. Adjust the Multiplicative Bias first,
and then fine tune the result using the Additive Bias control.
For more information, see the Multiplicative and Additive Bias section of Chapter 86, “3D Compositing”
in the DaVinci Resolve Reference Manual, or Chapter 25 in the Fusion Reference Manual.
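The reason two bias controls exist can be seen in the depth comparison itself: a shadow-map test compares the receiver’s depth against the depth stored from the light’s point of view, and without an offset, equal depths Z-fight so the surface shadows itself. The multiplicative + additive scheme below is the common approach and an assumption about, not a copy of, Fusion’s renderer.

```python
# Sketch of a biased shadow-map depth test (common scheme, assumed here).

def in_shadow(receiver_depth, map_depth, mult_bias=1.0, add_bias=0.0):
    # Push the stored depth away from the surface before comparing, so
    # tiny precision differences no longer cause self-shadowing.
    return receiver_depth > map_depth * mult_bias + add_bias
```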
Softness
Soft edges in shadows are produced by filtering the shadow map when it is sampled. Fusion provides
two separate filtering methods for rendering shadows, which produce different effects.
– Constant: Shadows edges have a constant softness. A filter with a constant width is used when
sampling the shadow map. Adjusting the Constant Softness slider controls the size of the filter.
Note that the larger you make the filter, the longer it takes to render the shadows.
– Variable: The shadow edge softness grows the further the shadow receiver is positioned from the
shadow caster. The variable softness is achieved by changing the size of the filter based on the
distance between the receiver and caster. When this option is selected, the Softness Falloff, Min
Softness, and Max Softness sliders appear.
Constant Softness
If the Softness is set to Constant, then this slider appears. It can be used to set the overall softness of
the shadow.
Softness Falloff
The Softness Falloff slider appears when the Softness is set to variable. This slider controls how fast
the softness of shadow edges grows with distance. More precisely, it controls how fast the shadow
map filter size grows based upon the distance between the shadow caster and receiver. Its effect is
mediated by the values of the Min and Max Softness sliders.
Min Softness
The Min Softness slider appears when the Softness is set to Variable. This slider controls the Minimum
Softness of the shadow. The closer the shadow is to the object casting the shadow, the sharper it is,
up to the limit set by this slider.
Max Softness
The Max Softness slider appears when the Softness is set to Variable. This slider controls the
Maximum Softness of the shadow. The further the shadow is from the object casting the shadow, the
softer it is, up to the limit set by this slider.
Common Controls
Transform and Settings Tabs
The options presented in the Transform and Settings tabs are commonly found in other lighting nodes.
For more detailed information on the controls found in these tabs, see “The Common Controls” section
at the end of this chapter.
Many tools in the 3D category include a Transform tab used to position, rotate, and scale the object
in 3D space.
Translation
X, Y, Z Offset
These controls can be used to position the 3D element.
Rotation
Rotation Order
Use these buttons to select which order is used to apply Rotation along each axis of the object. For
example, XYZ would apply the rotation to the X axis first, followed by the Y axis, and finally the Z axis.
X, Y, Z Rotation
Use these controls to rotate the object around its pivot point. If the Use Target checkbox is selected,
then the rotation is relative to the position of the target; otherwise, the global axis is used.
Pivot
X, Y, Z Pivot
A pivot point is the point around which an object rotates. Normally, an object rotates around its own
center, which is considered to be a pivot of 0,0,0. These controls can be used to offset the pivot from
the center.
Use Target
Selecting the Use Target checkbox enables a set of controls for positioning an XYZ target. When
Target is enabled, the object always rotates to face the target. The rotation of the object becomes
relative to the target.
Import Transform
Opens a file browser where you can select a scene file saved or exported by your 3D application. It
supports the following file types:
dotXSI .xsi
The Import Transform button imports only transformation data. For 3D geometry, lights, and cameras,
consider using the File > FBX Import option.
The Common Settings tab can be found on almost every tool found in Fusion. The following controls
are specific settings for 3D nodes.
Comment Tab
The Comment tab contains a single text control that is used to add comments and notes to the tool.
When a note is added to a tool, a small red dot icon appears next to the Settings tab icon, and a text
bubble appears on the node. To see the note in the Node Editor, hold the mouse pointer over the
node for a moment. The contents of the Comments tab can be animated over time, if required.
Scripting Tab
The Scripting tab is present on every tool in Fusion. It contains several edit boxes used to add scripts
that process when the tool is rendering. For more details on the contents of this tab, please consult the
scripting documentation.
3D Material Nodes
This chapter details the 3D Material nodes available when creating 3D composites in
Fusion. The abbreviations next to each node name can be used in the Select Tool
dialog when searching for tools and in scripting references.
Contents
Blinn [3BI] 762
Channel Boolean [3BOL] 766
Cook Torrance [3CT] 769
Material Merge 3D [3MM] 773
Phong [3PH] 774
Reflect [3RR] 778
Stereo Mix [3SMM] 781
Ward [3WD] 783
The Common Controls 787
Inputs
There are five inputs on the Blinn node that accept 2D images or 3D materials. These inputs control
the overall color and image used for the 3D object as well as the color and texture used in the
specular highlight. Each of these inputs multiplies the pixels in the texture map by the equivalently
named parameters in the node itself. This provides an effective method for scaling parts of
the material.
– Diffuse Texture: The orange Diffuse Texture input accepts a 2D image or a 3D material to be
used as a main object texture map.
– Specular Color Material: The green Specular Color material input accepts a 2D image or a 3D
material to be used as the color texture map for specular highlight areas.
– Specular Intensity Material: The magenta Specular Intensity material input accepts a 2D
image or a 3D material to be used to alter the intensity of specular highlights. When the input
is a 2D image, the Alpha channel is used to create the map, while the color channels are
discarded.
– Specular Exponent Material: The teal Specular Exponent material input accepts a 2D image
or a 3D material that is used as a falloff map for the material’s specular highlights. When the
input is a 2D image, the Alpha channel is used to create the map, while the color channels are
discarded.
– Bump Map Material: The white Bump Map material input accepts only a 3D material. Typically,
you connect the texture into a Bump Map node, and then connect the Bump Map node to this
input. This input uses the RGB information as texture-space normals.
Inspector
Blinn controls
Controls Tab
The Controls tab is the primary tab for the Blinn node. It controls the color and shininess applied to the
surface of the 3D geometry.
Diffuse Color
A material’s Diffuse Color describes the base color presented by the material when it is lit indirectly or
by ambient light. If a diffuse texture map is provided, then the color value provided here is multiplied
by the color values in the texture.
Alpha
This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally and
affects the Alpha value of the material in the rendered output. If a diffuse texture map is provided, then
the Alpha value set here is multiplied by the Alpha values in the texture map.
Opacity
Reducing the material’s opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent.
Specular
The parameters in the Specular section describe the look of the specular highlight of the surface.
These values are evaluated in a different way for each illumination model.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular a
material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that inherit their color from the
material color. If a specular texture map is provided, then the value provided here is multiplied by the
color values from the texture.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture is
provided, then this value is multiplied by the Alpha value of the texture.
Specular Exponent
Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper the
falloff, and the smoother and glossier the material appears. If the specular exponent texture is
provided, then this value is multiplied by the Alpha value of the texture map.
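These three specular controls feed the classic Blinn specular term: (N·H) raised to the exponent and scaled by the intensity and color, where H is the half-vector between the light and view directions. The sketch below is illustrative only; Fusion’s shader also folds in the texture inputs described above.

```python
# Sketch of the classic Blinn specular term (illustrative; not Fusion's
# actual shader, which also applies the texture inputs).
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blinn_specular(normal, to_light, to_view, color, intensity, exponent):
    n = normalize(normal)
    # H is the half-vector between the light and view directions.
    l, v = normalize(to_light), normalize(to_view)
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    n_dot_h = max(0.0, sum(a * b for a, b in zip(n, h)))
    # Higher exponents give a sharper falloff and a glossier look.
    s = intensity * (n_dot_h ** exponent)
    return tuple(c * s for c in color)
```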
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere casts
a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate Opacity option. Opacity determines how transparent the actual surface is when it
is rendered. Fusion allows adjusting both opacity and transmittance separately. At first, this might be a
bit counterintuitive to those who are unfamiliar with 3D software. It is possible to have a surface that is
fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Attenuation
Attenuation determines how much color is passed through the object. For an object to have
transmissive shadows, set the attenuation to (1, 1, 1), which means 100% of green, blue, and red light
passes through the object. Setting this color to RGB (1, 0, 0) means that the material transmits 100% of
the red light, but none of the green or blue light.
Alpha Detail
When the Alpha Detail slider is set to 0, the Alpha channel of the object is ignored and the entire
object casts a shadow. If it is set to 1, the Alpha channel determines what portions of the object
cast a shadow.
Color Detail
The Color Detail slider modulates light passing through the surface by the diffuse color + texture
colors. Use this to throw a shadow that contains color details of the texture applied to the object.
Increasing the slider from 0 to 1 brings in more diffuse color + texture color into the shadow. Note that
the Alpha and opacity of the object are ignored when transmitting color, allowing an object with a solid
Alpha to still transmit its color to the shadow.
Saturation
The Saturation slider controls the saturation of the color component transmitted to the shadow. Setting
this to 0.0 results in monochrome shadows.
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene. If
turned off, the object is always fully lit and/or unshadowed.
Two-Sided Lighting
This effectively makes the surface two sided by adding a second set of normals facing the opposite
direction on the backside of the surface. This is normally off to increase rendering speed, but it can be
turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior
surfaces to be visible as well.
Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so
that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the
plane would become invisible. Making a plane two sided in a 3D application is equivalent to adding
another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite
direction on the backside. Thus, when you revolve around the back, you see the second image plane,
which has its normals facing the opposite way.
Fusion does exactly the same thing as 3D applications when you make a surface two sided. The
confusion about what two-sided lighting does arises because Fusion does not cull back-facing
polygons by default. If you revolve around a one-sided plane in Fusion, you still see it from the
backside (but you are seeing the frontside duplicated through to the backside as if it were transparent).
Making the plane two sided effectively adds a second set of normals to the backside of the plane.
NOTE: This can become rather confusing once you make the surface transparent, as the
same rules still apply and produce a result that is counterintuitive. If you view from the
frontside a transparent two-sided surface illuminated from the backside, it looks unlit.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
There are two inputs on the Channel Boolean Node: one for the foreground material, and one for the
background material. Both inputs accept either a 2D image or a 3D material such as a Blinn, Cook Torrance,
or Phong node.
– BackgroundMaterial: The orange background material input accepts a 2D image or
a 3D material.
– ForegroundMaterial: The green foreground input also accepts a 2D image or a 3D material.
A Channel Boolean used to combine and operate on Cook Torrance and Blinn nodes
Controls Tab
The Controls tab includes a section for each RGBA channel. Within each channel are two input menus
called Operand A and Operand B. The function performed on these two inputs is selected in the
Operation menu.
Operand A/B
The Operand menus, one for each output RGBA channel, allow you to set the desired input information
for the corresponding channel.
– Red/Green/Blue/Alpha FG
Reads the color information of the foreground material.
– Red/Green/Blue/Alpha BG
Reads the color information of the background material.
– Black/White/Mid Gray
Sets the value of the channel to 0 (Black), 1 (White), or 0.5 (Mid Gray).
– Hue/Lightness/Saturation FG
Reads the color information of the foreground material, converts it into the HLS color space, and
puts the selected information into the corresponding channel.
– Hue/Lightness/Saturation BG
Reads the color information of the background material, converts it into the HLS color space, and
puts the selected information into the corresponding channel.
– Luminance FG
Reads the color information of the foreground material and calculates the luminance value for
the channel.
– Luminance BG
Reads the color information of the background material and calculates the luminance value for
the channel.
Operation
Determines how the two operands are combined for each output channel.
– A: Uses Operand A only for the output channel.
– B: Uses Operand B only for the output channel.
– 1-A: Subtracts the value of Operand A from 1.
– 1-B: Subtracts the value of Operand B from 1.
– A+B: Adds the value of Operand A and B.
– A-B: Subtracts the value of Operand B from A.
– A*B: Multiplies the value of both Operands.
– A/B: Divides the value of Operand A by Operand B.
– min(A,B): Compares the values of Operands A and B and returns the smaller one.
– max(A,B): Compares the values of Operands A and B and returns the bigger one.
– avg(A,B): Returns the average value of both Operands.
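The operations above amount to simple per-channel arithmetic on the two operand values. As an illustration only (not Fusion's actual implementation), a minimal sketch with a hypothetical `channel_boolean` helper, assuming operand values are normalized 0.0–1.0 floats:

```python
# Hypothetical per-channel combine, mirroring the Operation menu entries.
# Names and the divide-by-zero guard are illustrative assumptions.

def channel_boolean(a, b, operation):
    """Combine operand values a and b for one output channel."""
    ops = {
        "A": lambda: a,
        "B": lambda: b,
        "1-A": lambda: 1.0 - a,
        "1-B": lambda: 1.0 - b,
        "A+B": lambda: a + b,
        "A-B": lambda: a - b,
        "A*B": lambda: a * b,
        "A/B": lambda: a / b if b != 0.0 else 0.0,  # guard divide-by-zero
        "min(A,B)": lambda: min(a, b),
        "max(A,B)": lambda: max(a, b),
        "avg(A,B)": lambda: (a + b) / 2.0,
    }
    return ops[operation]()
```

For example, `channel_boolean(0.25, 0.5, "avg(A,B)")` returns 0.375.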
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
There are six inputs on the Cook Torrance node that accept 2D images or 3D materials. These inputs
control the overall color and image used for the 3D object as well as controlling the color and texture
used in the specular highlight. Each of these inputs multiplies the pixels in the texture map by the
equivalently named parameters in the node itself. This provides an effective method for scaling parts
of the material.
– Diffuse Color Material: The orange Diffuse Color material input accepts a 2D image or a 3D
material to be used as overall color and texture of the object.
– Specular Color Material: The green Specular Color material input accepts a 2D image or a 3D
material to be used as the color and texture of the specular highlight.
– Specular Intensity Material: The magenta Specular Intensity material input accepts a 2D image
or a 3D material to alter the intensity of the specular highlight. When the input is a 2D image,
the Alpha channel is used to create the map, while the color channels are discarded.
– Specular Roughness Material: The white Specular Roughness material input accepts a 2D
image or a 3D material to be used as a map for modifying the roughness of the specular
highlight. The Alpha of the texture map is multiplied by the value of the roughness control.
– Specular Refractive Index Material: The white Specular Refractive Index material input accepts
a 2D image or a 3D material, using the RGB channels as the refraction texture.
– Bump Map Material: The white Bump Map material input accepts only a 3D material. Typically,
you connect the texture into a Bump Map node, and then connect the Bump Map node to this
input. This input uses the RGB information as texture-space normals.
Each of these inputs multiplies the pixels in the texture map by the equivalently named parameters in
the node itself. This provides an effective method for scaling parts of the material.
When nodes have as many inputs as this one does, it is often difficult to make connections with any
precision. Hold down the Option (macOS) or Alt (Windows) key while dragging the output from another
node over the node tile, and keep holding Option or Alt when releasing the left mouse button. A small
drop-down menu listing all the inputs provided by the node appears. Click on the desired input to
complete the connection.
A Cook Torrance shader with diffuse and specular color materials connected
Inspector
Controls Tab
The Controls tab contains parameters for adjusting the main color, highlight, and lighting properties of
the Cook Torrance shader node.
Diffuse
Diffuse describes the base surface characteristics without any additional effects like reflections or
specular highlights. Besides defining the base color of an object, the diffuse color also defines the
transparency of the object. The Alpha in a diffuse texture map can be used to make portions of the
surface transparent.
Alpha
This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally, and
affects the Alpha value of the material in the rendered output. If a diffuse texture map is provided, then
the Alpha value set here is multiplied by the Alpha values in the texture map.
Opacity
Reducing the material’s Opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent.
Specular
The parameters in the Specular section describe the look of the specular highlight of the surface.
These values are evaluated in a different way for each illumination model.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular a
material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that inherit their color from the
material color. If a specular texture map is provided, then the value provided here is multiplied by the
color values from the texture.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture is
provided, then this value is multiplied by the Alpha value of the texture.
Roughness
The Roughness of the specular highlight describes diffusion of the specular highlight over the surface.
The greater the value, the wider the falloff, and the more brushed and metallic the surface appears. If
the roughness texture map is provided, then this value is multiplied by the Alpha value from
the texture.
Do Fresnel
Selecting this checkbox adds Fresnel calculations to the material's illumination model. This provides
more realistic-looking metal surfaces by taking into account the refractiveness of the material.
Refractive Index
This slider appears when the Do Fresnel checkbox is selected. The Refractive Index applies only to
the calculations for the highlight; it does not perform actual refraction of light through transparent
surfaces. If the refractive index texture map is provided, then this value is multiplied by the Alpha value
of the input.
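To give a sense of how a refractive index shapes a Fresnel highlight, here is the widely used Schlick approximation. This is an illustrative model, not necessarily the calculation Fusion performs; reflectance rises toward 1.0 at grazing angles and equals the base reflectance `f0` when the surface faces the viewer head-on:

```python
def fresnel_schlick(cos_theta, ior):
    """Schlick approximation of Fresnel reflectance for a refractive
    index (ior) and the cosine of the view angle (cos_theta)."""
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2  # reflectance at normal incidence
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

With an index of 1.5 (roughly glass), a surface facing the camera reflects about 4% of the light, while a grazing surface reflects nearly all of it.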
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere casts
a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate Opacity option. Opacity determines how transparent the actual surface is when it
is rendered. Fusion allows adjusting both opacity and transmittance separately. At first, this might be a
bit counterintuitive to those who are unfamiliar with 3D software. It is possible to have a surface that is
fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Alpha Detail
When the Alpha Detail slider is set to 0, the Alpha channel of the object is ignored and the entire
object casts a shadow. If it is set to 1, the Alpha channel determines what portions of the object
cast a shadow.
Color Detail
The Color Detail slider modulates light passing through the surface by the diffuse color + texture
colors. Use this to throw a shadow that contains color details of the texture applied to the object.
Increasing the slider from 0 to 1 brings in more diffuse color + texture color into the shadow. Note that
the Alpha and opacity of the object are ignored when transmitting color, allowing an object with a solid
Alpha to still transmit its color to the shadow.
Saturation
The Saturation slider controls the saturation of the color component transmitted to the shadow. Setting
this to 0.0 results in monochrome shadows.
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene. If
turned off, the object is always fully lit and/or unshadowed.
Two-Sided Lighting
This effectively makes the surface two sided by adding a second set of normals facing the opposite
direction on the backside of the surface. This is normally off to increase rendering speed, but it can be
turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior
surfaces to be visible as well.
Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so
that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the
plane would become invisible. Making a plane two sided in a 3D application is equivalent to adding
another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite
direction on the backside. Thus, when you revolve around the back, you see the second image plane,
which has its normals facing the opposite way.
NOTE: This can become rather confusing once you make the surface transparent, as the
same rules still apply and produce a result that is counterintuitive. If you view from the
frontside a transparent two-sided surface illuminated from the backside, it looks unlit.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Material Merge node includes two inputs for the two materials you want to combine.
– Background Material: The orange Background material input accepts a 2D image or a 3D
material to be used as the background material.
– Foreground Material: The green Foreground material input accepts a 2D image or a 3D
material to be used as the foreground material. A 2D image is treated as a diffuse texture map
in the basic shading model.
A Material Merge node combining a Blinn-based shader (teal underlay) and a Ward-based shader (orange underlay)
Controls Tab
The Controls tab includes a single slider for blending the two materials together.
Blend
The Blend behavior of the Material Merge is similar to the Dissolve (DX) node for images. The two
materials/textures are mixed using the value of the slider to determine the percentage each input
contributes. While the background and foreground inputs can be a 2D image instead of a material, the
output of this node is always a material.
Unlike the 2D Dissolve node, both foreground and background inputs are required.
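The Blend behavior described above is a straightforward linear mix of the two inputs. As a sketch (illustrative names, per-channel values assumed to be floats):

```python
def material_merge(bg, fg, blend):
    """Linearly mix two per-channel values: blend=0 yields the background,
    blend=1 yields the foreground."""
    return bg * (1.0 - blend) + fg * blend
```

At a Blend of 0.25, a channel takes 75% of its value from the background material and 25% from the foreground.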
Material ID
This slider sets the numeric identifier assigned to the resulting material. This value is rendered into the
MatID auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Phong [3PH]
When nodes have as many inputs as this one does, it is often difficult to make connections with any
precision. Hold down the Option or Alt key while dragging the output from another node over the
node tile, and keep holding Option or Alt when releasing the left mouse button. A small drop-down
menu listing all the inputs provided by the node appears. Click on the desired input to
complete the connection.
Phong controls
Controls Tab
The Controls tab contains parameters for adjusting the main color, highlight, and lighting properties of
the Phong shader node.
Diffuse
Diffuse describes the base surface characteristics without any additional effects like reflections or
specular highlights. Besides defining the base color of an object, the diffuse color also defines the
transparency of the object.
The Alpha in a diffuse texture map can be used to make portions of the surface transparent.
Diffuse Color
A material’s Diffuse Color describes the base color presented by the material when it is lit indirectly or
by ambient light. If a diffuse texture map is provided, then the color value provided here is multiplied
by the color values in the texture.
Alpha
This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally and
affects the Alpha value of the material in the rendered output. If a diffuse texture map is provided, then
the Alpha value set here is multiplied by the Alpha values in the texture map.
Opacity
Reducing the material’s Opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent.
Specular
The parameters in the Specular section describe the look of the specular highlight of the surface.
These values are evaluated in a different way for each illumination model.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture is
provided, then this value is multiplied by the Alpha value of the texture.
Specular Exponent
Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper the
falloff, and the smoother and glossier the material appears. If the specular exponent texture is
provided, then this value is multiplied by the Alpha value of the texture map.
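The relationship between the exponent and the tightness of the highlight can be seen in the classic Phong falloff term, sketched below. This is the textbook formulation, offered as an approximation of what the control does rather than Fusion's exact shading code; `reflect_dot_view` stands for the cosine between the reflected light direction and the view direction:

```python
def phong_specular(reflect_dot_view, intensity, exponent):
    """Classic Phong specular falloff: larger exponents tighten the
    highlight, making the surface look smoother and glossier."""
    return intensity * max(0.0, reflect_dot_view) ** exponent
```

At the same viewing angle, raising the exponent from 10 to 100 shrinks the highlight dramatically, which matches the "sharper falloff" behavior described above.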
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere casts
a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate Opacity option. Opacity determines how transparent the actual surface is when it
is rendered. Fusion allows adjusting both opacity and transmittance separately. At first, this might be a
bit counterintuitive to those who are unfamiliar with 3D software. It is possible to have a surface that is
fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Attenuation
Attenuation determines how much color is passed through the object. For an object to have
transmissive shadows, set the attenuation to (1, 1, 1), which means 100% of green, blue, and red light
passes through the object. Setting this color to RGB (1, 0, 0) means that the material transmits 100% of
the red arriving at the surface but none of the green or blue light. This can be used to create “stained
glass”-styled shadows.
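The attenuation color acts as a per-channel multiplier on the light passing through the surface, which is why (1, 0, 0) yields a red "stained glass" shadow. A minimal sketch of that multiply:

```python
def transmitted_shadow(light_rgb, attenuation_rgb):
    """Tint transmitted light by the attenuation color: (1, 1, 1) passes
    all light through, (1, 0, 0) passes only the red component."""
    return tuple(l * a for l, a in zip(light_rgb, attenuation_rgb))
```

White light through an attenuation of (1, 0, 0) produces a pure red shadow contribution.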
Alpha Detail
When the Alpha Detail slider is set to 0, the Alpha channel of the object is ignored and the entire
object casts a shadow. If it is set to 1, the Alpha channel determines what portions of the object
cast a shadow.
Color Detail
The Color Detail slider modulates light passing through the surface by the diffuse color + texture
colors. Use this to throw a shadow that contains color details of the texture applied to the object.
Increasing the slider from 0 to 1 brings in more diffuse color + texture color into the shadow. Note that
the Alpha and opacity of the object are ignored when transmitting color, allowing an object with a solid
Alpha to still transmit its color to the shadow.
Saturation
The Saturation slider controls the saturation of the color component transmitted to the shadow. Setting
this to 0.0 results in monochrome shadows.
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene. If
turned off, the object is always fully lit and/or unshadowed.
NOTE: This can become rather confusing once you make the surface transparent, as the
same rules still apply and produce a result that is counterintuitive. If you view from the
frontside a transparent two-sided surface illuminated from the backside, it looks unlit.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Reflect [3RR]
Inputs
There are five inputs on the Reflect node that accept 2D images or 3D materials. These inputs control
the overall color and image used for the 3D object as well as controlling the color and texture used in
the reflective highlights.
– Background Material: The orange Background material input accepts a 2D image or a 3D
material. If a 2D image is provided, the node treats it as a diffuse texture map applied to a basic
material.
– Reflection Color Material: The white Reflection Color material input accepts a 2D image or a
3D material. The RGB channels are used as the reflection texture, and the Alpha is ignored.
– Reflection Intensity Material: The white Reflection Intensity material input accepts a 2D image
or a 3D material. The Alpha channel of the texture is multiplied by the intensity of the reflection.
– Refraction Tint Material: The white Refraction Tint material input accepts a 2D image or a 3D
material. The RGB channels are used as the refraction texture.
– Bump Map Texture: The white Bump Map texture input accepts only a 3D material. Typically,
you connect the texture into a Bump Map node, and then connect the Bump Map node to this
input. This input uses the RGB information as texture-space normals.
When a node has this many inputs, some sharing the same connector color, it can be difficult to make
connections with any precision. Hold down the Option (macOS) or Alt (Windows) key while dragging the output
from another node over the node tile, and keep holding Option or Alt when releasing the left mouse
button. A small drop-down menu listing all the inputs provided by the node appears. Click on the
desired input to complete the connection.
Reflect controls
Controls Tab
The Controls tab contains parameters for adjusting the reflective strength based on the orientation of
the object, as well as the tint color of the Reflect shader node.
Reflection
Reflection Strength Variability
This multi-button control can be set to Constant or By Angle for varying the reflection intensity,
corresponding to the relative surface orientation to the viewer. The following three controls are visible
only when this control is set to By Angle.
Glancing Strength
[By Angle] Glancing Strength controls the intensity of the reflection for those areas of the geometry
where the reflection faces away from the camera.
Face On Strength
[By Angle] Face On Strength controls the intensity of the reflection for those parts of the geometry that
reflect directly back to the camera.
Falloff
[By Angle] Falloff controls the sharpness of the transition between the Glancing and Face On Strength
regions. It can be considered similar to applying gamma correction to a gradient between the Face On
and Glancing values.
Constant Strength
[Constant] This control is visible only when the reflection strength variability is set to Constant. In
this case, the intensity of the reflection is constant despite the incidence angle of the reflection.
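Since the manual likens Falloff to gamma correction applied to the gradient between the Face On and Glancing values, the By Angle behavior can be sketched as below. This is a speculative reading of the control, not Fusion's documented formula; `n_dot_v` is the cosine between the surface normal and the view direction:

```python
def reflection_strength(n_dot_v, face_on, glancing, falloff):
    """Blend reflection intensity between Face On (surface toward camera,
    n_dot_v near 1) and Glancing (edge-on, n_dot_v near 0), with Falloff
    shaping the transition like a gamma curve."""
    t = (1.0 - abs(n_dot_v)) ** falloff  # 0 facing the camera, 1 at grazing
    return face_on + (glancing - face_on) * t
```

With a low Face On Strength and high Glancing Strength, reflections concentrate toward the silhouette of the object, which is the effect the By Angle mode is designed to produce.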
Refraction Index
This slider controls how strongly the environment map is deformed when viewed through a surface.
The overall deformation is based on the incidence angle. Since this is an approximation and not a
simulation, the results are not intended to model real refractions accurately.
Refraction Tint
The refraction texture is multiplied by the tint color for simulating color-filtered refractions. It can be
used to simulate the type of coloring found in tinted glass, as seen in many brands of beer bottles,
for example.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
This node has two inputs that are both required for this node to work. Both inputs accept either a 2D
image or a 3D material.
– LeftMaterial: The orange left material input accepts a 2D image or a 3D material to be used as
the material for the left eye rendering. If a 2D image is used, it is converted to a diffuse texture
map using the basic material type.
– RightMaterial: The green right material input accepts a 2D image or a 3D material to be used as
the material for the right eye rendering. If a 2D image is used, it is converted to a diffuse texture
map using the basic material type.
While the inputs can be either 2D images or 3D materials, the output is always a material.
A Stereo Mix node used to combine left and right images into a single stereo material
Inspector
Controls Tab
The Controls tab contains a single switch that swaps the left and right material inputs.
Swap
This option swaps both inputs of the node.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
There are six inputs on the Ward node that accept 2D images or 3D materials. These inputs control
the overall color and image used for the 3D object as well as controlling the color and texture used in
the specular highlight. Each of these inputs multiplies the pixels in the texture map by the equivalently
named parameters in the node itself. This provides an effective method for scaling parts of
the material.
– Diffuse Material: The orange Diffuse material input accepts a 2D image or a 3D material to be
used as a main color and texture of the object.
– Specular Color Material: The green Specular Color material input accepts a 2D image or a
3D material to be used as a highlight color and texture of the object.
– Specular Intensity Material: The magenta Specular Intensity material input accepts a 2D image
or a 3D material to be used as an intensity map for the material’s highlights. When the input is a
2D image, the Alpha channel is used to create the map, while the color channels are discarded.
– Spread U Material: The white Spread U material input accepts a 2D image or a 3D material.
The value of the Spread U option in the node’s controls is multiplied against the pixel values in
the material’s Alpha channel.
– Spread V Material: The white Spread V material input accepts a 2D image or a 3D material.
The value of the Spread V option in the node’s controls is multiplied against the pixel values in
the material’s Alpha channel.
– Bump Map Material: The white Bump Map material input accepts only a 3D material. Typically,
you connect the texture into a Bump Map node, and then connect the Bump Map node to this
input. This input uses the RGB information as texture-space normals.
When a node has this many inputs, some sharing the same connector color, it can be difficult to make
connections with any precision. Hold down the Option (macOS) or Alt (Windows) key while dragging the output
from another node over the node tile, and keep holding Option or Alt when releasing the left mouse
button. A small drop-down menu listing all the inputs provided by the node appears. Click on the
desired input to complete the connection.
A Ward node used with a diffuse connection and specular color connection
Inspector
Ward controls
Controls Tab
The Controls tab contains parameters for adjusting the main color, highlight, and lighting properties of
the Ward shader node.
Diffuse
Diffuse describes the base surface characteristics without any additional effects like reflections or
specular highlights. Besides defining the base color of an object, the diffuse color also defines the
transparency of the object. The Alpha in a diffuse texture map can be used to make portions of the
surface transparent.
Alpha
This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally and
affects the Alpha value of the material in the rendered output. If a diffuse texture map is provided, then
the Alpha value set here is multiplied by the Alpha values in the texture map.
Opacity
Reducing the material’s Opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent.
Specular
The parameters in the Specular section describe the look of the specular highlight of the surface.
These values are evaluated in a different way for each illumination model.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular a
material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that inherit their color from the
material color. If a specular texture map is provided, then the value provided here is multiplied by the
color values from the texture.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture is
provided, then this value is multiplied by the Alpha value of the texture.
Spread U
Spread U controls the falloff of the specular highlight along the U-axis in the UV map of the object. The
smaller the value, the sharper the falloff, and the smoother and glossier the material appears in this
direction. If the Spread U texture is provided, then this value is multiplied by the Alpha value of
the texture.
Spread V
Spread V controls the falloff of the specular highlight along the V-axis in the UV map of the object. The
smaller the value, the sharper the falloff, and the smoother and glossier the material appears in this
direction. If the Spread V texture is provided, then this value is multiplied by the Alpha value of
the texture.
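The directional behavior of Spread U and Spread V can be pictured as an anisotropic Gaussian-style highlight, sketched below. This is a simplified illustration of the falloff relationship (it omits the normalization terms of the full Ward model); `hu` and `hv` are hypothetical half-vector offsets along the surface's U and V tangent directions:

```python
import math

def ward_highlight(hu, hv, spread_u, spread_v):
    """Anisotropic highlight falloff: a smaller spread in one direction
    gives a sharper, glossier highlight along that direction."""
    return math.exp(-((hu / spread_u) ** 2 + (hv / spread_v) ** 2))
```

Halving Spread U while leaving Spread V unchanged tightens the highlight along U only, producing the brushed-metal look this shader is typically used for.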
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere casts
a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate Opacity option. Opacity determines how transparent the actual surface is when it
is rendered. Fusion allows adjusting both opacity and transmittance separately. At first, this might be a
bit counterintuitive to those who are unfamiliar with 3D software. It is possible to have a surface that is
fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Attenuation
Attenuation determines how much color is passed through the object. For an object to have
transmissive shadows, set the attenuation to (1, 1, 1), which means 100% of green, blue, and red light
passes through the object. Setting this color to RGB (1, 0, 0) means that the material transmits 100% of
the red arriving at the surface but none of the green or blue light. This can be used to create “stained
glass”-styled shadows.
Alpha Detail
When the Alpha Detail slider is set to 0, the Alpha channel of the object is ignored, and the entire
object casts a shadow. If it is set to 1, the Alpha channel determines what portions of the object
cast a shadow.
Color Detail
The Color Detail slider modulates light passing through the surface by the diffuse color + texture
colors. Use this to throw a shadow that contains color details of the texture applied to the object.
Increasing the slider from 0 to 1 brings in more diffuse color + texture color into the shadow. Note that
the Alpha and opacity of the object are ignored when transmitting color, allowing an object with a solid
Alpha to still transmit its color to the shadow.
Saturation
The Saturation slider controls the saturation of the color component transmitted to the shadow. Setting
this to 0.0 results in monochrome shadows.
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene. If
turned off, the object is always fully lit and/or unshadowed.
Two-Sided Lighting
This effectively makes the surface two sided by adding a second set of normals facing the opposite
direction on the backside of the surface. This is normally off to increase rendering speed, but it can be
turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior
surfaces to be visible as well.
Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so
that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the
plane would become invisible. Making a plane two sided in a 3D application is equivalent to adding
another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite
direction on the backside. Thus, when you revolve around the back, you see the second image plane,
which has its normals facing the opposite way.
Fusion does exactly the same thing as 3D applications when you make a surface two sided. The
confusion about what two-sided lighting does arises because Fusion does not cull back-facing
polygons by default. If you revolve around a one-sided plane in Fusion you still see it from the
backside (but you are seeing the frontside duplicated through to the backside as if it were transparent).
Making the plane two sided effectively adds a second set of normals to the backside of the plane.
NOTE: This can become rather confusing once you make the surface transparent, as the
same rules still apply and produce a result that is counterintuitive. If you view from the
frontside a transparent two-sided surface illuminated from the backside, it looks unlit.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail in the following “The Common Controls” section.
Settings Tab
The Common Settings tab can be found on most tools in Fusion. The following controls are specific
settings for 3D nodes.
Comment Tab
The Comment tab contains a single text control that is used to add comments and notes to the tool.
When a note is added to a tool, a small red dot icon appears next to the setting’s tab icon, and a text
bubble appears on the node. To see the note in the Node Editor, hold the mouse pointer over the
node for a moment. The contents of the Comments tab can be animated over time, if required.
Scripting Tab
The Scripting tab is present on every tool in Fusion. It contains several edit boxes used to add scripts
that process when the tool is rendering. For more details on the contents of this tab, please consult the
scripting documentation.
3D Texture Nodes
This chapter details the 3D Texture nodes available when creating 3D composites in
Fusion. The abbreviations next to each node name can be used in the Select Tool
dialog when searching for tools and in scripting references.
Contents
Bump Map [3Bu] 789
Catcher [3CA] 792
CubeMap [3CU] 794
Falloff [3FA] 797
Fast Noise Texture [3FN] 799
Gradient 3D [3GD] 802
Sphere Map [3SPM] 805
Texture 2D [3Tx] 807
Texture Transform [3TT] 809
The Common Controls 811
Inputs
The Bump Map node includes a single orange input for connecting a 2D image you want to use as the
bump map texture, or it can accept the output of the Create Bump Map node.
– ImageInput: The orange Image input is used to connect a 2D RGBA image for the bump
calculation or an existing bump map from the Create Bump map node.
A Bump Map is connected to the Bump Map material input on a material node.
Controls Tab
The Controls tab contains all parameters for modifying the input source and the appearance of
the bump map.
Filter Size
A custom filter generates the bump information. The drop-down menu sets the filter size.
Height Channel
Sets the channel from which to extract the grayscale information.
Clamp Z Normal
Clips the lower values of the blue channel in the resulting bump texture.
Height Scale
Changes the contrast of the resulting values in the bump map. Increasing this value yields a more
visible bump map.
Texture Depth
Optionally converts the resulting bump map texture into the desired bit depth.
Wrap Mode
Wraps the image at the borders, so the filter produces a correct result when using seamless tile textures.
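The relationship among these controls can be sketched in Python with numpy: a grayscale height map is converted to a bump (normal) texture via central differences, with Height Scale controlling contrast, Wrap Mode controlling border sampling, and Clamp Z Normal clipping the blue channel. This is an illustrative reconstruction, not Fusion's actual filter.

```python
import numpy as np

def height_to_bump(height, height_scale=1.0, wrap=True, clamp_z=0.0):
    """Sketch: central-difference gradients of a height map, packed as a
    tangent-space normal. Not Fusion's exact custom filter."""
    if wrap:
        # Wrap Mode: neighbors wrap at the borders for seamless tiles
        dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5
        dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5
    else:
        dx = np.zeros_like(height); dy = np.zeros_like(height)
        dx[:, 1:-1] = (height[:, 2:] - height[:, :-2]) * 0.5
        dy[1:-1, :] = (height[2:, :] - height[:-2, :]) * 0.5
    nx, ny = -dx * height_scale, -dy * height_scale  # Height Scale = contrast
    nz = np.ones_like(height)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    n[..., 2] = np.maximum(n[..., 2], clamp_z)  # Clamp Z Normal: clip low blue values
    return n
```

A flat height map yields normals pointing straight out; any slope tilts them, and larger Height Scale values tilt them more.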
A height map, the resulting bump map, and the resulting normals map
Catcher [3CA]
NOTE: The Catcher material requires a Projector 3D or Camera 3D node in the scene, set to
project an image in Texture mode on the object to which the Catcher is connected. Without a
projection, or if the projection is not set to Texture mode, the Catcher simply makes the
object transparent and invisible.
Inputs
The Catcher node has no inputs. The output of the node is connected to the diffuse color material
input of the Blinn, Cook Torrance, or other material node applied to the 3D geometry.
Inspector
Catcher controls
Controls Tab
The Options in the Controls tab determine how the Catcher handles the accumulation of multiple
projections.
Enable
Use this checkbox to enable or disable the node. This is not the same as the red switch in the
upper-left corner of the Inspector. The red switch disables the tool altogether and passes the image on
without any modification. The Enable checkbox is limited to the effect part of the tool. Other parts, like
scripts in the Settings tab, still process as normal.
Color Mode
The Color mode menu is used to control how the Catcher combines the light from multiple projectors.
It has no effect on the results when only one projector is in the scene. This control is designed to work
with the software renderer in the Renderer 3D node and has no effect when using the
OpenGL renderer.
Threshold
The Threshold can be used to exclude certain low values from the accumulation calculation.
For example, when using the Median Accumulation mode, a threshold of 0.01 would exclude any pixel
with a value of less than 0.01 from the median calculation.
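The example above can be expressed directly: values below the threshold are excluded before the median is taken. This is an illustrative sketch of the accumulation step, not the renderer's code.

```python
import numpy as np

def thresholded_median(samples, threshold=0.01):
    """Median of per-projector light samples at one pixel, ignoring any
    value below the threshold (illustrative sketch)."""
    samples = np.asarray(samples, dtype=float)
    kept = samples[samples >= threshold]
    return float(np.median(kept)) if kept.size else 0.0
```

With samples `[0.0, 0.005, 0.2, 0.4, 0.6]` and the default threshold of 0.01, the two near-black values are dropped and the median is taken over the remaining three.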
Restrict by Projector ID
When active, the Catcher only receives light from projectors with a matching ID. Projectors with a
different ID are ignored.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the
MatID auxiliary channel if the corresponding option is enabled in the Renderer 3D node.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
CubeMap [3CU]
Inputs
The Inputs on this node change based on the settings of the Layout menu in the Inspector. The single
input uses a 2D image for the entire cube, while six inputs can handle a different 2D image for each
side of a cube.
– CrossImage: The orange Cross Image input is visible by default or when the Layout menu in
the Inspector is set to either Vertical Cross or Horizontal Cross. The input accepts a 2D image.
– CubeMap.[DIRECTION]: These six multi-colored inputs are visible only when the Layout menu
in the Inspector is set to Separate Images. Each input accepts an image aligned to match the
left, right, top, bottom, front, and back faces.
A Cube Map node receives a cross image input, creating an environment for the Shape 3D node
Controls Tab
Layout
The Layout menu determines the type and number of inputs for the cube map texture.
Valid options are:
– Separate Images: This option exposes six inputs on the node, one for each face of the cube. If the
separate images are not square or not of the same size, they are rescaled into the largest 1:1 image
that can contain all of them.
– Vertical Cross: This option exposes a single input on the node. The image should be an
unwrapped texture of a cube containing all the faces organized into a Vertical Cross formation,
where the height is larger than the width. If the image aspect of the cross image is not 3:4, the
CubeMap node crops it down so it matches the applicable aspect ratio.
– Horizontal Cross: This option exposes a single input on the node. The image should be an
unwrapped texture of a cube containing all the faces organized into a Horizontal Cross formation,
where the width is larger than the height. If the image aspect of the cross image is not 4:3, the
CubeMap node crops it down so that it matches the applicable aspect ratio.
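The horizontal cross layout can be illustrated by slicing a 4:3 cross image into its six faces. The face placement below is one common convention (top/bottom above and below the front face); Fusion's exact arrangement may differ.

```python
import numpy as np

def split_horizontal_cross(cross):
    """Slice a 4:3 horizontal-cross image into six cube faces.
    Placement is a common convention, assumed here for illustration:
          [ top  ]
    [left][front][right][back]
          [bottom]"""
    h, w = cross.shape[:2]
    assert w * 3 == h * 4, "horizontal cross must have a 4:3 aspect"
    f = w // 4  # edge length of one face
    at = lambda row, col: cross[row * f:(row + 1) * f, col * f:(col + 1) * f]
    return {"top": at(0, 1), "left": at(1, 0), "front": at(1, 1),
            "right": at(1, 2), "back": at(1, 3), "bottom": at(2, 1)}
```

A 16×12 cross therefore yields six 4×4 faces, which is why non-4:3 inputs must be cropped first.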
Coordinate System
The coordinate system menu sets the position values used when converting the image into a texture.
– Model: This option orients the texture along the object local coordinate system.
– World: This option orients the resulting texture using the global or world coordinate system.
– Eye: This option aligns the texture map to the coordinate system of the camera or viewer.
Rotation
The rotation controls are divided into buttons that select the order of rotation along each axis of the
texture. For example, XYZ would apply the rotation to the X axis first, followed by the Y axis, and finally
the Z axis. The other half of the rotation controls are dials that rotate the texture around its pivot point.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Falloff [3FA]
Falloff example
While the inputs for this node can be images, the output is always a material.
The Falloff node uses one input for the material facing the camera and one for the material not directly facing the camera.
Inspector
Falloff controls
Color Variation
– Two Tone: Two regular Color controls define the colors for Glancing and Face On.
– Gradient: A Gradient control defines the colors for Glancing and Face On. This can be
used for a multitude of effects, like creating Toon Shaders, for example.
Face On Color
The Face On Color defines the color of surface parts facing the camera. If the Face On texture map is
provided, then the color value provided here is multiplied by the color values in the texture.
Reducing the material’s opacity decreases the color and Alpha values of the Face On material, making
the material transparent.
Glancing Color
The Glancing Color defines the color of surface parts more perpendicular to the camera. If the
Glancing material port has a valid input, then this input is multiplied by this color.
Reducing the material’s opacity decreases the color and Alpha values of the Glancing material, making
the material transparent.
Falloff
This value controls the transition between Glancing and Face On strength. It is very similar to a gamma
operation applied to a gradient, blending one value into another.
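The Two Tone behavior can be sketched as a facing-ratio blend: how directly the surface faces the camera picks between the two colors, with the Falloff value acting like a gamma on the blend. This is a simplified reconstruction, not Fusion's shader code.

```python
import numpy as np

def falloff_color(normal, view_dir, face_color, glancing_color, falloff=1.0):
    """Two-tone falloff sketch: blend Face On and Glancing colors by the
    facing ratio, shaped by a gamma-like falloff exponent (illustrative)."""
    n = np.asarray(normal, float); v = np.asarray(view_dir, float)
    n /= np.linalg.norm(n); v /= np.linalg.norm(v)
    facing = abs(float(np.dot(n, v)))  # 1 = face on, 0 = glancing
    t = facing ** falloff              # Falloff acts like gamma on the transition
    return t * np.asarray(face_color, float) + (1 - t) * np.asarray(glancing_color, float)
```

A surface pointing straight at the camera returns the Face On color; a surface edge-on to the camera returns the Glancing color.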
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Fast Noise Texture [3FN]
A Fast Noise Texture node generates a seamless texture, taking advantage of UVW coordinates.
Inspector
Controls Tab
The parameters of the Fast Noise Texture node control the appearance and, for 2D, the animation of
the noise.
Output Mode
– 2D: Calculates the noise texture based on 2D texture coordinates (UV). This setting allows
smoothly varying the noise pattern with animation.
– 3D: Calculates the noise texture based on 3D texture coordinates (UVW). Nodes like Shape 3D
automatically provide a third texture coordinate; otherwise, a 3D texture space can be created
using the UV Map node. The 3D setting does not support animation of the noise pattern.
Brightness
This control adjusts the overall Brightness of the noise map.
Contrast
This control increases or decreases the overall Contrast of the noise map. It can exaggerate the effect
of the noise.
Scale
The scale of the noise map can be adjusted using the Scale slider, changing it from gentle variations
over the entire image to a tighter overall texture effect. This value represents the scale along
the UV axis.
Scale Z
(3D only) The Scale Z value scales the noise texture along the W-axis in texture space. W represents a
direction perpendicular to the UV plane for a 3D texture map.
Seethe
(2D only) The Seethe control smoothly varies the 2D noise pattern.
Seethe Rate
(2D only) As with the Seethe control above, the Seethe Rate also causes the noise map to evolve and
change. The Seethe Rate defines the rate at which the noise changes each frame, causing an
animated drift in the noise automatically, without the need for spline animation.
Discontinuous
Normally, the noise function interpolates between values to create a smooth continuous gradient of
results. You can enable the Discontinuous checkbox to create hard discontinuity lines along some of
the noise contours. The result is a dramatically different effect.
Invert
Enable the Invert checkbox to invert the noise, creating a negative image of the original pattern. This is
most effective when Discontinuous is also enabled.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Gradient 3D [3GD]
The gradient defaults to a linear gradient that goes from -1 to +1 along the Z-axis. All primitives in the
Shape 3D node can output a third texture coordinate for UVW mapping.
Inputs
The Gradient node has no Inputs. The output of the node is connected to a material input on
3D geometry.
A Gradient 3D node generates a resolution-independent gradient texture positioned by the UV Map tool
Gradient 3D controls
Controls Tab
The Controls tab for the Gradient node controls the pattern and colors used for the gradient texture.
Gradient Type
Determines the type or pattern used for the gradient.
– Linear: A simple linear gradient.
– Reflect: Based on the Linear mode, this gradient is mirrored at the middle of the textured range.
– Square: The gradient is applied using a square pattern.
– Cross: Similar to the Reflect mode, but Cross uses two axes to apply the gradient.
– Radial: The Radial mode uses a circular pattern to apply the gradient.
Gradient 3D modes
Gradient Bar
The Gradient control consists of a bar where it is possible to add, modify, and remove color stops of
the gradient. Each triangular color stop on the Gradient bar represents a color in the gradient. It is
possible to animate the color as well as the position of the point. Furthermore, a From Image modifier
can be applied to the gradient to evaluate it from an image.
Scale
Allows sizing of the gradient.
Offset
Allows panning through the gradient.
Repeat
Defines how the left and right borders of the gradient are treated.
– Once: When using the Gradient Offset control to shift the gradient, the border colors keep their
values. Shifting the default gradient to the left results in a white border on the left, while shifting it
to the right results in a black border on the right.
– Repeat: When using the Gradient Offset control to shift the gradient, the border colors wrap
around. Shifting the default gradient to the left results in a sharp jump from white to black, while
shifting it to the right results in a sharp jump from black to white.
– Ping Pong: When using the Gradient Offset control to shift the gradient, the border colors
ping-pong back and forth. Shifting the default gradient to the left results in the edge fading from white
back to black, while shifting it to the right results in the edge fading from black back to white.
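The three Repeat behaviors amount to three ways of mapping an offset gradient position back into the 0–1 range, which can be sketched as follows (an illustration of the border handling, not Fusion's code):

```python
def repeat_position(x, mode):
    """Map an offset gradient position back into [0, 1] (illustrative)."""
    if mode == "Once":       # clamp: border colors hold their values
        return min(max(x, 0.0), 1.0)
    if mode == "Repeat":     # wrap: sharp jump at the borders
        return x % 1.0
    if mode == "Ping Pong":  # mirror back and forth across the borders
        x = x % 2.0
        return 2.0 - x if x > 1.0 else x
    raise ValueError(mode)
```

For example, a position shifted to 1.3 clamps to 1.0 in Once mode, wraps to 0.3 in Repeat mode, and mirrors back to 0.7 in Ping Pong mode.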
Sub Pixel
Determines the accuracy with which the gradient is created.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Sphere Map [3SPM]
Inputs
The single image input on the Sphere Map node accepts a 2D image texture in an equirectangular
format (where the X-axis represents 0–360 degrees longitude, and the Y-axis represents –90 to +90
degrees latitude.)
– ImageInput: The orange Image input accepts a 2D RGBA image. Preferably, this is an
equirectangular image that shows the entire vertical and horizontal angle of view up
to 360 degrees.
A Sphere Map node generates a reflective environment when connected to a Reflect node’s Reflection Color input.
Controls Tab
The Controls tab in the Inspector modifies the mapping of the image input to the sphere map.
Angular Mapping
Adjusts the texture coordinate mapping so the poles are less squashed and areas in the texture get
mapped to equal areas on the sphere. It turns the mapping of the latitude lines from a hemispherical
fisheye to an angular fisheye. This mapping attempts to preserve area and makes it easier to paint on
or modify a sphere map since the image is not as compressed at the poles.
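The underlying equirectangular lookup maps a 3D direction to texture coordinates: longitude drives U across the full 360 degrees, and latitude drives V from pole to pole. The axis convention below (Y up, Z forward) is an assumption for illustration.

```python
import math

def equirect_uv(direction):
    """Map a 3D direction to equirectangular (u, v): u spans 0-360 degrees
    of longitude, v spans -90 to +90 degrees of latitude.
    Assumes Y-up, Z-forward axes (illustrative convention)."""
    x, y, z = direction
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)            # -pi..pi around the vertical axis
    lat = math.asin(y / r)            # -pi/2..pi/2
    u = lon / (2 * math.pi) + 0.5     # 0..1 across the full 360 degrees
    v = lat / math.pi + 0.5           # 0..1 from south pole to north pole
    return u, v
```

The forward direction lands at the center of the texture, and the straight-up direction lands on the top edge, which is where the polar compression this control compensates for occurs.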
Rotation
Offers controls to rotate the texture map.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
The node expects an image with an aspect ratio of 2:1. Otherwise, the image is clamped according to
the following rules:
– width > 2 * height: The width is fitted onto the sphere, and the poles display clamped edges.
– width < 2 * height: The height is fitted onto the sphere, and there is clamping about the 0-degree
longitude line.
Common Controls
Settings tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
NOTE: If you pipe the texture directly into the sphere, it is also mirrored horizontally. You can
change this by using a Transform node first.
Texture 2D [3Tx]
NOTE: Background pixels may have U and V values of 0.0, which set those pixels to the color
of the texture’s corner pixel. To restrict texturing to specific objects, use an effect mask based
on the Alpha of the object, or its Object or Material ID channel. For more information, see
Chapter 18, “Understanding Image Channels” in the Fusion Reference Manual or Chapter 79
in the DaVinci Resolve Reference Manual.
A Texture 2D node is used to set the 3D texture metadata for the input image.
Inspector
Texture 2D controls
Controls Tab
The Controls tab of the Inspector includes the following options.
U/V Offset
These sliders can be used to offset the texture along the U and V coordinates.
U/V Scale
These sliders can be used to scale the texture along the U and V coordinates.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Texture Transform [3TT]
Inspector
NOTE: Not all Wrap modes are supported by all graphics cards.
Translation
The U, V, W translation sliders shift the texture along U, V, and W axes.
Rotation
Rotation Order buttons set the order in which the rotation is applied. In conjunction with the buttons,
the UVW dials define the rotation around the UVW axes.
Scale
U, V, W sliders scale the texture along the UVW axes.
Pivot
U, V, W Pivot sets the reference point for rotation and scaling.
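The interaction of Scale, Rotation, and Pivot can be sketched in two dimensions: coordinates are moved to the pivot, scaled and rotated there, then moved back and translated. This is a simplified 2D (UV) illustration of the transform order; Fusion operates in full UVW.

```python
import numpy as np

def transform_uv(uv, translate=(0, 0), rotate_deg=0.0, scale=(1, 1), pivot=(0.5, 0.5)):
    """Scale and rotate texture coordinates about a pivot, then translate.
    A 2D sketch of the Texture Transform; the real node works in UVW."""
    uv = np.asarray(uv, float) - pivot
    uv = uv * scale                     # scale about the pivot
    a = np.radians(rotate_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    uv = uv @ rot.T                     # rotate about the pivot
    return uv + pivot + translate
```

Note that the pivot point itself is unchanged by any combination of rotation and scale, which is exactly what makes it the reference point for those operations.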
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in the following “The Common Controls” section.
The Common Controls
Settings Tab
The Common Settings tab can be found on most tools in Fusion. The following controls are specific
settings for 3D nodes.
Comment Tab
The Comment tab contains a single text control that is used to add comments and notes to the tool.
When a note is added to a tool, a small red dot icon appears next to the setting’s tab icon, and a text
bubble appears on the node. To see the note in the Node Editor, hold the mouse pointer over the
node for a moment. The contents of the Comments tab can be animated over time, if required.
Scripting Tab
The Scripting tab is present on every tool in Fusion. It contains several edit boxes used to add scripts
that process when the tool is rendering. For more details on the contents of this tab, please consult the
scripting documentation.
Blur Nodes
This chapter details the Blur nodes available in Fusion. The abbreviations next to each
node name can be used in the Select Tool dialog when searching for tools and in
scripting references.
Contents
Blur [Blur] 814
Defocus [DFO] 816
Directional Blur [DRBL] 818
Glow [GLO] 821
Sharpen [SHRP] 824
Soft Glow [SGlo] 826
Unsharp Mask [USM] 829
Vari Blur [VBL] 830
Vector Motion Blur [VBL] 832
The Common Controls 834
Blur [Blur]
Inputs
The two inputs on the Blur node are used to connect a 2D image and an effect mask that can be used
to limit the blurred area.
– Input: The orange input is used for the primary 2D image that is blurred.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the blur to only
those pixels within the mask. An effect mask is applied to the tool after the tool is processed.
Inspector
Blur controls
Controls Tab
The Controls tab contains the primary controls necessary for customizing the blur operation, including
five filter algorithms.
Filter
The Filter menu is where you select the type of filter used to create the blur.
– Box Blur: This option is faster than the Gaussian blur but produces a lower-quality result.
– Bartlett: This option is a more subtle, anti-aliased blur filter.
– Multi-box: Multi-box uses a Box filter layered in multiple passes to approximate a Gaussian shape.
With a moderate number of passes (e.g., four), a high-quality blur can be obtained, often faster
than the Gaussian filter and without any ringing.
– Gaussian: Gaussian applies a smooth, symmetrical blur filter, using a sophisticated constant-time
Gaussian approximation algorithm.
– Fast Gaussian: Fast Gaussian applies a smooth, symmetrical blur filter, using a sophisticated
constant-time Gaussian approximation algorithm. This mode is the default filter method.
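The Multi-box idea can be demonstrated in one dimension: repeated box passes converge on a Gaussian profile (a consequence of the central limit theorem), which is why a few passes give a high-quality blur without ringing. This is an illustrative sketch, not Fusion's implementation.

```python
import numpy as np

def box_blur_1d(signal, radius):
    """One box-filter pass (edge-clamped) over a 1D signal."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(signal, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

def multi_box_blur_1d(signal, radius, passes=4):
    """Repeated box passes approach a Gaussian shape."""
    for _ in range(passes):
        signal = box_blur_1d(signal, radius)
    return signal
```

Blurring an impulse this way produces a smooth, symmetric, bell-shaped response after four passes, while a single pass would produce a flat-topped box.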
Color Channels (RGBA)
The filter defaults to operating on the R, G, B, and A channels. Selective channel filtering is possible
by enabling or disabling these checkboxes.
NOTE: This is not the same as the RGBA checkboxes found under the common controls. The
node takes these selections into account before it processes the image, so deselecting a
channel causes the node to skip that channel when processing, speeding up the rendering of
the effect. In contrast, the channel controls under the Common Controls tab are applied after
the node has processed.
Lock X/Y
Locks the X and Y Blur sliders together for symmetrical blurring. This is enabled by default.
Blur Size
Sets the amount of blur applied to the image. When the Lock X and Y control is deselected,
independent control over each axis is provided.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering.
This is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
Blend
The Blend slider determines the percentage of the affected image that is mixed with the original image.
It blends in more of the original image as the value gets closer to 0.
This control is a cloned instance of the Blend slider in the Common Controls tab. Changes made to this
control are simultaneously made to the one in the common controls.
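The Blend behavior is a per-pixel linear mix between the original and the processed image, which can be written directly:

```python
def blend(original, processed, amount):
    """Mix the processed result back with the original: 0.0 = original only,
    1.0 = full effect (the Blend slider's per-pixel behavior)."""
    return original * (1.0 - amount) + processed * amount
```

At a Blend of 0.25, for example, the output is one quarter effect and three quarters original.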
Examples
Following is a comparison of Blur filters visualized as “cross-sections” of a filtered edge. As you can
see, Box creates a linear ramp, while Bartlett creates a somewhat smoother ramp. Multi-box and
Gaussian are indistinguishable unless you zoom in really close on the slopes. They both lead to even
smoother ramps, but as mentioned above, Gaussian overshoots slightly and may lead to negative
values if used on floating-point images.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Defocus [DFO]
Inspector
Defocus controls
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the defocus operation.
Filter
Use this menu to select the exact method applied to create the defocus. Gaussian applies a simplistic
effect, while Lens mode creates a more realistic defocus. Lens mode takes significantly longer
than Gaussian.
Lock X/Y
When Lock X/Y is selected, this performs the same amount of defocusing to both the X- and Y-axis of
the image. Deselect to obtain individual control.
Defocus Size
The Defocus Size control sets the size of the defocus effect. Higher values blur the image by greater
amounts and produce larger blooms.
Bloom Threshold
Pixels with values above the set Bloom Threshold are defocused and have a glow applied (blooming).
Pixels below that value are only defocused.
The following four lens options are available only when the Filter is set to Lens.
– Lens Type: The basic shape used to create the “bad bokeh” effect. This can be refined further
with the Angle, Sides, and Shape sliders.
– Lens Angle: Defines the rotation of the shape. Best visible with NGon lens types. Because of the
round nature of a circle, this slider has no visible effect when the Lens Type is set to Circle.
– Lens Sides: Defines how many sides the NGon shapes have. Best visible with NGon lens types.
Because of the round nature of a circle, this slider has no visible effect when the Lens Type is set
to Circle.
– Lens Shape: Defines how pointed the NGons are. Higher values create a more pointed, starry
look. Lower values create smoother NGons. Best visible with NGon lens types and Lens Sides
between 5 and 10. Because of the round nature of a circle, this slider has no visible effect when
the Lens Type is set to Circle.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering. This
is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
– Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
– None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Directional Blur [DRBL]
Inputs
The two inputs on the Directional Blur node are used to connect a 2D image and an effect mask which
can be used to limit the blurred area.
– Input: The orange input is used for the primary 2D image that has the directional blur applied.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the directional
blur to only those pixels within the mask. An effect mask is applied to the tool after it is
processed.
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the directional blur
operation.
Center X and Y
This coordinate control and its related viewer crosshair affect the Radial and Zoom Motion Blur types
only. They position where the blurring effect starts.
Length
Length adjusts the strength and heading of the effect. Values lower than zero cause the blur to head in
the direction opposite the Angle control. Values greater than the slider maximum may be typed into the
slider’s edit box.
Angle
In both Linear and Center modes, this control modifies the direction of the directional blur. In the Radial
and Zoom modes, the effect is similar to the camera spinning while looking at the same spot. If the
Length slider is set to a value other than zero, this creates a whirlpool effect.
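A linear directional blur can be sketched as averaging shifted copies of the image along the heading set by Length and Angle. This is a coarse illustration (integer shifts, a handful of samples), not Fusion's filter.

```python
import numpy as np

def linear_directional_blur(image, length, angle_deg, samples=8):
    """Average shifted copies of the image along a heading; a negative
    length heads opposite the Angle (illustrative sketch)."""
    a = np.radians(angle_deg)
    dx, dy = np.cos(a) * length, np.sin(a) * length
    acc = np.zeros_like(image, dtype=float)
    for i in range(samples):
        t = i / (samples - 1)  # 0..1 along the blur heading
        shift = (int(round(t * dy)), int(round(t * dx)))
        acc += np.roll(image, shift, axis=(0, 1))
    return acc / samples
```

A single bright pixel blurred this way smears into a streak starting at its original position and extending along the chosen angle.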
Glow
This adds a Glow to the directional blur, which can be used to duplicate the effect of increased camera
exposure to light caused by longer shutter speeds.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering.
This is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
– Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
– None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Glow [GLO]
Inputs
The Glow node has three inputs: an orange one for the primary 2D image input, a blue one for an
effect mask, and a third white input for a Glow mask.
– Input: The orange input is used for the primary 2D image that has the glow applied.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input restricts the source
of the glow to only those pixels within the mask. An effect mask is applied to the tool after it is
processed.
– Glow Mask: The Glow node supports pre-masking using the white glow mask input. A Glow
pre-mask filters the image before applying the glow. The glow is then merged back over the
original image. This is different from a regular effect mask that clips the rendered result.
The Glow mask allows the glow to extend beyond the borders of the mask, while restricting the source
of the glow to only those pixels within the mask.
Glow masks are identical to Effect masks in every other respect.
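The difference between the two mask types can be sketched as a pipeline: a glow mask filters the source before the blur, so the glow can spill past the mask edge, while an effect mask clips the final result. The merge below uses a simple maximum as a stand-in for the glow composite; it is illustrative only.

```python
import numpy as np

def apply_glow(image, glow_fn, glow_mask=None, effect_mask=None):
    """Pre-mask vs. effect mask (illustrative): the glow mask restricts the
    *source* of the glow, the effect mask clips the *result*."""
    source = image * glow_mask if glow_mask is not None else image
    glow = glow_fn(source)                   # any blur/glow filter
    result = np.maximum(image, glow)         # stand-in for the glow merge
    if effect_mask is not None:              # effect mask clips the output
        result = image * (1 - effect_mask) + result * effect_mask
    return result
```

With the same mask used both ways, the glow-mask version still glows outside the mask border, while the effect-mask version is black there.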
Glow controls
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the glow operation. A
Color Scale section at the bottom of the Inspector can be used for tinting the glow.
Filter
Use this menu to select the method of Blur used in the filter. The selections are described below.
– Box: A simple but very fast Box filter.
– Bartlett: Bartlett adds a softer, subtler glow with a smoother drop-off but may take longer to
render than Box.
– Multi-box: Multi-box uses a Box filter layered in multiple passes to approximate a Gaussian shape.
With a moderate number of passes (e.g., four), a high-quality blur can be obtained, often faster
than the Gaussian filter, and without any ringing.
– Gaussian: Gaussian adds a soft glow, blurred by the Gaussian algorithm.
– Fast Gaussian: Fast Gaussian adds a soft glow, blurred by the Gaussian algorithm. This is the
default method.
– Blend: Blend adds a nonlinear glow that is evenly visible in the whites and blacks.
– Hilight: Hilight adds a glow without creating a halo in the surrounding pixels.
– Solarize: Solarize adds a glow and solarizes the image.
Color Channels (RGBA)
The filter defaults to operating on the R, G, B, and A channels. Selective channel filtering is possible
by enabling or disabling these checkboxes.
NOTE: This is not the same as the RGBA checkboxes found under the common controls. The
node takes these selections into account before it processes the image, so deselecting a
channel causes the node to skip that channel when processing, speeding up the rendering of
the effect. In contrast, the channel controls under the Common Controls tab are applied after
the node has processed.
Glow Size
Glow Size determines the size of the glow effect. Larger values expand the size of the glowing
highlights of the image.
Num Passes
Only available in Multi-box mode. Larger values lead to a smoother distribution of the effect but also
increase render times. Aim for a balance between the desired quality and an acceptable render time.
Glow
The Glow slider determines the intensity of the glow effect. Larger values tend to completely blow the
image out to white.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering. This
is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
– Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
– None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Blend
The Blend slider determines the percentage of the affected image that is mixed with the original image.
It blends in more of the original image as the value gets closer to 0.
This control is a cloned instance of the Blend slider in the Common Controls tab. Changes made to this
control are simultaneously made to the one in the common controls.
Apply Mode
Three Apply Modes are available when it comes to applying the glow to the image.
– Normal: Default. This mode simply adds the glow directly over the top of the original image.
– Merge Under: Merge Under places the glow beneath the image, based on the Alpha channel.
– Threshold: Threshold mode permits clipping of the glow. A new High-Low range slider appears.
Pixels in the glowed areas with values below the low value are pushed to black. Pixels with
values greater than high are pushed to white.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Sharpen [SHRP]
Inputs
The two inputs on the Sharpen node are used to connect a 2D image and an effect mask that can limit
the area affected by the sharpen.
– Input: The orange input is used for the primary 2D image for sharpening.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the sharpen to
only those pixels within the mask. An effect mask is applied to the tool after it is processed.
Sharpen controls
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the sharpen operation.
NOTE: This is not the same as the RGBA checkboxes found under the common controls. The
node takes these selections into account before it processes the image, so deselecting a
channel causes the node to skip that channel when processing, speeding up the rendering of
the effect. In contrast, the channel controls under the Common Controls tab are applied after
the node has processed.
Lock X/Y
This locks the X and Y Sharpen sliders together for symmetrical sharpening. This is checked
by default.
Amount
This slider sets the amount of sharpening applied to the image. When the Lock X/Y control is
deselected, independent control over each axis is provided.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering. This
is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
– Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
– None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
Like the Glow node, Soft Glow also has three inputs: an orange one for the primary image input, a blue
one for an effect mask, and a third white input for a Glow mask.
– Input: The orange input is used for the primary 2D image for the soft glow.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the soft glow to
only those pixels within the mask. An effect mask is applied to the tool after it is processed.
– Glow Mask: The Soft Glow node supports pre-masking using the white glow mask input.
A Glow pre-mask filters the image before applying the soft glow. The soft glow is then
merged back over the original image. This is different from a regular effect mask that clips the
rendered result.
The Glow mask allows the soft glow to extend beyond the borders of the mask, while restricting the
source of the soft glow to only those pixels within the mask.
Glow masks are identical to effect masks in every other respect.
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the soft glow operation. A
color scale section at the bottom of the Inspector can be used for tinting the soft glow.
Filter
Use this menu to select the method of Blur used in the filter. The selections are described below.
– Box: A simple but very fast Box filter.
– Bartlett: Bartlett adds a softer, subtler glow with a smoother drop-off but may take longer to
render than Box.
– Multi-box: Multi-box uses a Box filter layered in multiple passes to approximate a Gaussian shape.
With a moderate number of passes (e.g., four), a high-quality blur can be obtained, often faster
than the Gaussian filter and without any ringing.
– Gaussian: Gaussian adds a soft glow, blurred by the Gaussian algorithm.
This is the default method.
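The Multi-box idea can be demonstrated in one dimension: stacking several box passes converges toward a Gaussian bell shape (a consequence of the central limit theorem). This sketch is illustrative only; the function names and the box-filter radius parameter are invented for the example:

```python
import numpy as np

def box_blur_1d(signal, radius):
    """Simple 1D box blur with edge clamping."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(signal, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

def multi_box_blur_1d(signal, radius, passes=4):
    """Approximate a Gaussian by layering several box passes."""
    out = np.asarray(signal, dtype=float)
    for _ in range(passes):
        out = box_blur_1d(out, radius)
    return out

# An impulse blurred this way spreads into a smooth, bell-shaped bump.
impulse = np.zeros(41)
impulse[20] = 1.0
bump = multi_box_blur_1d(impulse, 2, passes=4)
```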
NOTE: This is not the same as the RGBA checkboxes found under the common controls. The
node takes these selections into account before it processes the image, so deselecting a
channel causes the node to skip that channel when processing, speeding up the rendering of
the effect. In contrast, the channel controls under the Common Controls tab are applied after
the node has processed.
Gain
The Gain control defines the brightness of the glow.
Lock X/Y
When Lock X/Y is checked, both the horizontal and vertical glow amounts are locked. Otherwise,
separate amounts of glow may be applied to each axis of the image.
Glow Size
This amount determines the size of the glow effect. Larger values expand the size of the glowing
highlights of the image.
Num Passes
Available only in Multi-box mode. Larger values lead to a smoother distribution of the effect but also
increase render times. Aim for a balance between the desired quality and an acceptable render time.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering. This
is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
– Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
– None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Blend
The Blend slider determines the percentage of the affected image that is mixed with original image. It
blends in more of the original image as the value gets closer to 0.
This control is a cloned instance of the Blend slider in the Common Controls tab. Changes made to this
control are simultaneously made to the one in the common controls.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Unsharp Mask node are used to connect a 2D image and an effect mask for
limiting the effect.
– Input: The orange input is used for the primary 2D image for the Unsharp Mask.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the Unsharp
Mask to only those pixels within the mask. An effect mask is applied to the tool after it is
processed.
Inspector
NOTE: This is not the same as the RGBA checkboxes found under the common controls.
The node takes these selections into account before it processes the image, so deselecting a
channel causes the node to skip that channel when processing, speeding up the rendering of
the effect. In contrast, the channel controls under the Common Controls tab are applied after
the node has processed.
Lock X/Y
When Lock X/Y is checked, both the horizontal and vertical sharpen amounts are locked. Otherwise,
separate amounts of sharpening may be applied to each axis of the image.
Size
This control adjusts the size of the blur filter applied to the extracted image. The higher this value, the
more likely it is that pixels are identified as detail.
Gain
The Gain control adjusts how much gain is applied to pixels identified as detail by the mask. Higher
values create a sharper image.
Threshold
This control determines the frequencies from the source image to be extracted. Raising the value
eliminates lower-contrast areas from having the effect applied.
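The classic unsharp mask technique behind these controls (blur the image, extract the difference as "detail," and boost it) can be sketched in one dimension. This is an illustrative approximation, assuming a box blur stands in for the node's blur filter; the function name and exact math are not Fusion's:

```python
import numpy as np

def unsharp_mask_1d(signal, size, gain, threshold=0.0):
    """Illustrative 1D unsharp mask: blur, extract detail, boost it.

    `size` is the box-blur radius standing in for the Size control;
    `threshold` drops low-contrast detail before the gain is applied.
    """
    kernel = np.ones(2 * size + 1) / (2 * size + 1)
    padded = np.pad(signal, size, mode="edge")
    blurred = np.convolve(padded, kernel, mode="valid")
    detail = signal - blurred
    detail = np.where(np.abs(detail) < threshold, 0.0, detail)
    return signal + gain * detail
```

A flat region produces no detail and is left untouched, while a step edge gains the characteristic overshoot that reads as sharpening.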
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the Vari Blur operation.
Method
Use this menu to select the method of Blur used in the filter. The selections are described below.
Quality
Increasing Quality gives smoother blurs, at the expense of speed. Quality set to 1 uses a very fast but
simple Box blur for all Method settings. A Quality of 2 is usually sufficient for low Blur Size values.
A Quality of 4 is generally good enough for most jobs unless Blur Size is particularly high.
Blur Channel
This selects which channel of the Blur Image controls the amount of blurring applied to each pixel.
Lock X/Y
When selected, only a Blur Size control is shown, and changes to the amount of blur are applied to
both axes equally. If the checkbox is cleared, individual controls appear for both X and Y Blur Size.
Blur Size
Increasing this control increases the overall amount of blur applied to each pixel. Pixels where
the Blur image is black or nonexistent are not blurred, regardless of the Blur Size.
Blur Limit
This slider limits the useable range from the Blur image. Some Z-depth images can have values that
go to infinity, which skew blur size. The Blur Limit is a way to keep values within range.
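The clamping described for Blur Limit can be sketched as below. This is an illustrative NumPy sketch, assuming the limit simply caps the float values (including infinities common in the background of a Z-depth pass) before they are used as blur sizes:

```python
import numpy as np

def limit_blur_values(z, blur_limit):
    """Clamp a float Z/blur channel so runaway values (e.g., inf in the
    background of a Z-depth pass) cannot explode the blur size."""
    return np.clip(np.nan_to_num(z, posinf=blur_limit), 0.0, blur_limit)
```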
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Vector Motion Blur node has three inputs for a 2D image, a motion vector pass, and an
effect mask.
– Input: The required orange input is for a 2D image that receives the motion blur.
– Vectors: The green input is also required. This is where you connect a motion vector AOV
rendered from a 3D application or an EXR file generated from the Optical Flow node in Fusion.
– Vector Mask: The white Vector Mask input is an optional input that masks the image before
processing.
– Effect Mask: The common blue input is used for a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
restricts the source of the motion blur to only those pixels within the mask. An effect mask is
applied to the tool after it is processed.
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the Vector Motion
Blur operation.
Y Channel
Use this menu to select which channel of the image provides the vectors for the movement of the
pixels along the Y-axis.
Flip Channel
These checkboxes can be used to flip, or invert, the X and Y vectors. For instance, a value of 5 for a
pixel in the X-vector channel would become –5 when the X checkbox is enabled.
Scale
The X and Y vector channel values for a pixel are multiplied by the value of this slider. For example,
given a scale of 2 and a vector value of 10, the result would be 20. This slider splits to show Scale X
and Scale Y if the Lock Scale X/Y checkbox is not enabled.
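The Flip and Scale arithmetic is simple per-channel math, sketched below. This is an illustrative NumPy sketch of the described behavior; the function name and signature are invented for the example:

```python
import numpy as np

def adjust_vectors(vx, vy, scale_x=1.0, scale_y=1.0, flip_x=False, flip_y=False):
    """Apply Flip and Scale controls to a motion-vector pass.

    Flip negates a channel (5 becomes -5); Scale multiplies it, so a
    vector value of 10 with a scale of 2 moves the pixel 20 units.
    """
    vx = np.asarray(vx, dtype=float) * scale_x * (-1.0 if flip_x else 1.0)
    vy = np.asarray(vy, dtype=float) * scale_y * (-1.0 if flip_y else 1.0)
    return vx, vy
```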
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in the following “The Common Controls” section.
Inspector
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., where the mask
value is 0) to become black/transparent.
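The multiplication is a straightforward broadcast of the single-channel mask across the color channels, sketched here as illustrative NumPy (not Fusion's internal code):

```python
import numpy as np

def multiply_by_mask(rgb, mask):
    """Multiply an RGB image (H x W x 3) by a single-channel mask (H x W)
    so that pixels outside the mask (mask == 0) become black."""
    return rgb * mask[..., np.newaxis]

# One masked pixel survives; the other is pushed to black.
rgb = np.ones((1, 2, 3))
mask = np.array([[1.0, 0.0]])
out = multiply_by_mask(rgb, mask)
```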
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels” in the Fusion Reference Manual or Chapter 79 in the DaVinci Resolve
Reference Manual.
Motion Blur
– Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
– Quality: Quality determines the number of samples used to create the blur. A Quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
– Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one whole frame exposure. Higher values are possible
and can be used to create interesting effects.
– Center Bias: Center Bias modifies the position of the center of the motion blur. This allows the
creation of motion trail effects.
– Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
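The Shutter Angle convention above maps directly to exposure time: 360 degrees means the shutter is open for the whole frame. The conversion can be sketched as (illustrative helper, not part of Fusion):

```python
def shutter_open_time(shutter_angle, frame_rate):
    """Convert a shutter angle to exposure time per frame in seconds.

    360 degrees = the full frame duration; 180 degrees = half of it
    (the traditional film look at 24 fps is 1/48 s).
    """
    return (shutter_angle / 360.0) * (1.0 / frame_rate)
```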
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower-left corner of the node when the full tile is
displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Color Nodes
This chapter details the Color nodes available in Fusion. The abbreviations next to
each node name can be used in the Select Tool dialog when searching for tools and
in scripting references.
Contents
Auto Gain [AG] 838
Brightness Contrast [BC] 839
Channel Booleans [BOL] 842
Color Corrector [CC] 845
Color Curves [CCV] 855
Color Gain [CLR] 858
Color Matrix [CMX] 862
Color Space [CS] 866
Copy Aux [CPA] 868
Gamut [GMT] 872
Hue Curves [HCV] 874
OCIO CDL Transform [OCD] 877
OCIO Color Space [OCC] 880
OCIO File Transform [OCF] 882
Set Canvas Color [SCV] 883
White Balance [WB] 885
The Common Controls 889
Inputs
The two inputs on the Auto Gain node are the input and effect mask.
– Input: The orange input connects the primary 2D image for the auto gain.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the auto gain
adjustment to only those pixels within the mask. An effect mask is applied to the tool after the
tool is processed.
Inspector
NOTE: Variations over time in the input image can cause corresponding variations in the
levels of the result. For example, if a bright object moves out of an otherwise dark shot, the
remaining scene gets suddenly brighter, as the remaining darker values get stretched to
white. This also applies to sudden depth changes when Do Z is applied; existing objects may
be pushed forward or backward when a near or far object enters or leaves the scene.
Do Z
Select the Do Z checkbox to apply the Auto Gain effect to the Z or Depth channels. This can be useful
for matching the ranges of one Z-channel to another, or to view a float Z-channel in the RGB values.
Range
This Range control sets the black point and white point in the image. All tonal values in the image
rescale to fit within this range.
Example
Create a horizontal gradient with the Background node. Set one color to dark gray
(RGB Values 0.2). Set the other color to light gray (RGB Values 0.8).
Add an Auto Gain node and set the Low value to 0.0 and the High value to 0.5. This causes
the brightest pixels to be pushed down to 0.5, and the darkest pixels get pushed to black.
The remainder of the pixel values scale between those limits.
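The gradient example above can be reproduced with a small sketch of the range rescaling. This is an illustrative NumPy approximation of the described behavior (stretch the darkest value to Low and the brightest to High), not Fusion's exact implementation:

```python
import numpy as np

def auto_gain(image, low=0.0, high=1.0):
    """Rescale all tonal values so the darkest pixel lands on `low`
    and the brightest on `high`."""
    lo, hi = image.min(), image.max()
    if hi == lo:  # flat image: nothing to stretch
        return np.full_like(image, low)
    return low + (image - lo) / (hi - lo) * (high - low)

# The example from the text: a 0.2..0.8 gradient with Low 0.0 and High 0.5.
grad = np.linspace(0.2, 0.8, 5)
out = auto_gain(grad, 0.0, 0.5)
```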
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the brightness, contrast
operations.
Gain
The Gain slider is a multiplier of the pixel value. A Gain of 1.2 makes a pixel that is R0.5 G0.5 B0.4 into
R0.6 G0.6 B0.48 (i.e., 0.4 x 1.2 = 0.48) while leaving black pixels unaffected. Gain affects higher values
more than it affects lower values, so the effect is most influential in the midrange and top range of
the image.
Lift
While Gain scales the color values around black, Lift scales the color values around white. Pixel
values are scaled toward white by the value of this control. A Lift of 0.5 makes a pixel that is R0.0 G0.0
B0.0 into R0.5 G0.5 B0.5 while leaving white pixels unaffected. Lift affects lower values more than it
affects higher values, so the effect is most influential in the midrange and low range of the image.
Gamma
Values higher than 1.0 raise the Gamma (mid-gray), whereas lower values decrease it. The effect of this
node is not linear, and existing black or white points are not affected at all. Pure gray colors are
affected the most.
Contrast
Contrast is the range of difference between the light and dark areas of the image. Increasing the value
of this slider increases the contrast, pushing color from the midrange toward black and white. Reducing
the contrast causes the colors in the image to move toward midrange, reducing the difference between
the darkest and brightest pixels in the image.
Brightness
The value of the Brightness slider gets added to the value of each pixel in the image. This control’s
effect on an image is linear, so the effect is applied identically to all pixels regardless of value.
Saturation
Use this control to increase or decrease the amount of Saturation in the image. A saturation of 0 has
no color, reducing the image to grayscale.
Direction
Forward applies all values normally. Reverse effectively inverts all values.
Clip Black/White
The Clip Black and Clip White checkboxes clip out-of-range color values that can appear in an image
when processing in floating-point color depth. Out-of-range colors are below black (0.0) or above
white (1.0). These checkboxes have no effect on images processed at 8-bit or 16-bit per channel, as
such images cannot have out-of-range values.
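The per-control arithmetic described above can be sketched as below. This is an illustrative NumPy sketch; the order in which the controls are applied here is an assumption for the example, not Fusion's documented processing pipeline:

```python
import numpy as np

def brightness_contrast(v, gain=1.0, lift=0.0, gamma=1.0,
                        contrast=0.0, brightness=0.0,
                        clip_black=False, clip_white=False):
    """Illustrative per-pixel math for the Brightness Contrast controls.

    The operation order is assumed, not Fusion's documented pipeline.
    """
    v = v * gain                                       # Gain: multiply; black stays black
    v = v * (1.0 - lift) + lift                        # Lift: scale toward white; white stays white
    v = np.power(np.clip(v, 0.0, None), 1.0 / gamma)   # Gamma: nonlinear midtone shift
    v = (v - 0.5) * (1.0 + contrast) + 0.5             # Contrast: expand around mid-gray
    v = v + brightness                                 # Brightness: plain addition
    if clip_black:
        v = np.maximum(v, 0.0)
    if clip_white:
        v = np.minimum(v, 1.0)
    return v
```

This reproduces the worked examples in the text: a Gain of 1.2 turns 0.4 into 0.48, and a Lift of 0.5 turns black into 0.5.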
Common Controls
Settings Tab
The Settings tab in the Inspector appears in other Color nodes. These common controls are described
in detail at the end of this chapter in “The Common Controls” section.
NOTE: Be aware of another similarly named Channel Boolean (3Bol), which is a 3D node used
to remap and modify channels of 3D materials. When modifying 2D channels, use the
Channel Booleans (with an “s”) node (Bol).
Inputs
There are four inputs on the Channel Booleans node in the Node Editor, but only the orange
Background input is required.
– Background: This orange input connects a 2D image that gets adjusted by the foreground
input image.
– Effect Mask: The blue effect mask input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the channel booleans adjustment to only those pixels within the mask.
– Foreground: The green foreground input connects a 2D image that is used to adjust the
background input image.
– Matte: The white matte input can be used to combine external mattes with the foreground and
background operations.
Inspector
Operation
This menu selects the mathematical operation applied to the selected channels. The options are
as follows:
– Copy: Copy the value from one color channel to another. For example, copy the foreground red
channel into the background’s Alpha channel to create a matte.
– Add: Add the color values from one color channel to another channel.
– Subtract: Subtract the color values of one color channel from another color channel.
– And: Perform a logical AND on the color values from color channel to color channel. The
foreground image generally removes bits from the color channel of the background image.
– Or: Perform a logical OR on the color values from color channel to color channel. The foreground
image generally adds bits from the color channel of the background image.
Inspector
Examples
To copy the Alpha channel of one image to its color channels, set the red, green, and blue
channels to Alpha BG. Set the Operation to Copy.
To copy the Alpha channel from another image, set To Alpha to “Alpha FG.”
To replace the existing Alpha channel of an image with the Alpha of another image, choose
“Do Nothing” for To Red, To Green, and To Blue and “Alpha FG” for To Alpha. Pipe the image
containing the Alpha into the foreground input on the Channel Booleans node. Set Operation:
“Copy.” The same operation is available in the Matte Control node.
To combine any mask into an Alpha channel of an image, choose “Do Nothing” for To Red,
To Green, and To Blue and “Matte” for To Alpha. Pipe the mask into the foreground input on
the Channel Booleans node. Set Operation: “Copy.”
To subtract the red channel’s pixels of another image from the blue channel, choose “Do
Nothing” for To Red and To Green and “Red FG” for To Blue. Pipe the image containing the red
channel to subtract into the foreground input on the Channel Booleans node. Set
Operation: “Subtract.”
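The Copy and Subtract examples above amount to per-channel array operations, sketched here as illustrative NumPy (channel indices 0-3 standing in for R, G, B, A; the helper names are invented):

```python
import numpy as np

def copy_channel(dst_image, src_image, to_channel, from_channel):
    """Channel Booleans 'Copy': write one channel's values into another."""
    out = dst_image.copy()
    out[..., to_channel] = src_image[..., from_channel]
    return out

def subtract_channel(dst_image, src_image, to_channel, from_channel):
    """Channel Booleans 'Subtract': subtract the source channel from the
    destination channel."""
    out = dst_image.copy()
    out[..., to_channel] = dst_image[..., to_channel] - src_image[..., from_channel]
    return out

# Copy the foreground red channel (0) into the background Alpha (3).
bg = np.zeros((1, 1, 4))
fg = np.full((1, 1, 4), 0.5)
matte = copy_channel(bg, fg, to_channel=3, from_channel=0)
```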
Common Controls
Settings Tab
The Settings tab in the Inspector appears in other Color nodes. These common controls are described
in detail at the end of this chapter in “The Common Controls” section.
Inspector
Range
This menu determines the tonal range affected by the color correction controls in this tab. The menu
can be set to Shadows, Midtones, Highlights, and Master, where Master is the default affecting the
entire image.
The selected range is maintained throughout the Colors, Levels, and Suppress sections of the Color
Corrector node.
Adjustments made to the image in the Master channel are applied to the image after any changes
made to the Highlight, Midtone, and Shadow ranges.
NOTE: The controls are independent for each color range. For example, adjusting the
Gamma control while in Shadows mode does not change or affect the value of the Gamma
control for the Highlights mode. Each control is independent and applied separately.
Color Wheel
The color wheel provides a visual representation of adjustments made to Hue and Saturation, as well
as any tinting applied to the image. Adjustments can be made directly by dragging the color indicator,
or by entering values in the numeric boxes under the color wheel.
The tinting is represented in the color wheel color indicator that shows the color and strength of the
tint. The Highlight setting uses a black outline for the color indicator. The Midtones and Shadows use
gray color indicators. The Master color indicator is also black, but it has a white M in the center to
distinguish it from the others.
The mouse can position the color indicator for each range only when the applicable range is selected.
For example, the Highlight color indicator cannot be moved when the Master range is selected.
Holding down the Command or Ctrl key while dragging this indicator allows you to make finer
adjustments by reducing the control’s sensitivity to mouse movements. Holding down the Shift key
limits the movement of the color indicator to a single axis, allowing you to restrict the effect to either
tint or strength.
Tint Mode
This menu is used to select the speed and quality of the algorithm used to apply the hue and
saturation adjustments. The default is Better, but for working with larger images, it may be desirable to
use a faster method.
Hue
This slider is a clone of the Hue control located under the color wheel. The slider makes it easier to
make small adjustments to the value with the mouse. The Hue control provides a method of shifting
the hue of the image (or selected color range) through the color spectrum. The control value has an
effective range between -0.1 and 1.0, which represents the angle of rotation in a clockwise direction. A
value of 0.25 would be 90 degrees (90/360) and would have the effect of shifting red toward blue,
green to red, and so on.
Hue shifting can be done by dragging the slider, entering a value directly into the text control, or by
placing the mouse above the outer ring of the color wheel and dragging the mouse up or down. The
outer ring always shows the shifted colors compared to the original colors shown in the center of
the wheel.
Channel
This menu is set for the Histogram, Color, and Levels sections of the Color Corrector node. When the
red channel is selected, the controls in each mode affect the red channel only, and so on.
The controls are independent, so switching to blue does not remove or eliminate any changes made
to red, green, or Master. The animation and adjustments made to each channel are separate. This
menu simply determines what controls to display.
Contrast
Contrast is the range of difference between the light and dark areas of the image. Increasing the value
of this slider increases the contrast, pushing color from the midrange toward black and white. Reducing
the contrast causes the colors in the image to move toward midrange, reducing the difference between
the darkest and brightest pixels in the image.
Gain
The Gain slider is a multiplier of the pixel value. A Gain of 1.2 makes a pixel that is R0.5 G0.5 B0.4 into
R0.6 G0.6 B0.48 (i.e., 0.4 x 1.2 = 0.48), while leaving black pixels totally unaffected. Gain affects higher
values more than it affects lower values, so the effect is strongest in the midrange and top range of
the image.
Lift
While Gain scales the color values around black, Lift scales the color values around white. Pixel
values are scaled toward white by the value of this control. A Lift of 0.5 makes a pixel that is R0.0 G0.0
B0.0 into R0.5 G0.5 B0.5, while leaving white pixels totally unaffected. Lift affects lower values more
than it affects higher values, so the effect is strongest in the midrange and low range of the image.
Gamma
Values higher than 1.0 raise the Gamma (mid gray), whereas lower values decrease it. The effect of this
node is not linear, and existing black or white points are not affected at all. Pure gray colors are
affected the most.
Brightness
The value of the Brightness slider is added to the value of each pixel in your image. This control’s
effect on an image is linear, so the effect is applied identically to all pixels regardless of value.
Range
Identical to the Range menu in the Colors section, this menu determines the tonal range affected by
the color correction controls in this tab. The menu can be set to Shadows, Midtones, Highlights, and
Master, where Master is the default, affecting the entire image.
The selected range is maintained throughout the Colors, Levels, and Suppress sections of the Color
Corrector node.
Adjustments made to the image in the Master channel are applied to the image after any changes
made to the Highlights, Midtones, and Shadows ranges.
NOTE: The controls are independent for each color range. For example, adjusting the
Gamma control while in Shadows mode does not change or affect the value of the Gamma
control for the Highlights mode. Each control is independent and applied separately.
Channel
This menu is used to select and display the histogram for each color channel or for the Master channel.
Histogram Display
A histogram is a chart that represents the distribution of color values in the scene. The chart reads
from left to right, with the leftmost values representing the darkest colors in the scene and the
rightmost values representing the brightest. The more pixels in an image with the same or similar
value, the higher that portion of the chart is.
Luminance is calculated per channel; therefore, the red, green, and blue channels all have their own
histogram, and the combined result of these comprises the Master Histogram.
To scale the histogram vertically, place the mouse pointer inside the control and drag the pointer up to
zoom in or down to zoom out.
Histogram Controls
These controls along the bottom of the histogram display are used to adjust the input image’s
histogram, compressing or shifting the ranges of the selected color channel.
The controls can be adjusted by dragging the triangles beneath the histogram display to the left
and right.
Shifting the High value toward the left (decreasing the value) causes the histogram to slant toward
white, shifting the image distribution toward white. The Low value has a similar effect in the opposite
direction, pushing the image distribution toward black.
Output Level
The Output Level control can apply clipping to the image, compressing the histogram. Decreasing the
High control reduces the value of pixels in the image, sliding white pixels down toward gray and gray
pixels toward black.
Adjusting the Low control toward High does the opposite, sliding the darkest pixels toward white.
If the low value were set to 0.1, pixels with a value of 0.0 would be set to 0.1 instead, and other values
would increase to accommodate the change. The best way to visualize the effect is to observe the
change to the output histogram displayed above.
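The Output Level remapping is a linear compression of the value range, sketched below as illustrative NumPy (the function name is invented for the example):

```python
import numpy as np

def output_level(v, low=0.0, high=1.0):
    """Compress pixel values into [low, high]: with low = 0.1, a pixel at
    0.0 becomes 0.1; lowering `high` slides white pixels down toward gray."""
    return low + np.clip(v, 0.0, 1.0) * (high - low)
```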
Channel
This menu is used to select and display the histogram for each color channel or for the Master channel.
Histogram Display
A histogram is a chart that represents the distribution of color values in the scene. The chart reads
from left to right, with the leftmost values representing the darkest colors in the scene and the
rightmost values representing the brightest. The more pixels in an image with the same or similar
value, the higher that portion of the chart is.
Luminance is calculated per channel; therefore, the red, green, and blue channels all have their own
histogram, and the combined result of these comprises the Master Histogram.
To scale the histogram vertically, place the mouse pointer inside the control and drag the pointer up to
zoom in or down to zoom out.
Histogram Type
Each of these menu options enables a different type of color correction operation.
– Keep: Keep produces no change to the image, and the reference histogram is ignored.
– Equalize: Selecting Equalize adjusts the source image so that all the color values in the image are
equally represented—in essence, flattening the histogram so that the distribution of colors in the
image becomes more even.
– Match: The Match mode modifies the source image based on the histogram from the reference
image. It is used to match two shots with different lighting conditions and exposures so that they
appear similar.
When selected, the Equalize and Match modes reveal the following controls.
Equalize/Match R/G/B
The name of this control changes depending on whether the Equalize or Match modes have been
selected. The slider can be used to reduce the correction applied to the image to equalize or match it.
A value of 1.0 causes the full effect of the Equalize or Match to be applied, whereas lower values
moderate the result.
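The Equalize operation with a strength slider can be sketched as classic histogram equalization (map each value through the normalized cumulative distribution) blended back with the original. This is an illustrative NumPy sketch of the general technique, not Fusion's exact algorithm:

```python
import numpy as np

def equalize(channel, strength=1.0, bins=256):
    """Histogram-equalize a float channel in [0, 1], then blend the result
    with the original by `strength` (1.0 = full effect)."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                                   # normalized CDF in [0, 1]
    equalized = np.interp(channel, edges[:-1], cdf)  # remap through the CDF
    return strength * equalized + (1.0 - strength) * channel
```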
Precision
This menu determines the color fidelity used when sampling the image to produce the histogram.
10-bit produces higher fidelity than 8-bit, and 16-bit produces higher fidelity than 10-bit.
Smooth Correction
Often, color equalization and matching operations introduce posterization in an image, which occurs
because gradients in the image have been expanded or compressed so that the dynamic range
between colors is not sufficient to display a smooth transition. This control can be used to smooth the
correction curve, blending some of the original histogram back into the result for a more even
transition.
Release Match
Click this button to release the current snapshot of the histogram and return to using the live
reference input.
Suppression Angle
Use the Suppression Angle control to rotate the controls on the suppression wheel and zero in on a
specific color.
Ranges Tab
The Ranges tab contains the controls used to specify which pixels in an image are considered to be
shadows and which are considered to be highlights. The midrange is always calculated as pixels not
already included in the shadows or the highlights.
Range
This menu is used to select the tonal range displayed in the viewers, which helps to visualize the
pixels in that range. When the Result menu option is selected, the image displayed by the color
corrector in the viewers is that of the color corrected image. This is the default.
Selecting one of the other menu options switches the display to a grayscale image showing which
pixels are part of the selected range. White pixels represent pixels that are considered to be part of
the range, and black pixels are not in the range. For example, choosing Shadows would show the pixels considered to be shadows as white and all other pixels as black.
Channel
The Channel menu in this tab can be used to examine the range of a specific color channel. By default,
Fusion displays the luminance channel when the color ranges are examined.
Spline Display
The ranges are selected by manipulating the spline handles. There are four spline points, each with
one Bézier handle. The two handles at the top represent the start of the shadow and highlight ranges,
whereas the two at the bottom represent the end of the range. The Bézier handles are used to control
the falloff.
The midtones range has no specific controls since its range is understood to be the space between
the shadow and the highlight ranges.
The X and Y text controls below the spline display can be used to enter precise positions for the
selected Bézier point or handle.
Options Tab
The Options tab includes a few important processing operations, including a simple solution for
color correcting premultiplied Alpha channels.
Pre-Divide/Post-Multiply
Selecting this option divides the color channels by the value of the Alpha before applying the color
correction. After the color correction, the color values are re-multiplied by the Alpha to produce a
properly additive image. This is crucial when performing an additive merge or when working with CG
images generated with premultiplied Alpha channels.
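The order of operations can be sketched in pure Python; the color correction itself is a placeholder function supplied by the caller:

```python
def color_correct_premultiplied(r, g, b, a, correct_fn):
    """Color correct a premultiplied RGBA pixel.

    Divide RGB by Alpha first (Pre-Divide), run the correction on the
    straight color, then re-multiply by Alpha (Post-Multiply) so the
    result stays properly additive.
    """
    if a > 0:
        r, g, b = r / a, g / a, b / a   # Pre-Divide
    r, g, b = correct_fn(r, g, b)       # the actual color correction
    return (r * a, g * a, b * a, a)     # Post-Multiply

# A nonlinear correction (gamma) on a 50%-transparent premultiplied pixel;
# without the divide/multiply pair, semitransparent edges would shift.
out = color_correct_premultiplied(
    0.25, 0.25, 0.25, 0.5,
    lambda r, g, b: (r ** 0.5, g ** 0.5, b ** 0.5))
print(out)  # color channels ≈ 0.354, Alpha unchanged at 0.5
```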
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Color Curves node includes four inputs in the Node Editor.
– Input: This orange input is the only required connection. It connects a 2D image that is
adjusted by the color curves.
– Effect Mask: The optional effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the color curves adjustment to only those pixels within the mask. An effect mask is
applied to the tool after it is processed.
– Reference Image: The optional green input is used to connect a second 2D image that can be
used for reference matching.
– Match Mask: This optional white input accepts any mask, much like an effect mask. However,
this mask defines the area to match during a Match operation. It offers more flexibility in terms
of shape than the built-in Match reference rectangle in the Inspector.
Inspector
Controls Tab
The Controls tab for the color curves is divided into two sections. The top half of the Inspector
includes the curves and LUT controls. The bottom half is dedicated primarily to matching the
reference image.
Mode
The Mode options change between Animated and Dissolve modes. The default mode is No
Animation, where adjustments to the curves are static. Setting the mode provides a change spline for
each channel, allowing the color curve to be animated over time.
Dissolve mode is essentially obsolete and is included for compatibility reasons only.
Color Space
The splines in the LUT view represent color channels from a variety of color spaces. The default is
Red, Green, and Blue. The options in this menu allow an alternate color space to be selected.
Spline Window
The Spline Window displays a standard curve editor for each RGBA channel. These splines can be
edited individually or as a group, depending on the color channels selected above.
The spline defaults to a linear range, from 0 in/0 out at the bottom left to 1 in/1 out at the top right.
At the default setting, each input color value processes to the same output value. If a point is added
in the middle at 0.5 in/0.5 out and moved up, the midtones of the image become brighter.
The spline curves allow precise control over color ranges, so specific adjustments can be made
without affecting other color values.
In and Out
Use the In and Out controls to manipulate the precise values of a selected point. To change a value,
select a point and enter the in/out values desired.
Eyedropper (Pick)
Click the Eyedropper icon, also called the Pick button, and select a color from an image in the display
to automatically set control points on the spline for the selected color. The new points are drawn with
a triangular shape and can only be moved vertically (if a point is locked, only the Out value can change).
Points are only added to enabled splines. To add points only on a specific channel, disable the other
channels before making the selection.
One use for this technique is white balancing an image. Use the Pick control to select a pixel from the
image that should be pure gray. Adjust the points that appear so that the Out value is 0.5 to change
the pixel colors to gray.
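The gray-balancing idea behind this workflow amounts to computing a per-channel gain that maps the sampled pixel to 0.5. A simplified sketch (Fusion's curve adjustment is more sophisticated than a flat gain):

```python
def white_balance_gains(sample, target=0.5):
    """Per-channel gains that map a pixel that should be gray to neutral."""
    return tuple(target / c for c in sample)

def apply_gains(pixel, gains):
    return tuple(c * g for c, g in zip(pixel, gains))

# A pixel that should have been pure gray but reads slightly warm:
sample = (0.6, 0.5, 0.4)
gains = white_balance_gains(sample)
print(apply_gains(sample, gains))  # each channel ≈ 0.5
```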
Use the contextual menu’s Locked Pick Points option to unlock points created using the Pick option,
converting them into normal points.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Gain Tab
The Gain tab provides control of individual RGBA Lift/Gamma/Gain parameters. These controls can
quickly enable you to fix irregular color imbalances in specific channels.
Gain RGBA
The Gain RGBA controls multiply the values of the image channel in a linear fashion. All pixels are
multiplied by the same factor, but the effect is larger on bright pixels and smaller on dark pixels. Black
pixels do not change because multiplying any number times 0 is always 0.
Lift RGBA
While Gain scales the color values around black, Lift scales the color values around white. Pixel
values are interpolated toward white by the value of this control. A Lift of 0.5 turns a pixel that is
R0.0 G0.0 B0.0 into R0.5 G0.5 B0.5, while leaving white pixels totally unaffected. Lift affects lower
values more than it affects higher values, so the effect is strongest in the low range and midrange
of the image.
Gamma RGBA
The Gamma RGBA controls affect the brightness of the midrange in the image. The effect of this node
is nonlinear. White and black pixels in the image are not affected when gamma is modified, whereas
pure grays are affected most by changes to this parameter. Large changes to this control tend to push
midrange pixels into black or white, depending on the value used.
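A common per-channel formulation of these three controls is sketched below; the ordering and exact formula are assumptions for illustration, not Fusion's documented internals:

```python
def lift_gamma_gain(value, lift=0.0, gamma=1.0, gain=1.0):
    """Per-channel Lift/Gamma/Gain, as a sketch.

    Gain scales around black, Lift scales around white, and Gamma
    bends the midrange while leaving 0 and 1 fixed. The ordering here
    is an assumption, not Fusion's documented formula.
    """
    v = value * gain                   # Gain: black (0) is unchanged
    v = lift + v * (1.0 - lift)        # Lift: white (1) is unchanged at gain 1
    v = max(v, 0.0) ** (1.0 / gamma)   # Gamma: 0 and 1 are fixed points
    return v

print(lift_gamma_gain(0.0, lift=0.5))  # black -> 0.5
print(lift_gamma_gain(1.0, lift=0.5))  # white stays 1.0
print(lift_gamma_gain(0.0, gain=2.0))  # black stays 0.0 under Gain
```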
Pre-Divide/Post-Multiply
Selecting this checkbox causes the image pixel values to be divided by the Alpha values prior to the
color correction, and then re-multiplied by the Alpha value after the correction. This helps when
attempting to color correct images with premultiplied Alpha channels.
Saturation Tab
The Saturation tab includes controls for the intensity of the colors in the individual RGB channels.
RGB Saturation
When adjusting an individual channel, a value of 0.0 strips out all of that channel's color. Values
greater than 1.0 intensify the color in the scene, pushing it toward the primary color.
Balance Tab
This tab in the Color Gain node offers controls for adjusting the overall balance of a color channel.
Independent color and brightness controls are offered for the High, Mid, and Dark ranges of
the image.
Colors are grouped into complementary pairs: Red values can be pushed toward Cyan, Green values
toward Magenta, and Blue values toward Yellow. Brightness can be raised or lowered for each of
the channels.
Hue Tab
Use the Hue tab of the Color Gain node to shift the overall hue of the image without affecting the
brightness or saturation. Independent control of the High, Mid, and Dark ranges is offered by
three sliders.
The hues in the RGB color space occur in the following order: Red, Yellow, Green, Cyan, Blue,
Magenta, and back to Red.
High/Mid/Dark Hue
Values above 0 push the hue of the image toward the right (red turns yellow). Values below 0 push the
hue toward the left (red turns magenta). At -1.0 or 1.0, the hue completes the cycle and returns to its
original value.
The default range of the hue sliders is -1.0 to +1.0. Values outside this range can be entered manually.
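The cyclic behavior described above can be modeled with Python's standard colorsys module; a shift of 1.0 wraps the hue back to its original value (a sketch, not Fusion's implementation):

```python
import colorsys

def shift_hue(rgb, amount):
    """Rotate hue by amount (1.0 = one full cycle), keeping S and V."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + amount) % 1.0, s, v)

red = (1.0, 0.0, 0.0)
print(shift_hue(red, 1.0 / 6.0))  # red pushed right toward yellow
print(shift_hue(red, 1.0))        # a full cycle returns to red
```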
Spline Display
The ranges are selected by manipulating the spline handles. There are four spline points, each with
one Bézier handle. The two handles at the top represent the start of the shadow and highlight ranges,
whereas the two at the bottom represent the end of the range. The Bézier handles are used to control
the falloff.
The midtones range has no specific controls since its range is understood to be the space between
the shadow and the highlight ranges. The X and Y text controls below the Spline display can be used
to enter precise positions for the selected Bézier point or handle.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Controls Tab
Color Matrix multiplies the RGBA channels based on the values entered in a 5 x 5 grid. The first four
rows and columns form the RGBA matrix; the fifth column/row is an Add column.
Update Lock
When this control is selected, Fusion does not render the node. This is useful for setting up each value
of the node, and then turning off Update Lock to render it.
Matrix
This defines what type of operation actually takes place. The horizontal rows define the output values
of the node. From left to right, they are R, G, B, A, and Add. The vertical columns define the input
values. From top to bottom, they are R, G, B, A, and Add. The Add column allows simple adding of
values to the individual color channels.
Invert
Enabling this option inverts the Matrix. Think of swapping channels around, doing other operations
with different nodes, and then copying and pasting the original ColorMatrix and setting it to Invert to
get your channels back to the original.
Example 1: Invert
If you want to do a simple invert or negative of the color values, but leave the Alpha channel
untouched, the matrix would look like this:
Observe the fact that we have to add 1 to each channel to push the inverted values back into the
positive numbers.
Let’s follow this example step by step by viewing the waveform of a 32-bit grayscale gradient.
1 The original grayscale gradient.
2 Setting the RGB diagonal to -1 inverts the values into the negative range.
3 Adding 1 to each channel keeps the inversion but moves the values back into a positive range.
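The invert example can be verified with a small matrix multiply, where each output channel is a dot product of the input RGBA values plus the Add column:

```python
def apply_color_matrix(pixel, matrix):
    """Apply a color matrix to an RGBA pixel.

    matrix: four rows of [R, G, B, A, Add] coefficients, one row per
    output channel (the node's fifth row is omitted here, since only
    the RGBA outputs matter for a pixel).
    """
    inputs = (*pixel, 1.0)  # the Add column multiplies a constant 1
    return tuple(sum(c * v for c, v in zip(row, inputs)) for row in matrix)

# Invert RGB but leave Alpha untouched: -1 on the RGB diagonal, 1 in Add.
invert = [
    [-1, 0, 0, 0, 1],  # R out
    [0, -1, 0, 0, 1],  # G out
    [0, 0, -1, 0, 1],  # B out
    [0, 0, 0, 1, 0],   # A out, unchanged
]
print(apply_color_matrix((0.25, 0.5, 0.75, 1.0), invert))  # (0.75, 0.5, 0.25, 1.0)
```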
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Controls Tab
The Controls tab in the Color Space node consists of two menus. The top Conversion menu
determines whether you are converting an image to RGB or from RGB. The bottom menu selects the
alternative color space you are either converting to or from.
Conversion
This menu has three options. The None option has no effect on the image. When To Color is selected,
the input image is converted to the color space selected in the Color Type control found below. When
To RGB is selected, the input image is converted back to the RGB color space from the type selected
in the Color Type menu (for example, YUV to RGB).
Color Type
This menu is used to select the color space conversion applied when the To Color conversion is
selected. When the To RGB option is selected in the Conversion menu, the Color Type option should
reflect the input image’s current color space. There are eight color space options to choose from.
– HSV (Hue, Saturation, and Value): Each pixel in the HSV color space is described in terms of its
Hue, Saturation, and Value components. Value is defined as the quality by which we distinguish
a light color from a dark one, or brightness. Decreasing saturation roughly corresponds to adding
white to a paint chip on a palette. Decreasing Value is roughly similar to adding black.
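Python's standard colorsys module implements the same HSV definition and illustrates the relationships described above:

```python
import colorsys

# Pure red: hue 0, fully saturated, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)  # 0.0 1.0 1.0

# Decreasing saturation (adding white) keeps hue and value:
print(colorsys.hsv_to_rgb(h, 0.5, v))  # (1.0, 0.5, 0.5)

# Decreasing value (adding black) darkens without changing hue:
print(colorsys.hsv_to_rgb(h, s, 0.5))  # (0.5, 0.0, 0.0)
```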
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Copy Aux node includes two inputs: one for the main image and the other for an effect mask.
– Input: This orange input is the only required connection. It connects a 2D image for the Copy
Aux node operation.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the Copy Aux operation to only those pixels within the mask. An effect mask is
applied to the tool after the tool is processed.
Inspector
Mode
The Mode menu determines whether the auxiliary channel is copied into the RGBA color channel (Aux
to Color) or vice versa (Color to Aux). Using this option, you can use one Copy Aux node to bring an
auxiliary channel into color, do some compositing operations on it, and then use another Copy Aux
node to write the color back into the auxiliary channel. When the Mode is set to Color to Aux, all the
options in the Controls tab except the Aux Channel menu are hidden.
Aux Channel
The Aux Channel menu selects the auxiliary channel to be copied from or written to depending on the
current mode. When the aux channel abcd has one valid component, it is copied as aaa1, two valid
components as ab01, three valid components as abc1, and four components as abcd. For example, the
Z-channel is copied as zzz1, texture coordinates as uv01, and normals as nxnynz1.
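The component-expansion rule can be expressed as a small helper function (a sketch, not Fusion code):

```python
def expand_aux(components):
    """Expand 1-4 valid aux components (a, b, c, d) to RGBA per the rule:
    one component -> aaa1, two -> ab01, three -> abc1, four -> abcd."""
    c = list(components)
    if len(c) == 1:
        return (c[0], c[0], c[0], 1.0)
    if len(c) == 2:
        return (c[0], c[1], 0.0, 1.0)
    if len(c) == 3:
        return (c[0], c[1], c[2], 1.0)
    return tuple(c[:4])

print(expand_aux([5.0]))       # a Z value -> (5.0, 5.0, 5.0, 1.0)
print(expand_aux([0.2, 0.8]))  # texture UVs -> (0.2, 0.8, 0.0, 1.0)
```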
Channel Missing
Channel Missing determines what happens if a channel is not present. For example, this determines
what happens if you chose to copy Disparity to Color and your input image does not have a Disparity
aux channel.
– Fail: The node fails and prints an error message to the console.
– Use Default Value: This fills the RGBA channels with the default value of zero for everything
except Z, which is -1e30.
Enable Remapping
When remapping is enabled, the currently selected aux channel is rescaled, linearly mapping the
range according to the From and To slider selections as explained below. The Remapping options are
applied before the conversion operation. This means you could set the From > Min-Max values to -1, 1
to rescale your normals into the [0, 1] range, or set them to [-1000, 0] to rescale your Z values from
[-1000, 0] into the [0, 1] range before the clipping occurs.
Note that the Remapping options are per channel options. That means the default scale for normals
can be set to [-1, +1] > [0, 1] and for Z it can be set [-1000, 0] > [0, 1]. When you flip between normals and
Z, both options are remembered. One way this could be useful is that you can set up the remapping
ranges and save this as a setting that you can reuse. The remapping can be useful to squash the aux
channels into a static [0, 1] range for viewing or, for example, if you wish to compress normals into the
[0, 1] range to store them in an int8 image.
– From > Min: This is the value of the aux channel that corresponds to To > Min.
– From > Max: This is the value of the aux channel that corresponds to To > Max. It is possible to set
the max value less than the min value to achieve a flip/inversion of the values.
– Detect Range: This scans the current image to detect the min/max values and then sets the From
> Min/ From > Max Value controls to these values.
– Update Range: This scans the current image to detect the min/max values and then enlarges the
current [From > Min, From > Max] region so that it contains the min/max values from the scan.
– To > Min: This is the minimum output value, which defaults to 0.
– To > Max: This is the maximum output value, which defaults to 1.
– Invert: After the values have been rescaled into the [To > Min, To > Max] range, this inverts/flips
the range.
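The From/To remapping amounts to a linear rescale, optionally flipped; a minimal sketch:

```python
def remap(value, from_min, from_max, to_min=0.0, to_max=1.0, invert=False):
    """Linearly map value from [from_min, from_max] into [to_min, to_max].

    Setting from_max < from_min flips the mapping, as does Invert,
    which flips within the output range after rescaling.
    """
    t = (value - from_min) / (from_max - from_min)
    out = to_min + t * (to_max - to_min)
    if invert:
        out = to_max - (out - to_min)
    return out

print(remap(0.0, -1.0, 1.0))        # a normal component 0 maps to 0.5
print(remap(-500.0, -1000.0, 0.0))  # a Z value of -500 maps to 0.5
```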
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Gamut node includes two inputs: one for the main image and the other for an effect mask to limit
the conversion area.
– Input: This orange input is the only required connection. It connects a 2D image output that is
the source of the gamut conversion.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the Gamut operation to only those pixels within the mask. An effect mask is applied
to the tool after the tool is processed.
Inspector
Controls Tab
The Controls tab is where all the conversion operations take place. It has a section for incoming
images and a section for the node’s output. Which section you use depends on whether you are
stripping an image of a gamma curve to make it linear or converting a linear image to a specific color
space and gamma curve for output.
Source Space
Source Space determines the input color space of the image. When placed directly after a Loader
node in Fusion or a MediaIn node in DaVinci Resolve, you would select the applicable color space
based on how the image was created and check the Remove Gamma checkbox. The output of the
node would be a linearized image. You leave this setting at No Change when you are adding gamma
using the Output Space control and placing the node directly before the Saver node in Fusion or a
MediaOut node in DaVinci Resolve.
DCI-P3
The DCI-P3 color space is most commonly used in association with DLP projectors. It is frequently
provided as a color space available with DLP projectors and as an emulation mode for 10-bit LCD
monitors such as the HP DreamColor and Apple's Pro Display XDR. This color space is defined in the
SMPTE-431-2 standard.
Custom
The Custom gamut allows you to describe the color space according to CIE 1931 primaries and white
point, which are expressed as XY coordinates, as well as by gamma, limit, and slope. For example, the
DCI-P3 gamut mentioned above would have the following values if described as a Custom
color space.
Red XY: 0.6800, 0.3200
Green XY: 0.2650, 0.6900
Blue XY: 0.1500, 0.0600
White Point XY: 0.3140, 0.3510
Gamma: 2.6 (no limit/slope)
To understand how these controls work, you could view the node attached to a gradient background
in Waveform mode and observe how different adjustments modify the output.
NOTE: When outputting to HD specification Rec. 709, Fusion uses the term Scene to refer to
a gamma of 2.4 and the term Display for a gamma of 2.2.
Remove/Add Gamma
Select these checkboxes to do the gamut conversion in a linear or nonlinear gamma, or simply remove
or add the applicable gamma values without changing the color space.
Pre-Divide/Post-Multiply
Selecting this checkbox causes the image’s pixel values to be divided by the Alpha values prior to the
color correction, and then re-multiplied by the Alpha value after the correction. This helps to avoid the
creation of illegally additive images, particularly around the edges of a blue/green key or when
working with 3D-rendered objects.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Controls Tab
The Controls tab consists of color attribute checkboxes that determine which splines are displayed in
the Spline window. The spline graph runs horizontally across with control points placed horizontally at
each of the primary colors. You can manipulate these control points to change the selected color
attribute.
Spline Window
This graph display is the main interface element of the Hue Curves node, which hosts the various
splines. In appearance, the node is very similar to the Color Curves node, but here the horizontal axis
represents the image’s hue, while the vertical axis represents the degree of adjustment. The Spline
window shows the curves for the individual channels. It is a miniature Spline Editor. In fact, the curves
shown in this window can also be found and edited in the Spline Editor.
The spline curves for all components are initially flat, with control points placed horizontally at each of
the primary colors. From left to right, these are: Red, Yellow, Green, Cyan, Blue, and Magenta. Because
of the cyclical design of the hue gradient, the leftmost control point in each curve is connected to the
rightmost control point of the curve.
Right-clicking in the graph displays a contextual menu containing options for resetting the curves,
importing external curves, adjusting the smoothness of the selected control points, and more.
In and Out
Use the In and Out controls to manipulate the precise values of a selected point. To change a value,
select a point and enter the In/Out values desired.
Eyedropper
Left-clicking and dragging from the Eyedropper icon changes the current mouse cursor to an
Eyedropper. While still holding down the mouse button, drag the cursor to a viewer to pick a pixel from
a displayed image. This causes control points, which are locked on the horizontal axis, to appear on
the currently active curves. The control points represent the position of the selected color on the
curve. Use the contextual menu’s Lock Selected Points toggle to unlock points and restore the option
of horizontal movement.
Points are only added to enabled splines. To add points only on a specific channel, disable the other
channels before making the selection.
Pre-Divide/Post-Multiply
Selecting this checkbox causes the image’s pixel values to be divided by the Alpha values prior to the
color correction, and then re-multiplied by the Alpha value after the correction. This helps when color
correcting images that include a premultiplied Alpha channel.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Generally, the OCIO color pipeline is composed from a set of color transformations defined by OCIO-
specific config files, commonly named with a “.ocio” extension. These config files allow you to share
color settings within or between facilities. The path to the config file to be used is normally specified
by a user-created environment variable called “OCIO,” although some tools allow overriding this. If no
other *.ocio config files are located, the DefaultConfig.ocio file in Fusion’s LUTs directory is used.
For in-depth documentation of the format’s internals, please refer to the official pages on
opencolorio.org.
Inputs
The OCIO CDL Transform node includes two inputs: one for the main image and the other for an effect
mask to limit the area where the CDL is applied.
– Input: This orange input is the only required connection. It connects a 2D image output for the
CDL grade.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the CDL grade to only those pixels within the mask. An effect mask is applied to the
tool after it is processed.
Controls Tab
The Controls tab for the OCIO CDL Transform contains primary color grading color correction controls
in a format compatible with CDLs. You can make R, G, B adjustments based on the Slope, Offset, and
Power. There is also an overall Saturation control. You can also use the Controls tab to import and export
the CDL compatible adjustments.
Operation
This menu switches between File and Controls. In File mode, standard ASC-CDL files can be loaded.
In Controls mode, manual adjustments can be made to Slope, Offset, Power, and Saturation, and the
CDL file can be saved.
NOTE: Using DaVinci Resolve terminology, slope is similar to gain. It controls mids-to-high
contrast. Offset is the overall offset of color balance and exposure. Power is very similar to
contrast with a raised pivot, giving you control over shadow contrast.
Slope
Multiplies the color values. This is the equivalent of Gain in the Brightness Contrast node.
Offset
Adds a fixed offset to the color values, shifting the overall color balance and exposure.
Power
Applies a Gamma Curve. This is an inverse of the Gamma function of the Brightness Contrast node.
Saturation
Enhances or decreases the color saturation. This works the same as Saturation in the Brightness
Contrast node.
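The ASC CDL specification defines these operations per channel as out = clamp(in × slope + offset) ^ power, with saturation applied afterward using Rec. 709 luma weights. A sketch:

```python
def asc_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply ASC CDL: per-channel slope/offset/power, then saturation."""
    corrected = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = max(v * s + o, 0.0)   # slope and offset, clamped at 0
        corrected.append(v ** p)  # power (a gamma-like curve)
    # Saturation uses Rec. 709 luma weights per the CDL specification.
    luma = (0.2126 * corrected[0] + 0.7152 * corrected[1]
            + 0.0722 * corrected[2])
    return tuple(luma + saturation * (v - luma) for v in corrected)

# Identity settings leave the pixel unchanged:
print(asc_cdl((0.25, 0.5, 0.75), (1, 1, 1), (0, 0, 0), (1, 1, 1)))
```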
Export File
Allows the user to export the settings as a CDL file.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Generally, the OCIO color pipeline is composed from a set of color transformations defined by OCIO-
specific config files, commonly named with a “.ocio” extension. These config files allow you to share
color settings within or between facilities. The path to the config file to be used is normally specified
by a user-created environment variable called “OCIO,” though some tools allow overriding this. If no
other *.ocio config files are located, the DefaultConfig.ocio file in Fusion’s LUTs directory is used.
For in-depth documentation of the format’s internals, please refer to the official pages on
opencolorio.org.
Sample configs can be obtained from https://opencolorio.readthedocs.io/en/latest/quick_start/
downloads.html
The functionality of the OCIO Color Space node is also available as a View LUT node from the
View LUT menu.
Inputs
The OCIO Color Space node includes two inputs: one for the main image and the other for an effect
mask to limit the area where the color space conversion is applied.
– Input: This orange input is the only required connection. It connects a 2D image for the color
space conversion.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the color space conversion to only those pixels within the mask. An effect mask is
applied to the tool after it is processed.
Inspector
Controls Tab
The Controls tab for the OCIO Color Space node allows you to convert an image from one color space
to another based on an OCIO config file. By default, it uses the config file included with Fusion;
however, the Controls tab does allow you to load your own config file as well.
OCIO Config
Displays a File > Open dialog to load the desired config file.
Source Space
Based on the config file, the available source color spaces are listed here.
The content of this list is based solely on the loaded profile and hence can vary immensely. If no other
OCIO config file is loaded, the DefaultConfig.ocio file in Fusion’s LUTs directory is used to populate
this menu.
Output Space
Based on the config file, the available output color spaces are listed here.
The content of this list is based solely on the loaded profile and hence can vary immensely. If no other
OCIO config file is loaded, the DefaultConfig.ocio file in Fusion’s LUTs directory is used to populate
this menu.
Look
Installed OCIO Color Transform Looks appear in this menu. If no looks are installed, this menu has only
None listed as an option.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Generally, the OCIO color pipeline is composed from a set of color transformations defined by OCIO-
specific config files, commonly named with a “.ocio” extension. These config files allow you to share
color settings within or between facilities. The path to the config file to be used is normally specified
by a user-created environment variable called “OCIO,” though some tools allow overriding this. If no
other *.ocio config files are located, the DefaultConfig.ocio file in Fusion’s LUTs directory is used.
For in-depth documentation of the format’s internals, please refer to the official pages on
opencolorio.org.
The functionality of the OCIO File Transform node is also available as a View LUT node from the
View LUT menu.
Inputs
The OCIO File Transform node includes two inputs: one for the main image and the other for an effect
mask to limit the area where the color space conversion is applied.
– Input: This orange input is the only required connection. It connects a 2D image for the LUT.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the applied LUT to only those pixels within the mask. An effect mask is applied to
the tool after it is processed.
Controls Tab
The Controls tab for the OCIO File Transform node includes options to import the LUT, invert the
transform, and select the color interpolation method.
LUT File
Displays a File > Open dialog to load the desired LUT.
CCC ID
This is the ID key used to identify the specific file transform located within the ASC CDL color
correction XML file.
Direction
Toggles between Forward and Reverse. Forward applies the corrections specified in the node, while
Reverse tries to remove those corrections. Keep in mind that not every color correction can be
undone. Imagine that all slope values have been set to 0.0, resulting in a fully black image. Reversing
that operation is not possible, neither mathematically nor visually.
Interpolation
Allows the user to select the color interpolation to achieve the best quality/render time ratio. Nearest is
the fastest interpolation, while Best is the slowest.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
NOTE: Position the mouse pointer in a black area outside the raster to view the RGB canvas
color in the status bar at the bottom left of the Fusion window.
Inputs
The Set Canvas Color node includes two inputs: one for the main image and a second for a
foreground.
– Input: This orange input is the only required connection. It accepts a 2D image that reveals the
canvas color if the image’s DoD is smaller than the raster.
– Foreground: The optional green foreground input allows the canvas color to be sampled from
an image connected to this input.
The Set Canvas Color node is often used for adjusting keys. In the example above, the Luma Keyer is
extracting a key, and therefore assigns the area outside the DoD, which is black, as an opaque
foreground. If the element is scaled down and composited, you do not see the background. To correct
this, insert a Set Canvas Color node before the keyed element is placed in the composite. For example, LumaKey
> Set Canvas Color > Transform > Merge.
Controls Tab
The Controls tab for the Set Canvas Color is used for simple color selection. When the green
foreground is connected, the tab is empty.
Color Picker
Use these controls to adjust the Color and the Alpha value for the image’s canvas. It defaults to black
with zero Alpha.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
IMPORTANT
When picking neutral colors using the Custom method, make sure you are picking from the
source image, not the results of the White Balance node. This ensures that the image doesn’t
change while you are still picking, and that the White Balance node gets an accurate idea of
the original colors it needs to correct.
Inspector
Method
The White Balance node can operate using one of two methods: a Custom method or a color
Temperature method.
– Custom: The Custom method requires the selection of a pixel from the scene that should have
been pure gray. The node uses this information to calculate the color correction required to
convert the pixel so that it actually is gray. When the correction is applied without an effect
mask connected and the Lock Black/Mid/White checkbox enabled, the node white balances the
entire shot.
– Temperature: The color Temperature method requires that the actual color temperature of the
shot be specified.
Lock Black/Mid/White
This checkbox locks the Black, Midtones, and White points together so that the entire image is
affected equally. Unchecking the control provides individual controls for white balancing each range
separately. This control affects both methods equally.
Black/Mid/White Reference
These controls appear only if the Custom method is selected. They are used to select a color from a
pixel in the source image. The White Balance node color corrects the image so that the selected color
is transformed to the color set in the Result Color Picker below. Generally, this is gray. A color that is
supposed to be pure gray but is not truly gray for one reason or another should be selected.
If the Lock Black/Mid/White checkbox is deselected, different references can be selected for each
color range.
When selecting pixels for the black and white references, avoid pixels that are clipped in any of the
color channels. In the high end, for example, a light pink pixel with values of 255, 240, 240 is
saturated/clipped in the red channel, although the color is not white. Similarly, a really dark
blue-gray pixel might be 0, 2, 10; it is clipped in the red channel as well, although it is not black.
Neither example would be a good choice as a reference pixel because there would not be enough
headroom left for the White Balance node.
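The heart of the Custom method can be sketched as computing per-channel gains that map the chosen reference color onto the result color. This is a simplification in Python; the node itself also blends the correction across the shadow, midtone, and highlight ranges:

```python
def white_balance_gains(reference, result=(0.5, 0.5, 0.5)):
    # Per-channel gain that maps the reference color onto the target color
    # (usually midrange gray). Guards against division by zero for fully
    # black reference channels.
    return tuple(t / max(s, 1e-6) for s, t in zip(reference, result))

def apply_gains(pixel, gains):
    # Apply the per-channel gains to an (R, G, B) pixel in the 0-1 range.
    return tuple(c * g for c, g in zip(pixel, gains))

# A slightly warm pixel that should have been neutral gray:
gains = white_balance_gains((0.55, 0.50, 0.45))
balanced = apply_gains((0.55, 0.50, 0.45), gains)
print(balanced)
```

This also illustrates why clipped reference pixels are a poor choice: a channel pinned at its maximum no longer carries the true color cast, so the computed gain for that channel is wrong.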
Black/Mid/White Result
These controls appear only if the Custom method is selected. They are used to select the color that
the node uses to balance the reference color. This generally defaults to pure, midrange gray.
If the Lock Black/Mid/White checkbox is deselected, different results can be selected for each
color range.
Temperature Reference
When the Method menu is set to Temperature, the Temperature reference control is used to set the
color temperature of the source image. If the Lock Black/Mid/White checkbox is deselected, different
references can be selected for each color range.
Temperature Result
Use this control to set the target color temperature for the image. If the Lock Black/Mid/White
checkbox is deselected, different results can be selected for each color range.
Use Gamma
This checkbox selects whether the node takes the gamma of the image into account when applying
the correction, using the default gamma of the color space selected in the Space menu at the top
of the tab.
Ranges Tab
The Ranges tab can be used to customize the range of pixels in the image considered to be shadows,
midtones, and highlights by the node.
Spline Display
The ranges are selected by manipulating the spline handles. There are four spline points, each with
one Bézier handle. The two handles at the top represent the start of the shadow and highlight ranges,
whereas the two at the bottom represent the end of the range. The Bézier handles are used to control
the falloff.
The midtones range has no specific controls since its range is understood to be the space between
the shadow and the highlight ranges.
The X and Y text controls below the Spline display can be used to enter precise positions for the
selected Bézier point or handle.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Color category. The Settings
controls are even found on third-party color type plug-in tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options that are also
covered here.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels not included in the mask (i.e., set to 0) to become black/
transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels” in the Fusion Reference Manual or Chapter 79 in the DaVinci Resolve
Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Motion Blur
– Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
– Quality: Quality determines the number of samples used to create the blur. A quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
– Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one whole frame exposure. Higher values are possible
and can be used to create interesting effects.
– Center Bias: Center Bias modifies the position of the center of the motion blur. This allows the
creation of motion trail effects.
– Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
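As a rough sketch of how these controls interact, the sample times for the virtual shutter might be computed like this. Fusion's actual sampling and weighting are internal; the function below is only illustrative:

```python
def motion_blur_sample_times(quality, shutter_angle, center_bias=0.0):
    # A quality of Q yields Q samples on either side of the object's actual
    # position (2*Q total); a 360-degree shutter spans one full frame.
    # Center Bias shifts the sample window to create motion-trail effects.
    shutter = shutter_angle / 360.0          # exposure length in frames
    half = shutter / 2.0
    n = 2 * quality                          # samples on both sides
    times = [-half + shutter * (i + 0.5) / n for i in range(n)]
    return [t + center_bias * half for t in times]

# Quality 2 with a 180-degree shutter: four samples inside a half-frame window.
print(motion_blur_sample_times(quality=2, shutter_angle=180.0))
```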
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower-left corner of the node when the full tile is
displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Composite Nodes
This chapter details the Dissolve and Merge nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Dissolve [DX] 893
Merge [MRG] 895
The Common Controls 903
Inputs
The Dissolve node provides three image inputs, all of which are optional:
– Background: The first of two images you want to switch between or mix. Unlike most
other nodes, it is unnecessary to connect the background input before connecting the
foreground input.
– Foreground: The second of two images you want to switch between or mix. The Dissolve node
works best when both foreground and background inputs are connected to images with the
same resolution.
– Gradient Map: (Optional) The Gradient Map is required only when Gradient Wipe is selected.
Resolution Handling
It is recommended to make sure that all images connected to the foreground, background, and
gradient map inputs of the Dissolve node have the same resolution and the same pixel aspect. This is
not required, however; the result when you mix resolutions depends on how you set the Background/
Foreground slider.
– If the input images are different sizes, but the Foreground/Background slider is set to full
Foreground (all the way to the right) or full Background (all the way to the left), then the output
resolution will be identical to the image resolution of the corresponding node input.
– If input images of different sizes are mixed by setting the Background/Foreground slider
somewhere between, the output resolution will be set to the larger of the two input resolutions
to make sure there’s enough room to contain both images. In this case, you may experience
undesirable resolution changes when the slider moves from full foreground or background to
somewhere in between.
For example, if you try to dissolve between a 4K image (connected to the background) and an 8K
image (connected to the foreground), the output of the Dissolve node will be 4K when the slider is
set to full Background, but will suddenly jump to 8K when set to full Foreground or when mixed
somewhere between the foreground and background.
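The resolution logic described above can be sketched as a small function. Comparing sizes by pixel count is an assumption for illustration; the manual only states that the larger of the two resolutions is used:

```python
def dissolve_output_size(bg_size, fg_size, mix):
    # mix is the Background/Foreground slider: 0.0 = full background,
    # 1.0 = full foreground. At the extremes, the corresponding input's
    # size wins; anywhere in between, the larger input is used so there
    # is enough room to contain both images.
    if mix <= 0.0:
        return bg_size
    if mix >= 1.0:
        return fg_size
    return max(bg_size, fg_size, key=lambda s: s[0] * s[1])

# 4K background dissolving to an 8K foreground:
print(dissolve_output_size((3840, 2160), (7680, 4320), 0.0))  # full background
print(dissolve_output_size((3840, 2160), (7680, 4320), 0.5))  # mixed
```

Note the jump in output size as soon as the slider leaves the full-background position, which is exactly the undesirable resolution change the example above warns about.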
Inspector
Dissolve controls
Controls Tab
These are the main controls that govern the Dissolve node’s behavior.
– Operation Pop-Up: The Operation menu contains seven different methods for mixing
the Foreground and Background inputs. The two images are mixed using the value of the
Background/Foreground slider to determine the percentage each image contributes.
– Dissolve: The standard Dissolve mode is the equivalent of a cross dissolve: one clip fades out
as another clip fades in.
– Additive Dissolve: Similar in look to a standard film dissolve, an Additive dissolve adds the
second clip and then fades out the first one.
– Erode: The Erode method transitions between the two images by growing the darkest areas of
the background image to reveal the foreground image. The effect appears similar to a filmstrip
burning out.
– Random Dissolve: A randomly generated dot pattern is used to perform the mix of the images.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in both the Dissolve and Merge nodes. These
common controls are described in detail at the end of this chapter in “The Common Controls” section.
Merge [MRG]
Inputs
The Merge node provides three image inputs, all of which are optional:
– Background: The orange background input is for the first of two images you want to composite
together. You should connect the background input before connecting the foreground input. If
you connect an image to the background without connecting anything to the foreground input,
the Merge node will output the background image.
– Foreground: The green foreground input is for the second of two images you want to
composite together, which is typically a foreground subject that should be in front of the
background. If you connect an image to the foreground input without connecting anything to
the background input first, the Merge node won’t output anything.
– Effect Mask: (Optional) The effect mask input lets you mask a limited area of the output image
to be merged where the mask is white (where the foreground image shows in front of the
background), letting the background image show through by itself where the mask is black.
Resolution Handling
While you can connect images of any resolution to the background and foreground inputs of the
Merge node, the image that’s connected to the background input determines the resolution of
the output.
TIP: If you want to change the resolution of the image connected to the background, you can
use the Crop node to change the “canvas” resolution of the image without changing the size
of the original image, or you can use the Resize node to change both the resolution and the
size of the image.
Merge Tab
The Merge tab contains most of the controls necessary for customizing most merge operations.
x = 1, y = 1-[foreground Alpha]
– In: The In mode multiplies the Alpha channel of the background input against the pixels in
the foreground. The color channels of the foreground input are ignored. Only pixels from the
foreground are seen in the final output. This essentially clips the foreground using the mask
from the background.
x = [background Alpha], y = 0
– Held Out: Held Out is essentially the opposite of the In operation. The pixels in the foreground
image are multiplied against the inverted Alpha channel of the background image. You can
accomplish exactly the same result using the In operation with a Matte Control node to invert the
matte channel of the background image.
x = 1-[background Alpha], y = 0
– Atop: Atop places the foreground over the background only where the background
has a matte.
– XOr: XOr combines the foreground with the background wherever either the foreground or the
background has a matte, but never where both have a matte.
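The x/y formulas shown for these operators plug into a common template for premultiplied compositing, out = fg * x + bg * y, evaluated per channel. A minimal per-pixel sketch; the In and Held Out coefficients come from the formulas above, while the Over, Atop, and XOr coefficients follow the standard Porter-Duff algebra and are included for illustration:

```python
def merge_pixel(fg, bg, op):
    # fg and bg are premultiplied (r, g, b, a) pixels.
    fa, ba = fg[3], bg[3]
    coeffs = {
        "Over":     (1.0,      1.0 - fa),
        "In":       (ba,       0.0),
        "Held Out": (1.0 - ba, 0.0),
        "Atop":     (ba,       1.0 - fa),
        "XOr":      (1.0 - ba, 1.0 - fa),
    }
    x, y = coeffs[op]
    # out = fg * x + bg * y, applied to every channel including alpha.
    return tuple(f * x + b * y for f, b in zip(fg, bg))

fg = (0.4, 0.2, 0.1, 0.5)
bg = (0.1, 0.3, 0.6, 1.0)
print(merge_pixel(fg, bg, "In"))        # foreground clipped by the bg matte
print(merge_pixel(fg, bg, "Held Out"))  # opposite of In
```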
– Subtractive/Additive slider: This slider controls whether Fusion performs an Additive merge, a
Subtractive merge, or a blend of both. This slider defaults to Additive merging for most operations,
assuming the input images are premultiplied (which is usually the case). If you don’t understand
the difference between Additive and Subtractive merging, here’s a quick explanation.
– An Additive merge is necessary when the foreground image is premultiplied, meaning that the
pixels in the color channels have been multiplied by the pixels in the Alpha channel. The result
is that transparent pixels are always black, since any number multiplied by 0 always equals 0.
This obscures the background (by multiplying with the inverse of the foreground Alpha), and
then simply adds the pixels from the foreground.
– Alpha Gain slider: Alpha Gain linearly scales the values of the foreground’s Alpha channel. In
Subtractive merges, this controls the density of the composite, similarly to Blend. In Additive
merges, this effectively reduces the amount that the background is obscured, thus brightening the
overall result. In an Additive merge with Alpha Gain set to 0.0, the foreground pixels are simply
added to the background.
– Burn In slider: The Burn In control adjusts the amount of Alpha used to darken the background,
without affecting the amount of foreground added in. At 0.0, the merge behaves like a straight
Alpha blend, whereas at 1.0, the foreground is effectively added onto the background (after Alpha
multiplication if in Subtractive mode). This gives the effect of the foreground image brightening
the background image, as with Alpha Gain. For Additive merges, increasing the Burn In gives an
identical result to decreasing Alpha Gain.
– Blend slider: This is a cloned instance of the Blend slider in the Common Controls tab. Changes
made to this control are simultaneously made to the one in the common controls. The Blend slider
mixes the result of the node with its input, blending back the effect at any value less than 1.0. In
this case, it will blend the background with the merged result.
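The interaction between the Subtractive/Additive slider and premultiplication can be sketched per pixel. This is an illustrative simplification of an Over merge that ignores Alpha Gain and Burn In:

```python
def merge_over(fg, bg, subtractive_additive=1.0):
    # fg and bg are (r, g, b, a). At 1.0 (Additive) the foreground is
    # assumed premultiplied and is simply added over the obscured
    # background; at 0.0 (Subtractive) the foreground color is multiplied
    # by its alpha first. In-between values blend the two behaviors.
    fa = fg[3]
    out = []
    for f, b in zip(fg[:3], bg[:3]):
        additive = b * (1.0 - fa) + f
        subtractive = b * (1.0 - fa) + f * fa
        out.append(subtractive + (additive - subtractive) * subtractive_additive)
    out.append(bg[3] * (1.0 - fa) + fa)  # resulting alpha
    return tuple(out)

# Premultiplied 50%-transparent gray over white, full Additive:
print(merge_over((0.25, 0.25, 0.25, 0.5), (1.0, 1.0, 1.0, 1.0), 1.0))
```

Running a premultiplied foreground through the Subtractive setting multiplies by alpha a second time, which is why choosing the wrong mode darkens semitransparent edges.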
Additional Controls
The remaining controls let you fine-tune the results of the above settings.
– Filter Method: For input images that are being resized, this setting lets you choose the filter
method used to interpolate image pixels when resizing clips. The default setting is Linear. Different
settings work better for different kinds of resizing. Most of these filters are useful only when
making an image larger. When shrinking images, it is common to use the Linear filter; however,
the Catmull-Rom filter will apply some sharpening to the results and may be useful for preserving
detail when scaling down an image.
– Nearest Neighbor: This skips or duplicates pixels as needed. This produces the fastest but
crudest results.
– Box: This is a simple interpolation resize of the image.
– Linear: This uses a simplistic filter, which produces relatively clean and fast results.
– Quadratic: This filter produces a nominal result. It offers a good compromise between speed
and quality.
– Cubic: This produces better results with continuous-tone images. If the images have fine detail
in them, the results may be blurrier than desired.
– Catmull-Rom: This produces good results with continuous-tone images that are resized down.
Produces sharp results with finely detailed images.
– Gaussian: This is very similar in speed and quality to Bi-Cubic.
Resize Filters from left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic,
Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel
– Edges Buttons: Four buttons let you choose how to handle the space around images that are
smaller than the current DoD of the canvas as defined by the resolution of the background image.
– Canvas: The area outside the frame is set to the current color/opacity of the canvas. If you want
to change this value, you can attach a Set Canvas Color node between the image connected to
the foreground input and the foreground input itself, using Set Canvas Color to choose a color
and/or transparency setting with which to fill the canvas.
– Wrap: Creates a “video wall” effect by duplicating the foreground image as a grid.
– Duplicate: Repeats the outermost pixels along the edge of the foreground image, stretching
them up, down, left, and right from each side to reach the edge of the DoD.
– Mirror: Similar to duplicate, except every other iteration of the foreground image is flipped and
flopped to create a repeating pattern.
– Invert Transform: Select the Invert Transform control to invert any position, rotation, or scaling
transformation. This option is useful when connecting the merge to the position of a tracker for
match moving.
– Flatten Transform: The Flatten Transform option prevents this node from concatenating its
transformation with subsequent nodes. The node may still concatenate transforms from its input,
but it will not concatenate its transformation with the node at its output.
– Reference Size: The controls under Reference Size do not directly affect the image. Instead, they
allow you to control how Fusion represents the position of the Merge node’s center.
Normally, coordinates are represented as values between 0 and 1, where 1 is a distance equal to
the full width or height of the image. This allows resolution independence, because the size of the
image can be changed without having to change the value of the center.
One disadvantage to this approach is that it complicates making pixel-accurate adjustments to an
image. To demonstrate, imagine an image that is 100 x 100 pixels in size. To move the center of the
foreground element to the right by 5 pixels, we would change the X value of the merge center
from 0.5, 0.5 to 0.55, 0.5. We know the change must be 0.05 because 5/100 = 0.05.
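The pixel-to-normalized conversion described above amounts to dividing the pixel offset by the reference size:

```python
def pixel_offset_to_normalized(center, offset_px, size):
    # center: current (x, y) center in 0-1 normalized coordinates.
    # offset_px: desired shift in pixels; size: reference (width, height).
    return (center[0] + offset_px[0] / size[0],
            center[1] + offset_px[1] / size[1])

# Move the center of a 100 x 100 image right by 5 pixels:
print(pixel_offset_to_normalized((0.5, 0.5), (5, 0), (100, 100)))  # (0.55, 0.5)
```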
Channels Tab
The Channels tab has controls that let the Merge node use Z-channels embedded within each image
to define what’s in front and what’s behind during a Merge operation. The following controls let you
customize the result.
– Perform Depth Merge: Off by default. When turned on, the Z-channel of both images will be used
to determine the composite order. Alpha channels are still used to define transparency, but the
values of the Z-Depth channels will determine the ordering of image elements, front to back. If a
Z-channel is not available for either image, the setting of this checkbox will be ignored, and no
depth compositing will take place. If Z-Depth channels are available, turning this checkbox off
disables their use within this operation.
– Foreground Z-Offset: This slider sets an offset applied to the foreground image’s Z value. Click
the Pick button to pick a value from a displayed image’s Z-channel, or enter a value using the slider
or input boxes. Raising the value causes the foreground image’s Z-channel to be offset further
away along the Z-axis, whereas lowering the value causes the foreground to move closer.
– Subtractive/Additive: When Z-compositing, it is possible for image pixels from the background to
be composited in the foreground of the output because the Z-buffer for that pixel is closer than the
Z of the foreground pixel. This slider controls whether these pixels are merged in an Additive or a
Subtractive mode, in exactly the same way as the comparable slider in the Merge tab.
When merged over a background of a different color, the original background will still be visible in
the semitransparent areas. An Additive merge will maintain the transparencies of the image but
will add their values to the background.
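A per-pixel sketch of depth merging, assuming smaller Z values are closer to the camera as in typical Z-buffers (the real node's handling of transparency and the Subtractive/Additive blend is more involved):

```python
def depth_merge_pixel(fg, bg, fg_z, bg_z, fg_z_offset=0.0):
    # fg and bg are premultiplied (r, g, b, a) pixels. The Foreground
    # Z-Offset pushes the foreground further away before the comparison.
    def over(near, far):
        a = near[3]
        return tuple(n + f * (1.0 - a) for n, f in zip(near, far))

    if fg_z + fg_z_offset <= bg_z:
        return over(fg, bg)   # foreground is closer: fg over bg
    return over(bg, fg)       # background pixel is closer: bg over fg
```

This is why a background pixel can end up composited in front of the output: its Z value, not its input, decides the ordering.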
Inspector
Settings Tab
The Settings tab in the Inspector can be found on both tools in the Composite category. The Settings
controls are even found on third-party Composite type plug-in tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options that are also
covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not in the mask (i.e., set to 0) to become black/
transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels” in the Fusion Reference Manual or Chapter 79 in the DaVinci Resolve
Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower-left corner of the node when the full tile is
displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Contents
Ambient Occlusion [SSAO] 907
Depth Blur [DBl] 910
Fog [Fog] 912
Shader [Shd] 914
Texture [Txr] 917
The Common Controls 919
Usage
The AO node rarely works out of the box, and requires some tweaking. The setup process involves
adjusting the Kernel Radius and Number Of Samples to get the desired effect.
The Kernel Radius depends on the natural “scale” of the scene. Initially, there might appear to be no
AO at all. In most cases, the Kernel Radius is too small or too big, and working values must be found.
Inputs
There are three inputs on the AO node. The standard effect mask is used to limit the AO effect. The
Input and Camera connections are required. If either of these is not supplied, the node does not
render an image on output.
– Input: This orange input accepts a 2D RGBA image, Z-Depth, and Normals.
– Camera: The green camera input can take either a 3D Scene or a 3D Camera that rendered the
2D image.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the Ambient Occlusion to only those pixels within the mask. An effects mask is
applied to the tool after the tool is processed.
Inspector
Controls Tab
The Controls tab includes all the main controls for compositing with AO. It controls the quality and
appearance of the effect.
Output Mode
– Color: Using the Color menu option combines the incoming image with Ambient Occlusion applied.
– AO: This option outputs the pure Ambient Occlusion as a grayscale image. White corresponds
to regions in the image that should be bright, while black corresponds to regions that should be
darker. This allows you to create a lighting equation by combining separate ambient/diffuse/
specular passes. Having the AO as a separate buffer allows creative freedom to combine the
passes in various ways.
Number of Samples
Increase the samples until artifacts in the AO pass disappear. Higher values can generate better
results but also increase render time.
Kernel Radius
The Kernel Radius controls the size of the filter kernel in 3D space. For each pixel, it controls how far
one searches in 3D space for occluders. The Filter Kernel should be adjusted manually for each
individual scene.
If made too small, nearby occluders can be missed. If made too large, the quality of the AO decreases
and the samples must be increased dramatically to get the quality back.
This value is dependent on the scene Z-depth. That means with huge Z values in the scene, the kernel
size must be large as well. With tiny Z values, a small kernel size like 0.1 should be sufficient.
Lift/Gamma/Tint
You can use the lift, gamma, and tint controls to adjust the AO for artistic effects.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Deep Pixel nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
TIP: Combining multiple AO passes with different kernel radii can produce better effects.
Inputs
The Depth Blur node includes three inputs: one for the main image, one for a blur image, and another
for an effect mask to limit the area where the depth blur is applied.
– Input: This orange input is the only required connection. It accepts a 2D image that includes a Z
channel. The Z channel is used to determine the blur amount in different regions of the image.
– Blur Image: If the Blur Image input is connected, channels from the image are used to control
the blur. This allows general 2D per-pixel blurring effects.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the depth blur to only those pixels within the mask. An effects mask is applied to the
tool after the tool is processed.
Inspector
Controls Tab
The Controls tab includes parameters for adjusting the amount of blur applied and the depth of the
blurred area. It also includes options for selecting channels other than the Z channel for the blur map.
Filter
This menu selects the filter used for the blur.
– Box: This applies a basic depth-based box blur effect to the image.
– Soften: This applies a depth-based general softening filter effect.
– Super Soften: This applies a depth-based high-quality softening filter effect.
Blur Channel
Select one of these options to determine the channel used to control the level of blur applied to each
pixel. The channel from the main image input is used, unless an image is connected to the node’s
green Blur Image input.
Lock X/Y
When toggled on, this control locks the X and Y Blur sliders together for symmetrical blurring.
Blur Size
This slider is used to set the strength of the horizontal and vertical blurring.
Focal Point
This control is visible only when the Blur channel menu is set to use the Z channel.
Use this control to select the distance of the simulated point of focus. Lowering the value causes the
Focal Point to be closer to the camera; raising the value causes the Focal Point to be farther away.
Z Scale
Scales the Z-buffer value by the selected amount. Raising the value causes the distances in the
Z-channel to expand. Lowering the value causes them to contract. This is useful for exaggerating the
depth effect. It can also be used to soften the boundaries of the blur. Some images with small depth
values may require the Z-scale to be set quite low, below 1.0.
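Taken together, the Focal Point, Z Scale, and Blur Size controls can be pictured as computing a per-pixel blur strength along these lines. Fusion's exact falloff is internal; this only shows how the controls interact:

```python
def depth_blur_amount(z, focal_point, z_scale=1.0, blur_size=1.0):
    # Pixels at the focal point stay sharp; blur grows with distance from
    # it. Z Scale expands or contracts the Z values before the comparison,
    # exaggerating or softening the depth effect.
    return abs(z * z_scale - focal_point) * blur_size

print(depth_blur_amount(z=10.0, focal_point=10.0))                  # in focus
print(depth_blur_amount(z=14.0, focal_point=10.0, blur_size=0.5))   # defocused
```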
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Deep Pixel nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Fog [Fog]
Inputs
The Fog node includes three inputs: one for the main image with a Z channel, one for a blur image,
and another for an effect mask to limit the area where the depth blur is applied.
– Input: This orange input is the only required connection. It accepts a 2D image that includes a Z
channel. The Z channel is used to determine the fog amount in different regions of the image.
– Blur Image: The green second image input connects an image that is used as the source of the
fog. If no image is provided, the fog consists of a single color. Generally, a noise map of some
sort is connected here.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the fog to only those pixels within the mask. An effects mask is applied to the tool
after the tool is processed.
Inspector
Fog controls
Controls Tab
The Controls tab includes parameters for adjusting the density and color of the fog.
Z Depth Scale
This option scales the Z-buffer values by the selected amount. Raising the value causes the distances
in the Z-channel to expand, whereas lowering the value causes the distances to contract. This is useful
for exaggerating the fog effect.
Fog Opacity
Use this control to adjust the opacity on all channels of the fog.
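A sketch of how the Z channel drives the fog blend. The near/far range used here is an assumption for illustration, and the real node can also take its fog color from the green Blur Image input (for example, a noise map):

```python
def fog_pixel(color, z, fog_color, opacity=1.0, z_scale=1.0,
              near=0.0, far=100.0):
    # The fog amount ramps from 0 at `near` to 1 at `far` after the Z
    # value is scaled; Fog Opacity caps the overall contribution.
    zs = z * z_scale
    amount = min(max((zs - near) / (far - near), 0.0), 1.0) * opacity
    return tuple(c + (f - c) * amount for c, f in zip(color, fog_color))

# A red pixel at the far plane is fully fogged; at the camera it is untouched.
print(fog_pixel((1.0, 0.0, 0.0), 100.0, (0.5, 0.5, 0.5)))
print(fog_pixel((1.0, 0.0, 0.0), 0.0, (0.5, 0.5, 0.5)))
```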
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Deep Pixel nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Shader [Shd]
Inputs
The Shader node includes three inputs: one for the main image with normal map channels, one for a
reflection map, and another for an effect mask to limit the area where the depth blur is applied.
– Input: This orange input is the only required connection. It accepts a 2D image that includes a
normals channel.
– Reflection Map Image: The green reflection map image input projects an image onto all
elements in the scene or to elements selected by the Object and Material ID channels in
the Common Controls. Reflection maps work best as 32-bit floating point, equirectangular
formatted images.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the shader to only those pixels within the mask. An effects mask is applied to the
tool after the tool is processed.
Inspector
Shader controls
Controls Tab
The Controls tab for the Shader node includes parameters for adjusting the overall surface reaction to
light sources. You can modify the ambient, diffuse, specular, and reflection properties of the image
connected to the orange image input.
Light Tab
The Light tab includes parameters for basic lighting brightness and reflections.
Ambient
Ambient controls the Ambient color present in the scene or the selected object. This is a base level of
light added to all pixels, even in completely shadowed areas.
Diffuse
This option controls the Diffuse color present in the scene or for the selected object. This is the normal
color of the object, reflected equally in all directions.
Specular
This option controls the Specular color present in the scene or for the selected object. This is the color
of the glossy highlights reflected toward the eye from a light source.
Reflection
This option controls the Reflection contribution in the scene or for the selected object. High levels
make objects appear mirrored, while low levels overlay subtle reflections giving a polished effect. It
has no effect if no reflection map is connected.
Equator Angle
Equator Angle controls the left to right angle of the light generated and mapped by the Shader node
for the scene or the selected object.
Polar Height
Polar Height controls the top to bottom angle of the light generated and mapped by the Shader node
for the scene or the selected object.
Shader Tab
The Shader tab is used to adjust the falloff of the Diffuse and Specular light and the tint color of the
specular highlight.
In and Out
These options are used to display and edit point values on the spline.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Deep Pixel nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Texture [Txr]
NOTE: Background pixels may have U and V values of 0.0, which set those pixels to the color
of the texture’s corner pixel. To restrict texturing to specific objects, use an effect mask based
on the Alpha of the object or its Object or Material ID channel.
Inputs
The Texture node includes three inputs: one for the main image with UV map channels, one for a
texture map image, and another for an effect mask to limit the area where the replacement texture
is applied.
– Input: This orange input accepts a 2D image that includes UV channels. If the UV channels are
not present in the image, this node has no effect.
– Texture: The green texture map input provides the texture that is wrapped around objects,
replacing the current texture.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the texture to only those pixels within the mask. An effects mask is applied to the
tool after the tool is processed.
A Texture node used to manipulate the texture coordinates and add texture to a Text 3D node
Inspector
Texture controls
Texture Tab
The Texture tab controls allow you to flip, swap, scale, and offset the UV texture image connected to
the texture input.
Swap UV
When this checkbox is selected, the U and V channels of the source image are swapped.
Rotate 90
The texture map image is rotated 90 degrees when this checkbox is enabled.
U and V Scale
These controls change the scaling of the U and V coordinates used to map the texture. Changing
these values effectively enlarges and shrinks the texture map as it is applied.
U and V Offset
Adjust these controls to offset the U and V coordinates. Changing the values causes the texture to
appear to move along the geometry of the object.
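Taken together, these controls amount to a simple transform of each pixel’s UV coordinates before the texture is sampled. The following Python sketch is purely illustrative; the function name, parameter names, and order of operations are assumptions, not Fusion’s implementation:

```python
def transform_uv(u, v, swap_uv=False, rotate_90=False,
                 u_scale=1.0, v_scale=1.0, u_offset=0.0, v_offset=0.0):
    """Illustrative UV transform: swap, rotate, scale, then offset."""
    if swap_uv:
        u, v = v, u
    if rotate_90:
        u, v = v, 1.0 - u  # rotate the coordinate frame 90 degrees
    # Scaling the coordinates effectively enlarges or shrinks the texture.
    u, v = u * u_scale, v * v_scale
    # Offsetting the coordinates slides the texture along the surface.
    return u + u_offset, v + v_offset
```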
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Deep Pixel category. The Settings
controls are even found on third-party Deep Pixel-type plug-in tools. The controls are consistent and
work the same way for each tool although some tools do include one or two individual options that are
also covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not in the mask (i.e., set to 0) to become black/
transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information, see Chapter 18, “Understanding Image Channels” in the Fusion Studio
Reference Manual or Chapter 79 in the DaVinci Resolve Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available, but falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower-left corner of the node when the full tile is
displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Effect Nodes
This chapter details the Effect nodes in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Duplicate [Dup] 923
Highlight [HIL] 929
Hot Spot [HOT] 931
Pseudo Color [PSCL] 937
Rays [CIR] 938
Shadow [SH] 940
Trails [TRLS] 942
TV [TV] 947
The Common Controls 950
Duplicate [Dup]
Inputs
The two inputs on the Duplicate node are used to connect a 2D image and an effect mask, which can
be used to limit the area where duplicated objects appear.
– Input: The orange input is used for the primary 2D image that is duplicated.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the duplicated
objects to appear only in those pixels within the mask. An effects mask is applied to the tool after
the tool is processed.
Duplicate controls
Controls Tab
The Controls tab includes all the parameters you can use to create, offset, and scale copies of the
object connected to the input on the node.
Copies
Use this slider to set the number of copies made. Each copy is a copy of the last copy. So, when set to
5, the parent is copied, then the copy is copied, then the copy of the copy is copied, and so on. This
allows for some interesting effects when transformations are applied to each copy using the
following controls.
Time Offset
Use the Time Offset slider to offset any animations that are applied to the original image by a set
amount per copy. For example, set the value to -1.0 and use a square set to rotate on the Y-axis as the
source. The first copy shows the animation from a frame earlier. The second copy shows animation
from a frame before that, and so forth. This can be used with great effect on textured planes, for
example, where successive frames of a clip can be shown.
Center
The X and Y Center controls set the offset position applied to each copy. An X offset of 1 would offset
each copy 1 unit along the X-axis from the last copy.
Pivot
The Pivot controls determine the position of the pivot point used when changing the size, position, or
angle of each copy. The pivot does not move with the original object or the duplicated array. To have
the pivot follow the array, you must modify the pivot controls.
Size
The Size control determines how much scaling to apply to each copy.
Angle
The Angle control sets the amount of Z rotation applied to each copy. The angle adjustment is linear
based on the location of the pivot point.
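Because each copy is derived from the last, the Center, Size, and Angle values compound from copy to copy. The following Python sketch illustrates that accumulation; the function and parameter names are illustrative, not Fusion’s internals:

```python
def duplicate_transforms(copies, center=(0.1, 0.0), size=0.9, angle=15.0):
    """Accumulate the offset, scale, and rotation applied to each copy."""
    x, y, scale, rotation = 0.0, 0.0, 1.0, 0.0
    results = []
    for _ in range(copies):
        # Each copy transforms the previous copy, so the values compound.
        x, y = x + center[0], y + center[1]
        scale *= size
        rotation += angle
        results.append((x, y, scale, rotation))
    return results
```

With the defaults above, the third copy has drifted 0.3 units in X, shrunk to about 73% of the original size, and rotated 45 degrees in total.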
Apply Mode
The Apply Mode setting determines the math used when blending or combining duplicated objects
that overlap.
– Normal: The default mode uses the foreground object’s Alpha channel as a mask to determine
which pixels are transparent and which are not. When this is active, another menu shows possible
operations, including Over, In, Held Out, Atop, and XOr.
– Screen: Screen blends the objects based on a multiplication of their color values. The Alpha
channel is ignored, and layer order becomes irrelevant. The resulting color is always lighter.
Screening with black leaves the color unchanged, whereas screening with white always produces
white. This effect creates a similar look to projecting several film frames onto the same surface.
When this is active, another menu shows possible operations, including Over, In, Held Out, Atop,
and XOr.
– Dissolve: Dissolve mixes overlapping objects. It uses a calculated average of the objects to
perform the mixture.
– Multiply: Multiplies the values of a color channel. This gives the appearance of darkening the
object as the values are scaled from 0 to 1. White has a value of 1, so the result would be the same.
Gray has a value of 0.5, so the result would be a darker object or, in other words, an object half as
bright.
– Overlay: Overlay multiplies or screens the color values of the foreground object, depending
on the color values of the object behind. Patterns or colors overlay the existing pixels while
preserving the highlights and shadows of the color values of the objects behind the foreground
objects. Objects behind other objects are not replaced but mixed with the front objects to reflect
the original lightness or darkness of the objects behind.
– Soft Light: Soft Light darkens or lightens the foreground object, depending on the color values of
the objects behind them. The effect is similar to shining a diffused spotlight on the image.
– Hard Light: Hard Light multiplies or screens the color values of the foreground object, depending
on the color values of the objects behind them. The effect is similar to shining a harsh spotlight on
the image.
– Color Dodge: Color Dodge uses the foreground object’s color values to brighten the objects
behind them. This is similar to the photographic practice of dodging by reducing the exposure of
an area of a print.
– Color Burn: Color Burn uses the foreground object’s color values to darken the objects behind
them. This is similar to the photographic practice of burning by increasing the exposure of an area
of a print.
– Darken: Darken looks at the color information in each channel and selects the object’s foreground
or background’s color value, whichever is darker, as the result color. Pixels lighter than the
blended colors are replaced, and pixels darker than the blended color do not change.
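Several of these Apply modes reduce to simple per-channel formulas. The sketch below shows the math commonly associated with a few of them, assuming color values normalized to the 0–1 range; it is offered for illustration and is not Fusion’s actual implementation:

```python
def blend(mode, fg, bg):
    """Per-channel blend math for a few Apply modes (values in 0-1)."""
    if mode == "Multiply":
        return fg * bg                         # darkens; white (1.0) is neutral
    if mode == "Screen":
        return 1.0 - (1.0 - fg) * (1.0 - bg)   # lightens; black (0.0) is neutral
    if mode == "Darken":
        return min(fg, bg)                     # keeps whichever value is darker
    if mode == "Overlay":
        # Multiply in the background's shadows, screen in its highlights.
        if bg < 0.5:
            return 2.0 * fg * bg
        return 1.0 - 2.0 * (1.0 - fg) * (1.0 - bg)
    raise ValueError(mode)
```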
Operator
This menu is used to select the Operation mode used when the duplicate objects overlap. Changing
the Operation mode changes how the overlapping objects are combined. This drop-down menu is
visible only when the Apply mode is set to Normal.
The formula used to combine pixels in the Duplicate node is always (fg object * x) + (bg object * y). The
different operations determine what x and y are, as shown in the description for each mode.
The Operator Modes are as follows:
– Over: The Over mode adds the foreground object to the background object by replacing the
pixels in the background with the pixels from the foreground wherever the foreground object’s
Alpha channel is greater than 0.
x = 1, y = 1 - [foreground object Alpha]
– In: The In mode multiplies the Alpha channel of the background object against the pixels in the
foreground object. The color channels of the foreground object are ignored. Only pixels from the
foreground object are seen in the final output. This essentially clips the foreground object using
the mask from the background object.
x = [background Alpha], y = 0
– Held Out: Held Out is essentially the opposite of the In operation. The pixels in the foreground
object are multiplied against the inverted Alpha channel of the background object.
x = 1 - [background Alpha], y = 0
– Atop: Atop places the foreground object over the background object only where the background
object has a matte.
x = [background Alpha], y = 1 - [foreground Alpha]
– XOr: XOr combines the foreground object with the background object wherever either the
foreground or the background have a matte, but never where both have a matte.
x = 1 - [background Alpha], y = 1 - [foreground Alpha]
Gain
The Gain RGB controls multiply the values of the image channel linearly. All pixels are multiplied by the
same factor, but the effect is larger on bright pixels and smaller on dark pixels. Black pixels are not
changed, since multiplying any number by 0 always equals 0.
Alpha Gain linearly scales the Alpha channel values of objects in front. This effectively reduces the
amount that the objects in the background are obscured, thus brightening the overall result. When the
Subtractive/Additive slider is set to Additive with Alpha Gain set to 0.0, the foreground pixels are
simply added to the background.
When the Subtractive/Additive slider is set to Subtractive, this control adjusts the density of the
composite, similar to the Blend control.
Burn In
The Burn In control adjusts the amount of Alpha used to darken the objects that fall behind other
objects, without affecting the amount of foreground objects added. At 0.0, the blending behaves like a
straight Alpha blend, in contrast to a setting of 1.0 where the objects in the front are effectively added
on to the objects in the back (after Alpha multiplication if in Subtractive mode). This gives the effect of
the foreground objects brightening the objects in the back, as with Alpha Gain. In fact, for Additive
blends, increasing the Burn In gives an identical result to decreasing Alpha Gain.
Blend
This blend control is different from the Blend slider in the Common Settings tab. Changes made to this
control apply the blend between objects. The Blend slider fades the results of the last object first, the
penultimate after that, and so on. The blending is divided between 0 and 1, with 1 meaning all objects
are fully opaque and 0 meaning only the original object is shown.
Merge Under
This checkbox reverses the layer order of the duplicated elements, making the last copy the
bottommost layer and the first copy the topmost layer.
Jitter Tab
Duplicate Jitter tab
Random Seed
The Random Seed slider and Reseed button are used to generate a random starting point for the
amount of jitter applied to the duplicated objects. Two Duplicate nodes with identical settings but
different random seeds produce two completely different results.
Center X and Y
Use these two controls to adjust the amount of variation in the X and Y position of the
duplicated objects.
Axis X and Y
Use these two controls to adjust the amount of variation in the rotational pivot center of the duplicated
objects. This affects only the additional jitter rotation, not the rotation produced by the Rotation
settings in the Controls tab.
X Size
Use this control to adjust the amount of variation in the Scale of the duplicated objects.
Angle
Use this dial to adjust the amount of variation in the Z rotation of the duplicated objects.
Gain
The Gain RGBA controls randomly multiply the values of the image channel linearly.
Blend
Changes made to this control randomize the blend between objects.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Highlight [HIL]
Inputs
There are three Inputs on the Highlight node: one for the image, one for the effects mask, and another
for a highlight mask.
– Input: The orange input is used for the primary 2D image that gets the highlight applied.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input restricts the highlight
to be within the pixels of the mask. An effects mask is applied to the tool after the tool is
processed.
– Highlight Mask: The Highlight node supports pre-masking using the white highlight mask input.
The image is filtered before the highlight is applied. The highlight is then merged back over the
original image. Unlike regular effect masks, it does not crop off highlights from source pixels
when the highlight extends past the edges of the mask.
Highlight controls
Controls Tab
The Controls tab includes parameters for the highlight style except for color, which is handled in the
Color Scale tab.
Curve
The Curve value changes the drop-off over the length of the highlight. Higher values cause the
brightness of the flares to drop off closer to the center of the highlight, whereas lower values drop off
farther from the center.
Length
This designates the length of the flares from the highlight.
Number of Points
This determines the number of flares emanating from the highlight.
Angle
Use this control to rotate the highlights.
Merge Over
When enabled, the effect is overlaid on the original image. When disabled, the output is the highlights
only. This is useful for downstream color correction of the highlights.
Alpha Scale
Moving the Alpha slider down makes highlight falloff more transparent.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Hot Spot [HOT]
Inputs
There are three inputs on the Hot Spot node: one for the image, one for the effects mask, and another
for an Occlusion image.
– Input: The required orange input is used for the primary 2D image that gets
the hot spot applied.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input restricts the hot
spot to be within the pixels of the mask. An effects mask is applied to the tool after the
tool is processed.
– Occlusion: The green Occlusion input accepts an image to provide the occlusion matte.
The matte is used to block the hot spot, causing it to “wink.” The white pixels in the image
occlude the hot spot. Gray pixels partially suppress the hot spot.
Inspector
Primary Strength
This control determines the brightness of the primary hot spot.
Aspect
This controls the aspect of the spot. A value of 1.0 produces a perfectly circular hot spot. Values above
1.0 elongate the circle horizontally, and values below 1.0 elongate the circle vertically.
Aspect Angle
This control can be used to rotate the primary hot spot.
Secondary Strength
This control determines the strength, which is to say the brightness, of the secondary hot spot. The
secondary hot spot is a reflection of the primary hot spot. It is always positioned on the opposite side
of the image from the primary hot spot.
Secondary Size
This determines the size of the secondary hot spot.
Occlude
This menu is used to select which channel of the image connected to the Hot Spot node’s Occlusion
input is used to provide the occlusion matte. Occlusion can be controlled from Alpha or R, G, or B
channels of any image connected to the Occlusion input on the node’s tile.
Lens Aberration
Aberration changes the shape and behavior of the primary and secondary hot spots.
– In and Out Modes: Elongates the shape of the hot spot into a flare. The hot spot stretches toward
the center when set to In mode and stretches toward the corners when set to Out mode.
– Flare In and Flare Out Modes: This option is a lens distortion effect that is controlled by the
movement of the lens effect. Flare In causes the effect to become more severe, the closer the hot
spot gets to the center. Flare Out causes the effect to increase as the hot spot gets closer to the
edges of the image.
– Lens: This mode emulates a round, ringed lens effect.
Aberration
The Aberration slider controls the overall strength of the lens aberration effect.
Color Tab
The Color tab is used to modify the color of the primary and secondary hot spots.
Color Mode
This menu allows you to choose between animated or static color modifications using the small curves
editor in the Inspector.
– None: The default None setting retains a static curve adjustment for the entire range.
– Animated Points: This setting allows the color curves in the spline area to be animated over time.
Once this option is selected, moving to the desired frame and making a change in the Spline
Editor sets a keyframe.
– Dissolve mode: Dissolve mode is mostly obsolete and is included for compatibility reasons only.
Mix Spline
The Mix spline is used to determine the influence that the Radial controls have along the radius of the
hot spot. The horizontal axis represents the position along the circle’s circumference, with 0 being 0
degrees and 1.0 being 360 degrees. The vertical axis represents the amount of the radial hot spot to
blend with the color hot spot. A value of 0 is all radial hot spot, while a value of 1.0 is all color hot spot.
NOTE: Right-clicking in the LUT Editor displays a contextual menu with options related to modifying
spline curves.
For more information on the LUT Editor, see Chapter 7, “Using Viewers” in the Fusion Studio Reference
Manual or Chapter 68 in the DaVinci Resolve Reference Manual.
Radial Tab
Radial On
This control enables the Radial splines. Otherwise, the radial matte created by the splines is not
applied to the hot spot, and the Mix spline in the color controls does not affect the hot spot.
Radial Mode
Similar to the Color mode menu, this menu allows you to choose between animated or static radial hot
spot modifications using the small curves editor in the Inspector.
Radial Repeat
This control repeats the effect of the radial splines by x number of times. For example, a repeat of 2.0
causes the spline to take effect between 0 and 180 degrees instead of 0 and 360, repeating the spline
between 180 and 360.
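The repeat can be pictured as a modulo mapping from the angle around the hot spot into the spline’s 0–1 range. A brief illustrative sketch, not the actual implementation:

```python
def spline_position(angle_degrees, repeat=1.0):
    """Map an angle (0-360) to a 0-1 spline position, repeated `repeat` times."""
    t = (angle_degrees / 360.0) * repeat
    return t % 1.0  # wrap so the spline repeats around the circle
```

With a repeat of 2.0, the spline plays out fully between 0 and 180 degrees and then again between 180 and 360.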
Length Angle
This control rotates the effect of the Radial Length spline around the circumference of the hot spot.
Density Angle
This control rotates the effect of the Radial Density spline around the circumference of the hot spot.
NOTE: Right-clicking in the spline area displays a contextual menu containing options related
to modifying spline curves.
A complete description of LUT Editor controls and options can be found in Chapter 45, “LUT Nodes.”
Element Strength
This determines the brightness of element reflections.
Element Size
This determines the size of element reflections.
Element Position
This determines the distance of element reflections from the axis. The axis is calculated as a line
between the hot spot position and the center of the image.
Element Type
Use this group of buttons to choose the shape and density of the element reflections. The presets
available are described below.
– Circular: This creates slightly soft-edged circular shaped reflections.
– Soft Circular: This creates very soft-edged circular shaped reflections.
– Circle: This creates a hard-edged circle shape.
– NGon Solid: This creates a filled polygon with a variable number of sides.
– NGon Star: This creates a very soft-edged star shape with a variable number of sides.
– NGon Shaded Out: This creates soft-edged circular shapes.
– NGon Shaded In: This creates a polygon with a variable number of sides, which has a very soft
reversed (dark center, bright radius) circle.
NGon Angle
This control is used to determine the angle of the NGon shapes.
NGon Sides
This control is used to determine the number of sides used when the Element Type is set to NGon Star,
NGon Shaded Out, or NGon Shaded In.
NGon Starriness
This control is used to bend polygons into star shapes. The higher the value, the more star-like
the shape.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inputs
There are two Inputs on the Pseudo Color node: one for an image and one for an effects mask.
– Input: The orange input is used for the primary 2D image that gets its color modified.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input restricts the
pseudo color to be within the pixels of the mask. An effects mask is applied to the tool after
the tool is processed.
Inspector
Color Checkbox
When enabled, the Pseudo Color node affects this color channel.
Wrap
When enabled, waveform values that exceed allowable parameter values are wrapped to the
opposite extreme.
Soft Edge
This slider determines the soft edge of color transition.
Waveform
This selects the type of waveform to be created by the generator. Four waveforms are available: Sine,
Triangle, Sawtooth, and Square.
Frequency
This controls the frequency of the waveform selected. Higher values increase the number of
occurrences of the variances.
Phase
This modifies the Phase of the waveform. Animating this control produces color cycling effects.
Mean
This determines the level of the waveform selected. Higher values increase the overall brightness of
the channel until the allowed maximum is reached.
Amplitude
Amplitude increases or decreases the overall power of the waveform.
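The waveform controls can be pictured as remapping each channel value through a periodic function shaped by Frequency, Phase, Mean, and Amplitude. The Python sketch below is one plausible formulation, offered for illustration only; Fusion’s exact formulas may differ:

```python
import math

def pseudo_color(value, waveform="Sine", frequency=1.0, phase=0.0,
                 mean=0.5, amplitude=0.5):
    """Remap a 0-1 channel value through a periodic waveform."""
    t = (value * frequency + phase) % 1.0
    if waveform == "Sine":
        w = math.sin(2.0 * math.pi * t)
    elif waveform == "Triangle":
        w = 4.0 * abs(t - 0.5) - 1.0
    elif waveform == "Sawtooth":
        w = 2.0 * t - 1.0
    elif waveform == "Square":
        w = 1.0 if t < 0.5 else -1.0
    else:
        raise ValueError(waveform)
    # Mean sets the overall level; Amplitude scales the waveform's power.
    return mean + amplitude * w
```

Animating the phase parameter in this sketch cycles the output values, which corresponds to the color cycling effect described for the Phase control.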
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in “The
Common Controls” section at the end of this chapter.
Rays [CIR]
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the rays.
Center X and Y
This coordinate control and related viewer crosshair set the center point for the light source.
Blend
Sets the percentage of the original image that’s blended with the light rays.
Decay
Sets the length of the light rays.
Weight
Sets the falloff of the light rays.
Exposure
Sets the intensity level of the light rays.
Threshold
Sets the luminance limit at which the light rays are produced.
Shadow [SH]
Inputs
The three inputs on the Shadow node are used to connect a 2D image that casts the shadow, a depth
map input, and an effect mask that can be used to limit the area where the shadow appears. Typically,
the output of the shadow is then merged over the actual background in the composite.
– Input: The orange input is used for the primary 2D image with Alpha channel that is the source
of the shadow.
– Depth: The green Depth map input takes a 2D image as its input and extracts a depth matte
from a selected channel. The light Position and Distance controls can then be used to modify
the appearance of the shadow based on depth.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the area where
the shadow appears. An effects mask is applied to the tool after the tool is processed.
NOTE: The Shadow node is designed to create simple 2D drop shadows. Use a Spot Light
node and an Image Plane 3D node for full 3D shadow casting.
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the shadow appearance.
Shadow Offset
This control sets the X and Y position of the shadow. When the Shadow node is selected, you can also
adjust the position of the Shadow Offset using the crosshair in the viewer.
Softness
Softness controls how blurry the shadow’s edges appear.
Shadow Color
Use this control to select the color of the shadow. The most realistic shadows are usually not totally
black and razor sharp.
Light Position
This control sets the position of the light relative to the shadow-casting object. The Light Position is
only taken into consideration when the Light Distance slider is not set to infinity (1.0).
Z Map Channel
This menu is used to select which color channel of the image connected to the node’s Depth Map
input is used to create the shadow’s depth map. Selections exist for the RGB and A, Luminance, and
Z-buffer channels.
Output
This menu determines if the output image contains the image with shadow applied or the shadow only.
The shadow only method is useful when color correction, perspective, or other effects need to be
applied to the resulting shadow before it is merged back with the object.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in “The
Common Controls” section at the end of this chapter.
Trails [TRLS]
Inputs
The two inputs on the Trails node are used to connect a 2D image and an effect mask that can be
used to limit the area where trails appear.
– Input: The orange input is used for the primary 2D image that receives the trails applied.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the area where
the trails effect appears. An effects mask is applied to the tool after the tool is processed.
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the trails.
Restart
This control clears the image buffer and displays a clean frame, without any of the ghosting effects.
Preroll
This makes the Trails node pre-render the effect by the number of frames on the slider.
Reset/Preroll on Render
When this checkbox is enabled, the Trails node resets itself when a preview or final render is initiated.
It pre-rolls the designated number of frames.
Lock RGBA
Deselect this checkbox to control the Gain of the color channels independently. This allows for
tinting of the Trails effect.
Gain
The Gain control affects the overall intensity and brightness of the image in the buffer. Lower values in
this parameter create a much shorter, fainter trail, whereas higher values create a longer, more
solid trail.
Rotate
The Rotate control rotates the image in the buffer before the current frame is merged into the effect.
The offset is compounded between each element of the trail. This is different than each element of the
trail rotating on its pivot point. The pivot remains over the original object.
Offset X/Y
These controls offset the image in the buffer before the current frame is merged into the effect.
Control is given over each axis independently. The offset is compounded between each element of
the trail.
Scale
The Scale control resizes the image in the buffer before the current frame is merged into the effect.
The size is compounded between each element of the trail.
Blur Size
The Blur Size control applies a blur to the trails in the buffer before the current frame is merged into
the effect. The blur is compounded between each element of the trail.
Apply Mode
The Apply Mode setting determines the math used when blending or combining the trailing objects
that overlap.
– Normal: The default mode uses the foreground object’s Alpha channel as a mask to determine
which pixels are transparent and which are not. When this is active, another menu shows possible
operations, including Over, In, Held Out, Atop, and XOr.
– Screen: Screen blends the objects based on a multiplication of their color values. The Alpha
channel is ignored, and layer order becomes irrelevant. The resulting color is always lighter.
Screening with black leaves the color unchanged, whereas screening with white always produces
white. This effect creates a similar look to projecting several film frames onto the same surface.
When this is active, another menu shows possible operations, including Over, In, Held Out, Atop,
and XOr.
– Dissolve: Dissolve mixes overlapping objects. It uses a calculated average of the objects to
perform the mixture.
Operator
This menu is used to select the Operation mode used when the trailing objects overlap. Changing the
Operation mode changes how the overlapping objects are combined to produce a result. This
drop-down menu is visible only when the Apply mode is set to Normal.
The formula used to combine pixels in the trails node is always (fg object * x) + (bg object * y).
The different operations determine what x and y are, as shown in the description for each mode.
– In: The In mode multiplies the Alpha channel of the background object against the pixels in the
foreground object. The color channels of the foreground object are ignored. Only pixels from the
foreground object are seen in the final output. This essentially clips the foreground object using
the mask from the background object.
x = [background Alpha], y = 0
– Held Out: Held Out is essentially the opposite of the In operation. The pixels in the foreground
object are multiplied against the inverted Alpha channel of the background object.
x = 1 - [background Alpha], y = 0
– Atop: Atop places the foreground object over the background object only where the background
object has a matte.
x = [background Alpha], y = 1 - [foreground Alpha]
– XOr: XOr combines the foreground object with the background object wherever either the
foreground or the background have a matte, but never where both have a matte.
x = 1 - [background Alpha], y = 1 - [foreground Alpha]
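The four Operator modes simply plug different values of x and y into the formula above. A hedged per-channel Python sketch (the names are illustrative, not Fusion's API):

```python
def apply_operator(fg, fg_alpha, bg, bg_alpha, mode):
    """Combine one color channel using (fg * x) + (bg * y),
    with x and y chosen by the Operator mode."""
    if mode == "In":
        x, y = bg_alpha, 0.0
    elif mode == "Held Out":
        x, y = 1.0 - bg_alpha, 0.0
    elif mode == "Atop":
        x, y = bg_alpha, 1.0 - fg_alpha
    elif mode == "XOr":
        x, y = 1.0 - bg_alpha, 1.0 - fg_alpha
    else:
        raise ValueError(f"unknown mode: {mode}")
    return fg * x + bg * y
```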
Subtractive/Additive
This slider controls whether Fusion performs an Additive composite, a Subtractive composite, or a
blend of both when the trailing objects overlap. This slider defaults to Additive assuming the input
image’s Alpha channel is premultiplied (which is usually the case). If you don’t understand the
difference between Additive and Subtractive compositing, below is a quick explanation.
NOTE: An Additive blend operation is necessary when the foreground image is premultiplied,
meaning that the pixels in the color channels have been multiplied by the pixels in the Alpha
channel. The result is that transparent pixels are always black since any number multiplied by
0 always equals 0. This obscures the background (by multiplying with the inverse of the
foreground Alpha), and then adds the pixels from the foreground.
A Subtractive blend operation is necessary if the foreground image is not premultiplied.
The compositing method is similar to an additive composite, but the foreground image is first
multiplied by its Alpha, to eliminate any background pixels outside the Alpha area.
Although the Additive/Subtractive option is often an either/or checkbox in other software, the
Trails node lets you blend between the Additive and Subtractive versions of the compositing
operation. This can be useful when dealing with problem edges that are too bright
or too dark.
For example, using Subtractive merging on a premultiplied image may result in darker edges,
whereas using Additive merging with a non-premultiplied image causes any non-black area
outside the foreground’s Alpha to be added to the result, thereby lightening the edges.
By blending between Additive and Subtractive, you can tweak the edge brightness to be just
right for your situation.
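As a hedged sketch of this blend for one color channel (assuming the slider runs from 0 = fully Subtractive to 1 = fully Additive, consistent with the Additive default described above):

```python
def merge_trails(fg, fg_alpha, bg, blend):
    """Blend between Subtractive and Additive compositing of one channel.

    The Additive form assumes fg is already premultiplied; the Subtractive
    form multiplies fg by its Alpha first. `blend` mixes the two results
    (0 = Subtractive, 1 = Additive). Illustrative only, not Fusion's code.
    """
    additive = fg + bg * (1.0 - fg_alpha)
    subtractive = fg * fg_alpha + bg * (1.0 - fg_alpha)
    return subtractive * (1.0 - blend) + additive * blend
```

Intermediate slider values interpolate between the two results, which is how the node lets you split the difference on problem edges.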
Burn In
The Burn In control adjusts the amount of Alpha used to darken the objects that trail under other
objects, without affecting the amount of foreground objects added. At 0.0, the blending behaves like a
straight Alpha blend. At 1.0, the objects in the front are effectively added onto the objects in the back
(after Alpha multiplication if in Subtractive mode). This gives the effect of the foreground objects
brightening the objects in the back, as with Alpha Gain. In fact, for Additive blends, increasing the
Burn In gives an identical result to decreasing Alpha Gain.
Merge Under
When enabled, the current image is placed under the generated trail, rather than the usual over-the-top
operation. The layer order of the trailing elements is also reversed, making the last trail the
topmost layer.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in “The
Common Controls” section at the end of this chapter.
TV [TV]
The TV node
TV Node Introduction
The TV node is a simple node designed to mimic some of the typical flaws seen in analog television
broadcasts and screens. This Fusion-specific node is mostly obsolete when using DaVinci Resolve
because of the more advanced Analog Damage ResolveFX.
Input
The two inputs on the TV node are used to connect a 2D image and an effect mask, which can be
used to limit the area where the TV effect appears.
– Input: The orange input is used for the primary 2D image that gets the TV distortion applied.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the area where
the TV effect appears. An effects mask is applied to the tool after the tool is processed.
Inspector
TV node controls
Controls Tab
The Controls tab is the first of three tabs used to customize the analog TV distortion. The Controls tab
modifies the scan lines and image distortion of the effect.
Scan Lines
This slider is used to emulate an interlaced look by dropping lines out of the image; dropped lines are
set to black with a transparent Alpha. A value of 1 (the default) drops every second line. A value of 2
shows one line, then drops the next two, and repeats. A value of zero turns off the effect.
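That dropping pattern can be sketched as follows (illustrative only; the line indexing is an assumption):

```python
def line_kept(y, scan_lines):
    """Return True if image line y survives the Scan Lines effect;
    dropped lines become transparent black. A value of 1 drops every
    second line, 2 keeps one line out of every three, and 0 or less
    disables the effect entirely."""
    v = int(scan_lines)
    if v <= 0:
        return True
    return y % (v + 1) == 0
```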
Horizontal
Use this slider to apply a simple Horizontal offset to the image.
Vertical
Use this slider to apply a simple Vertical offset to the image.
Skew
This slider is used to apply a diagonal offset to the image. Positive values skew the image to the top
left. Negative values skew the image to the top right. Pixels pushed off frame wrap around and
reappear on the other side of the image.
Amplitude
The Amplitude slider can be used to introduce smooth sine wave-type deformation to the edges of the
image. Higher values increase the intensity of the deformation. Use the Frequency control to
determine how often the distortion is repeated.
Offset
Use Offset to adjust the position of the sine wave, causing the deformation applied to the image via
the Amplitude and Frequency controls to move across the image.
Noise Tab
The Noise tab is the second of three tabs used to customize the analog TV distortion. The Noise tab
modifies the noise in the image to simulate a weak analog antenna signal.
Power
Increase the value of this slider above 0 to introduce noise into the image. The higher the value, the
stronger the noise.
Size
Use this slider to scale the noise map; higher values enlarge the noise pattern.
Random
If this thumbwheel control is set to 0, the noise map is static. Change the value over time to cause the
static to change from frame to frame.
Bar Strength
At the default value of 0, no bar is drawn. The higher the value, the darker the area covered by the
bar becomes.
Bar Size
Increase the value of this slider to make the bar taller.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in the
following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Effects category. The Settings
controls are even found on third-party Effects-type plug-in tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options, which are
also covered here.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels” in the Fusion Reference Manual or Chapter 79 in the
DaVinci Resolve Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Motion Blur
– Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
– Quality: Quality determines the number of samples used to create the blur. A quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
– Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
– Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
– Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Film Nodes
This chapter details the Film nodes in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Cineon Log [LOG] 954
Film Grain [FGR] 957
Grain [GRN] 960
Light Trim [LT] 963
Remove Noise [RN] 965
The Common Controls 967
Input
There are two Inputs on the Cineon Log node: one for the log image and one for the effects mask.
– Input: The orange input is used for the primary 2D image that gets the highlight applied.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input restricts the log
conversion to be within the pixels of the mask. An effects mask is applied to the tool after the
tool is processed.
Inspector
Depth
The Depth menu is used to select the color depth used to process the input image. The default option
is Auto. Auto determines the color depth based on the file format loaded. For example, JPEG files
automatically process at 8 bit because the JPEG file format does not store color depths greater than 8.
Blackmagic RAW files load at Float, etc. If the color depth of the format is undetermined, the default
depth defined in the Frame Format preferences is used.
Mode
The Mode menu offers two options: one for converting log images to linear and one for converting
linear images to logarithmic.
Log Type
The Log Type menu allows you to select the source of the file. Typically, you select the camera used to
create the image, although the Josh Pines option is specific to film scan workflows. This menu contains
the following camera log types:
– Cineon
– Arri Log C
– BMD Film
– Canon Log
– Nikon N Log
– Panalog
– Panasonic V-Log
– Red Log Film
– Sony S-Log
– Viper Film Stream
– ACESlog
Lock RGB
When enabled, the settings in this tab affect all color channels equally.
Disable this control to convert the red, green, and blue channels of the image using separate settings
for each channel.
Level
Use this range control to set the black level and white level in the log image before converting. The
left handle adjusts the black level, while the right handle adjusts the white level. Pixels with values in
log space below the black level become out-of-range values below 0.0. Pixels with values above the
white level become out-of-range values above 1.0 after conversion.
When processing in floating-point color space, both negative and high out-of-range values are
preserved. When using 16-bit or 8-bit mode, the out-of-range values are clipped.
Black Rolloff
Since a mathematical log() operation on a value of zero or lower results in invalid values,
Fusion clips values below 1e-38 (a 1 preceded by 37 zeros after the decimal point) to 0 to ensure correct results. This is
almost never an issue, since values that small have no visual impact on an image. To see such
tiny values, you would have to add three Brightness Contrast nodes, each with a gain set to
1,000,000. Even then, the values would hover very close to zero.
We have seen processes where instead of cropping these minimal values, they are instead
scaled. So values between 0.0 and 1e-16 are scaled between 1e-18 and 1e-16. The idea is to
crush the majority of the visual range in a float image into values very near to zero, then
expand them again, forcing a gentle ramp to produce a small ramp in the extreme black
values. Should you find yourself facing a color pipeline using this process, here is how you can
mimic it with the help of a Custom node.
The process involves converting the log image to linear with a very small gamma and a wider
than normal black level to white level (e.g., conversion gamma of 0.6, black of 10,
white of 1010). This crushes most of the image’s range into very small values. This is followed
by a Custom node (described below), and then by a linear to log conversion that reverses the
process but uses a slightly higher black level. The difference between the black levels defines
the falloff range.
Since this lifts the blacks, the image is usually then converted back to linear one more time,
using more traditional values (i.e., 95-685) to reset the black point.
The Custom node should use the following equation in the red, green, and blue expressions:
Falloff Comparison
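Since the exact expression appears only as an image in the original layout, here is a hedged Python sketch of the scaling described above, remapping values between 0.0 and 1e-16 into the range 1e-18 to 1e-16 (constants taken from the text; the real pipeline's formula may differ):

```python
def black_rolloff(v, lo=1e-18, hi=1e-16):
    """Remap tiny values into [lo, hi] instead of clipping them to zero,
    producing a gentle ramp in the extreme black values."""
    if v >= hi:
        return v          # normal values pass through untouched
    if v <= 0.0:
        return lo         # zero and negatives land at the new floor
    return lo + (v / hi) * (hi - lo)
```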
NOTE: Although more accurate, the Film Grain node does not replace the older Grain node,
which is still provided to allow older compositions to load and render, but in almost every
case, it is better to use the Film Grain node.
Input
There are two inputs on the Film Grain node: one for the image and one for the effects mask.
– Input: The orange input is used for the primary 2D image that gets the grain applied.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the grain to be
within the pixels of the mask. An effects mask is applied to the tool after the tool is processed.
Controls Tab
The Controls tab includes all the parameters for modifying the appearance of the film grain.
Complexity
The Complexity setting indicates the number of “layers” of grain applied to the image. With a
complexity of 1, only one grain layer is calculated and applied to the image. When complexity is set to
4, the node calculates four separate grain layers and applies the mean combined result of each pass
to the final image. Higher complexities produce visually more sophisticated results, without the
apparent regularity often perceivable in digitally-produced grain.
Alpha Multiply
When the Alpha Multiply checkbox is enabled, the Film Grain node multiplies its results by the source
image’s Alpha channel. This is necessary when working with post-multiplied images to ensure that the
grain does not affect areas of the image where the Alpha is 0.0 (transparent).
NOTE: Since it is impossible to say what the final value of semitransparent pixels in the image
is until after they are composited with their background, you should avoid applying
log-processed grain to elements until after they have been composited. This ensures that the
strength of the grain is accurate.
Log Processing
When this checkbox is enabled (default), the grain applied to the image has its intensity applied
nonlinearly to match the grain profile of most film. Roughly speaking, the intensity of the grain
increases exponentially from black to white. When this checkbox is disabled, the grain is applied
uniformly, regardless of the brightness of the affected pixel.
One of the primary features of grain in film is that its appearance varies radically with the
exposure: there appears to be minimal grain present in the blacks, with the amount and
deviation of the grain increasing as the pixel's exposure increases. In a film negative, the darkest
portions of the developed image appear entirely opaque, and this obscures the grain. As the negative
becomes progressively clearer, more of the grain becomes evident in the result. Chemical differences
in the response of the R, G, and B layers to light also cause each color component of the film to present
a different grain profile, typically with the blue channel presenting the most significant amount of grain.
Seed
The Seed slider and Reseed button are presented whenever a Fusion node relies on a random result.
Two nodes with the same seed values produce the same random results. Click on the Reseed button
to randomly select a new seed value, or adjust the slider to select a new seed value manually.
Time Lock
Enabling Time Lock stops the random seed from generating new grain on every frame.
Monochrome
When the Monochrome checkbox is enabled (default), the grain is applied to the red, green, and blue
color channels of the image equally. When deselected, individual control over the Size, Strength, and
Roughness of the grain in each channel becomes possible.
Size
The grain size is calculated relative to the size of a pixel. Consequently, changing the resolution of the
image does not impact the relative appearance of the grain. The default grain size of 1.0 produces
grain kernels that cover roughly 2 pixels.
Strength
Grain is expressed as a variation from the original color of a pixel. The stronger the grain's strength,
the wider the possible variation from the original pixel value. For example, given a pixel with an original
value of p, and a Grain node with complexity = 1, size = 1, roughness = 0, and log processing off, the grain
produces an output value of p +/- strength. In other words, a pixel with a value of 0.5 and a grain
strength of 0.02 could end up with a final value between 0.48 and 0.52.
Once again, that’s a slight oversimplification, especially when the complexity exceeds 1. Enabling the
Log Processing checkbox also causes that variation to be affected such that there is less variation in
the blacks and more variation in the whites of the image.
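The p +/- strength behavior can be sketched like this (simplified to the settings stated above; not the actual grain generator):

```python
import random

def grain_pixel(p, strength, rng):
    """Grain as a bounded random variation from the original pixel value
    (complexity = 1, size = 1, roughness = 0, log processing off).
    The result always lies within [p - strength, p + strength]."""
    return p + rng.uniform(-strength, strength)

rng = random.Random(1)
value = grain_pixel(0.5, 0.02, rng)   # lands between 0.48 and 0.52
```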
NOTE: When visualizing the effect of the grain on the image, the more mathematically
inclined may find it helps to picture a sine wave, where each lobe of the sine wave covers 1
pixel when the Grain Size is 1.0. The Grain Size controls the frequency of the sine wave, while
the Grain Strength controls its amplitude. Again, this is something of an oversimplification.
Roughness
The Roughness slider applies low frequency variation to give the impression of clumping in the grain.
Try setting the roughness to 0, and observe that the grain produced has a very even luminance
variation across the whole image. Increase the roughness to 1.0 and observe the presence of “cellular”
differences in the luminance variation.
Offset
The Offset control helps to match the intensity of the grain in the deep blacks by offsetting the values
before the intensity (strength) of the grain is calculated. So an offset of 0.1 would cause a pixel with a
value of 0.1 to receive grain as if its value was 0.2.
Processing Examples
Log Processing On
Grain [GRN]
A Grain node used to add grain back for a more realistic composite
Inspector
Grain controls
Controls Tab
The Controls tab includes all the parameters for modifying the appearance of the grain.
Power
This slider determines the strength of the grain. A higher value increases visibility, making the grain
more prevalent.
RGB Difference
Separate Red, Green, and Blue sliders are used to modify the strength of the effect on a per
channel basis.
Grain Size
This slider determines the size of the grain particles. Higher values increase the grain size.
Grain Spacing
This slider determines the density or amount of grain per area. Higher values cause the grain to
appear more spaced out.
Aspect Ratio
This slider adjusts the aspect of the grain so that it can be matched with anamorphic images.
Alpha-Multiply
When enabled, this checkbox multiplies the image by the Alpha, clearing the black areas of any
grain effect.
Spread Tab
The Spread tab uses curves for the red, green, and blue channels to control the amount of grain over
each channel’s tonal range.
RGB Checkboxes
The red, green, and blue checkboxes enable each channel’s custom curve, allowing you to control
how much grain appears in each channel. To mimic usual film responses, more grain would appear in
the blue channel than the red, and the green channel would receive the least. Right-clicking in the
spline area displays a contextual menu containing options related to modifying spline curves.
For more information on the LUT Editor’s controls see Chapter 106, “LUT Nodes” in the
DaVinci Resolve Reference Manual or Chapter 45 in the Fusion Reference Manual.
In and Out
This control provides direct editing of points on the curve by setting In/Out point values.
Bell-Shaped Spread
Setting a bell shape is often a good starting point to create a more realistic-looking grain.
Here we have a non-uniform distribution with different amounts of grain in the red, green, and
blue channels.
In both examples, the grain’s power has been exaggerated to show the effect a bit better.
Common Controls
Setting Tab
The Settings tab controls are common to all Film nodes, so their descriptions can be found in “The
Common Controls” section at the end of this chapter.
Inputs
There are two Inputs on the Light Trim node: one for the 2D image and one for the effects mask.
– Input: The orange input is used for the primary Log 2D image that gets its exposure adjusted.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the exposure
change to be within the pixels of the mask. An effects mask is applied to the tool after the tool
is processed.
Inspector
Controls Tab
The Controls tab includes a single slider that adjusts the exposure of the image.
Lock RGBA
When selected, the Lock RGBA control collapses control of all image channels into one slider. This
selection is on by default. To manipulate the various color channels independently, deselect
this checkbox.
Trim
This slider shifts the color in printing points, as used in film optical printing and lab printing.
8 points equals one stop of exposure.
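Taking "8 points equals one stop" at face value, the equivalent linear gain can be sketched as follows (an assumption of this sketch, since Trim itself shifts values in log space):

```python
def trim_points_to_gain(points):
    """Convert a Trim value in printing points to a linear exposure gain,
    assuming 8 points per stop and one stop = a factor of 2."""
    return 2.0 ** (points / 8.0)
```

For example, trim_points_to_gain(8) returns 2.0 (one stop brighter) and trim_points_to_gain(-8) returns 0.5 (one stop darker).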
Inputs
There are two inputs on the Remove Noise node: one for the 2D image and one for the effects mask.
– Input: The orange input is used for the primary 2D image that gets noise removed.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the noise
removal change to be within the pixels of the mask. An effects mask is applied to the tool after
the tool is processed.
Controls Tab
The Controls tab switches the noise removal between two methods: Color and Chroma. When the
Method is set to Color, the Controls tab adjusts the amount of blur and sharpness individually for each
RGB channel. When the Method is set to Chroma, the blur and sharpness is adjusted based on Luma
and Chroma controls.
Method
This menu is used to choose whether the node processes color using the Color or Chroma method.
This also gives you a different set of control sliders.
Lock
This checkbox links the Softness and Detail sliders of each channel together.
Common Controls
Settings Tab
The Settings tab controls are common to all Film nodes, so their descriptions can be found in the
following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Film category. The Settings
controls are even found on third-party Film-type plug-in tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options, which are
also covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Commonly, this causes the tool to skip processing entirely, copying the input straight to the output.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels not included in the mask (i.e., set to 0) to become black/
transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels” in the Fusion Studio Reference Manual or Chapter 79 in the DaVinci Resolve
Reference Manual.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is mostly important for nodes like Blur, which may require samples from portions of the image outside
the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
– Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
– None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Filter Nodes
This chapter details the Filter nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Create Bump Map [CBu] 971
Custom Filter Node [CFlt] 973
Erode Dilate Node [ErDl] 977
Filter Node [Fltr] 979
Rank Filter Node [RFlt] 981
The Common Controls 983
Input
The Create Bump Map node includes two inputs: one for the main image and the other for an effect
mask to limit the area where the bump map is created.
– Input: The orange input takes the RGBA channels from an image to calculate the bump map.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the creation of the bump map to only those pixels within the mask. An effects mask
is applied to the tool after the tool is processed.
A Create Bump Map node produces a bump map as an RGB image for further image processing
Controls Tab
The Controls tab contains all parameters for creating the bump map.
Filter Size
This menu sets the filter size for creating the bump map. You can set the filter size at 3 x 3 pixels or 5 x
5 pixels, thus determining the radius of the pixels sampled. The larger the size, the more time it takes
to render.
Height Source
The Height Source menu selects the channel for extracting the grayscale information.
Clamp Normal.Z
This slider clips the lower values of the blue channel in the resulting bump texture.
Wrap Mode
This menu determines how the image wraps at the borders, so the filter produces a correct result
when using seamless tiling textures.
Height Scale
The Height Scale menu modifies the contrast of the resulting values in the bump map. Increasing this
value yields a more visible bump map.
NOTE: The definitions below clarify some of the terminology used in the
Create Bump Map node and other similar types of nodes.
– Height Map: A grayscale image containing a height value per pixel.
– Bump Map: An image containing normals stored in the RGB channels used for modifying the
existing normals (usually given in tangent space).
– Normal Map: An image containing normals stored in the RGB channels used for replacing
the existing normals (usually given in tangent or object space).
Common Controls
Settings Tab
The Settings tab controls are common to all Filter nodes, so their descriptions can be found in “The
Common Controls” section at the end of this chapter.
Input
The Custom Filter node includes two inputs: one for the main image and the other for an effect mask
to limit the area where the custom filter is applied.
– Input: The orange input takes the RGBA channels from an image to calculate the custom filter.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the custom filter to only those pixels within the mask. An effects mask is applied to
the tool after the tool is processed.
Controls Tab
The Controls tab is used to set the filter size and then use the filter matrix to enter convolution
filter values.
Matrix Size
This menu is used to set the size of the filter at 3 x 3 pixels, 5 x 5 pixels, or 7 x 7 pixels, thus setting the
radius of the pixels sampled. The larger the size, the more time it takes to render.
Update Lock
When this control is selected, Fusion does not render the filter. This is useful for setting up each value
of the filter, and then turning off Update Lock and rendering the filter.
Filter Matrix
The Filter Matrix control is a 7 x 7 grid of text boxes where a number is entered to represent how much
influence each pixel has on the overall convolution filter. The text box in the center represents the
pixel that is processed by the filter. The text box to the left of the center represents the pixel to the
immediate left, and so forth.
The default Matrix size is 3 x 3. Only the pixels immediately adjacent to the current pixel are analyzed.
If a larger Matrix size is set, more of the text boxes in the grid are enabled for input.
Normalize
This controls the amount of filter normalization that is applied to the result. Zero gives a normalized
image. Positive values brighten or raise the level of the filter result. Negative values darken or lower
the level.
Floor Level
This adds or subtracts a minimum, or Floor Level, to the result of the filtered image. Zero does not add
anything to the image. Positive values add to the filtered image, and negative values subtract from
the image.
With the default identity matrix, the center pixel keeps its full value and receives zero effect from its neighboring pixels, so the resulting image is unchanged.
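To make the interaction of the filter matrix, Normalize, and Floor Level concrete, here is a minimal Python sketch of a convolution filter. It is an illustrative model only, not Fusion's actual implementation; in particular, the way Normalize adjusts the divisor here is an assumption for demonstration.

```python
def apply_convolution(image, kernel, normalize=0.0, floor=0.0):
    """Illustrative convolution sketch (not Fusion's implementation).

    Each output pixel is the weighted sum of its neighborhood, divided
    by the kernel sum (nudged by `normalize` so positive values brighten
    and negative values darken), then offset by `floor`.
    """
    h, w = len(image), len(image[0])
    k = len(kernel)                       # kernel is k x k, with k odd
    r = k // 2
    divisor = sum(sum(row) for row in kernel) or 1.0  # avoid divide by zero
    if divisor > 0:
        divisor *= (1.0 - normalize)      # assumed mapping for Normalize
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(k):
                for i in range(k):
                    yy = min(max(y + j - r, 0), h - 1)  # clamp at borders
                    xx = min(max(x + i - r, 0), w - 1)
                    acc += image[yy][xx] * kernel[j][i]
            out[y][x] = acc / divisor + floor
    return out
```

With the identity matrix, the image passes through unchanged; a matrix of all 1s averages each neighborhood, producing the softening effect described below.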
Original image
Softening Example
A slight softening effect would be...
1 1 1
1 1 1
1 1 1
Emboss Example
The example below subtracts five times the value from the top left and adds five times
the value from the lower right.
-5 0 0
0 1 0
0 0 5
If parts of the processed image are very smooth in color, the neighboring values are
very similar.
A Custom Filter adding and subtracting neighboring pixels to create an embossed image
Exposure Example
Using the values...
1 1 1
1 1 1
1 1 1
...and adjusting Normalize to a positive value brightens the image, creating a glow that simulates
film overexposure.
Relief Example
Using the values...
-1 0 0
0 0 0
0 0 1
... and adjusting Floor Level to a positive value creates a Relief filter.
Inputs
The Erode Dilate node includes two inputs: one for the main image and the other for an effect mask to
limit the area where the erode or dilate is applied.
– Input: The orange input takes the RGBA channels from an image to calculate the erode or dilate.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the erode or dilate to only those pixels within the mask. An effects mask is applied
to the tool after the tool is processed.
Controls Tab
The Controls tab includes the main Amount slider that determines whether you are performing an
erode by entering a negative value or a dilate by entering a positive value.
Lock X/Y
The Lock X/Y checkbox is used to separate the Amount slider into amount X and amount Y, allowing a
different value for the effect on each axis.
Amount
A negative value for Amount causes the image to erode. Eroding simulates the effect of an
underexposed frame, shrinking the image by growing darker areas of the image so that they eat away
at brighter regions.
A positive value for Amount causes the image to dilate, similar to the effect of overexposing a camera.
Regions of high luminance and brightness grow, eating away at the darker regions of the image. Both
techniques eradicate fine detail in the image and tend to posterize fine gradients.
The Amount slider scale is based on the input image width. An amount value of 1 = image width. So, if
you want to erode or dilate by exactly 1 pixel on an HD image, you would enter 1/1920, or 0.00052083.
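The amount-to-pixels math can be sketched as a simple minimum/maximum filter in Python. This is an illustration of the concept, not Fusion's implementation; the square neighborhood and rounding are assumptions.

```python
def erode_dilate(image, amount, width=1920):
    """Illustrative erode/dilate sketch (not Fusion's implementation).

    A negative amount erodes (minimum filter, dark areas grow); a
    positive amount dilates (maximum filter, bright areas grow). The
    amount is a fraction of image width, so 1/width is exactly 1 pixel.
    """
    pixels = round(abs(amount) * width)   # normalized amount -> pixel radius
    pick = min if amount < 0 else max
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = pick(
                image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-pixels, pixels + 1)
                for dx in range(-pixels, pixels + 1)
            )
    return out
```

On a 1920-pixel-wide HD frame, an Amount of 1/1920 (about 0.00052083) therefore gives a radius of exactly one pixel.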
Common Controls
Settings Tab
The Settings tab controls are common to all Filter nodes, so their descriptions can be found in “The
Common Controls” section at the end of this chapter.
Inputs
The Filter node includes two inputs: one for the main image and the other for an effect mask to limit
the area where the filter is applied.
– Input: The orange input is used for the primary 2D image that gets the filter applied.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the filter to only those pixels within the mask. An effects mask is applied to the tool
after the tool is processed.
Filter controls
Controls Tab
The Controls tab is used to set the filter type, the channels the filter is applied to, and the amount it
blends with the original image.
Filter Type
The Filter Type menu provides a selection of filter types described below.
– Relief: This appears to press the image into metal, such as an image on a coin. The image appears
to be bumped and overlaid on gray.
– Emboss Over: Embosses the image over the top of itself, with adjustable highlight and shadow
height and direction.
– Noise: Uniformly adds noise to images. This is often useful for 3D computer-generated images
that need to be composited with live action, as it reduces the squeaky-clean look that is inherent
in rendered images. The frame number acts as the random generator seed. Therefore, the effect
is different on each frame and is repeatable.
– Defocus: This filter type blurs the image.
– Sobel: Sobel is an advanced edge detection filter. Used in conjunction with a Glow filter, it creates
impressive neon light effects from live-action or 3D-rendered images.
– Laplacian: Laplacian is a very sensitive edge detection filter that produces a finer edge than the
Sobel filter.
– Grain: Adds noise to images similar to the grain of film (mostly in the midrange). This is useful for
3D computer-generated images that need to be composited with live action as it reduces the
squeaky-clean look that is inherent in rendered images. The frame number acts as the random
generator seed. Therefore, the effect is different on each frame and is repeatable.
Power
Values range from 1 to 10. Power proportionately increases the amount by which the selected filter
affects the image. This does not apply to the Sobel or Laplacian filter type.
Median
Depending on which Filter Type is selected, the Median control may appear. It varies the Median
filter’s effect. A value of 0.5 produces the true median result, as it finds the middle values. A value of
0.0 finds the minimums, and 1.0 finds the maximums. This applies to the Median setting only.
Seed
This control is visible only when applying the Grain or Noise filter types. The Seed slider can be used
to ensure that the random elements of the effect are seeded with a consistent value. The randomizer
always produces the same result, given the same seed value.
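The behavior described here, combining the frame number with a fixed seed so the noise varies per frame yet re-renders identically, can be sketched in Python. This is an illustration of the idea, not Fusion's actual generator.

```python
import random

def grain_value(frame, seed):
    """Seeded, repeatable per-frame noise (illustrative sketch).

    Mixing the frame number into the seed makes the value differ on
    every frame, while the same (frame, seed) pair always reproduces
    the same result.
    """
    rng = random.Random(hash((seed, frame)))  # deterministic for int inputs
    return rng.random()
```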
Animated
This control is visible only when applying the Grain or Noise filter types. Select the checkbox to cause
the noise or grain to change from frame to frame. To produce static noise, deselect this checkbox.
Common Controls
Settings Tab
The Settings tab controls are common to all Filter nodes, so their descriptions can be found in “The
Common Controls” section at the end of this chapter.
Inputs
The Rank Filter node includes two inputs: one for the main image and the other for an effect mask to
limit the area where the filter is applied.
– Input: The orange input is used for the primary 2D image that gets the Rank filter applied.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the rank filter to only those pixels within the mask. An effects mask is applied to the
tool after the tool is processed.
Inspector
Rank Filter controls
Controls Tab
The Controls tab is used to set the size and rank value of the filter.
Size
This control determines the size in pixels of the area sampled by the filter. A value of 1 samples 1 pixel
in each direction, adjacent to the center pixel. This produces a total of 9 pixels, including the center
sampled pixel. Larger values sample from a larger area.
Low Size settings are excellent for removing salt and pepper style noise, while larger Size settings
produce an effect similar to watercolor paintings.
Rank
The Rank slider determines which value from the sampled pixels is chosen. A value of 0 is the lowest
value (darkest pixel), and 1 is the highest value (brightest pixel).
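The rank selection can be sketched in a few lines of Python. This is an illustrative model (clamped borders, square neighborhood), not Fusion's implementation.

```python
def rank_filter(image, size, rank):
    """Illustrative rank filter sketch (not Fusion's implementation).

    Sorts the (2*size + 1)^2 neighborhood of each pixel and picks the
    value at the requested rank: 0.0 = darkest sample, 0.5 = median,
    1.0 = brightest sample.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            samples = sorted(
                image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-size, size + 1)
                for dx in range(-size, size + 1)
            )
            out[y][x] = samples[round(rank * (len(samples) - 1))]
    return out
```

A median (Rank of 0.5) removes isolated salt-and-pepper pixels because an outlier can never be the middle value of its sorted neighborhood.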
Example
Below is a before and after example of a Rank filter with Size set to 7 and a Rank of 0.7 to
create a watercolor effect.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Filter category. The Settings
controls are even found on third-party filter-type plug-in tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options, which are
also covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
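Conceptually, Blend is a per-pixel linear interpolation between the tool's input and its processed output. A minimal Python sketch of that idea (illustrative only):

```python
def blend_images(original, processed, blend):
    """Illustrative Blend sketch: per-pixel linear interpolation.

    blend = 0.0 returns the original input untouched; blend = 1.0
    returns the fully processed result.
    """
    return [
        [o * (1.0 - blend) + p * blend for o, p in zip(orow, prow)]
        for orow, prow in zip(original, processed)
    ]
```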
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels” in the Fusion Reference Manual or Chapter 79 in the DaVinci Resolve
Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Flow Nodes
This chapter details the Sticky Note and Underlay features available in Fusion.
The abbreviations next to each feature name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Sticky Note [NTE] 987
Sticky Note Introduction 987
Usage 987
Underlay [UND] 988
Underlay Introduction 988
Usage 988
Usage
To create a Sticky Note, click in an empty area of the Node Editor where you want a Sticky Note to
appear. Then, from the Effects Library, click the Sticky Note effect located in the Tools > Flow category
or press Shift-Spacebar and search for the Sticky Note in the Select Tool window.
Like Groups, Sticky Notes are created in a smaller, collapsed form. They can be expanded by double-
clicking on them. Once expanded, they can be resized using any side or corner of the note or moved
by dragging on the name header. To collapse the Sticky Note again, click the icon in the top-
left corner.
To rename, delete, copy, or change the color of the note, right-click over the note and choose from the
contextual menu. Using this menu, you can also lock the note to prevent editing.
To edit the text in a Sticky Note, first expand it by double-clicking anywhere on the note, and then click
below its title bar. If the note is not locked, you can edit the text.
The Underlay
Underlay Introduction
Underlays are a convenient method of visually organizing areas of a composition. As with Groups,
Underlays can improve the readability of a comp by separating it into labeled functional blocks. While
Groups are designed to streamline the look of a comp by collapsing complex layers down to single
nodes, Underlays highlight, rather than hide, and do not restrict outside connections.
Usage
As with Sticky Notes, an Underlay can be added to a comp by selecting it from the Flow category in
the Effects Library or searching for it in the Select Tool window. The Underlay is added to the Node
Editor with its title bar centered on the last-clicked position.
Underlays can be resized using any side or corner. This will not affect any nodes.
Underlays can also be used as simple selection groups. Clicking an Underlay’s title selects the
Underlay and all the tools contained wholly within it, allowing the entire set to be moved, duplicated,
passed through, and so on.
To rename an Underlay, first ensure that nodes contained within the Underlay are not selected. Then,
Option-click on the Underlay title to select the Underlay without selecting the nodes it contains. Once
selected, right-click over the title and choose Rename. Underlays can be assigned a color using the
same right-click contextual menu.
Flow Organizational
Nodes
This chapter details the Groups, Macro, and Pipe Router nodes, which are designed to
help organize your compositions, making the node tree easier to see and understand.
Contents
Groups 990
Groups Introduction 990
Usage 990
Macro 991
Macro Introduction 991
Usage 991
Macro Editor 991
The Final Macro 992
Pipe Router 992
Router Introduction 992
Usage 993
Router 993
Groups Introduction
Groups are used to keep complex node trees organized. You can select any number of nodes in the
node tree and then group them to create a single node icon in the Node Editor. Groups are non-
destructive and can be opened at any time.
Usage
– To group nodes, select them in the Node Editor, and then right-click over any of the selected
nodes and choose Group from the contextual menu.
– To edit the individual nodes in a group, right-click and choose Expand Group from the contextual
menu. All individual nodes contained in the group are displayed in a floating node tree window.
When opened, groups hover over existing elements, allowing editing of the enclosed nodes.
– To remove or decompose a group and retain the individual nodes, right-click the group and
choose Ungroup.
Macro Introduction
Macros can be used to combine multiple nodes and expose a user-definable set of controls.
They are meant as a fast and convenient way of building custom nodes.
Usage
To create a Macro, select the nodes intended for the macro. The order in which the nodes are
selected becomes the order in which they are displayed in the Macro Editor. Right-click on any of the
selected nodes and choose Macro > Create Macro from the contextual menu.
Macro Editor
The Macro Editor allows you to specify and rename the controls that are exposed in the final
macro tool.
In the example below, the tool is named Light_Wrap at the top. The Blur slider for Matte Control 1 is
enabled and renamed to Softness, as it will appear in the Inspector.
To add the macro to your node tree, right-click anywhere on the node tree and select Macro >
[NameOfYourMacro] from the contextual menu.
To make a macro available as a Title template in DaVinci Resolve’s Edit page, save it in the
Templates > Edit > Titles folder:
macOS:
Users > UserName > Library > Application Support > Blackmagic Design > DaVinci Resolve > Fusion >
Templates > Edit > Titles
Windows:
C Drive > Users > UserName > AppData > Roaming > Blackmagic Design > DaVinci Resolve > Support
> Fusion > Templates > Edit > Titles
Pipe Router
Pipe Routers are another type of organizational tool you use to improve the layout and appearance of
the node tree.
Router Introduction
Routers can be used to neatly organize your comps by creating “elbows” in your node tree, so the
connection lines do not overlap nodes, making them easier to understand. Routers do not have any
influence on render times.
Router
To insert a router along a connection line, Option- or Alt-click on the line. The router can then be
repositioned to arrange the connections as needed.
Although routers have no actual controls, they still can be used to add comments to a comp.
Fuses
This chapter introduces Fuses, which are scriptable plug-ins that can
be used within Fusion.
Contents
Fuses [FUS] 995
Fuses Introduction 995
Installing Fuses 995
Working with Fuses in a Composition 995
A Fuse node
Fuses Introduction
Fuses are plug-ins. The difference between a Fuse and an Open FX plug-in is that a Fuse is created
using a Lua script. Fuses can be edited within Fusion or DaVinci Resolve, and the changes you make
compile on-the-fly.
Using a Lua script makes it easy for even non-programmers to prototype and develop custom nodes.
A new Fuse can be added to a composition, edited and reloaded, all without having to close the
current composition. They can also be used as modifiers to manipulate parameters, curves, and text
very quickly. ViewShader Fuses can make use of the GPU for faster performance. This makes Fuses
much more convenient than an Open FX plug-in that uses Fusion’s OFX SDK. However, this flexibility
comes at a cost. Since a Fuse is compiled on-the-fly, it can be significantly slower than the identical
node created using the Open FX SDK.
As an example, Fuses could generate a mask from the over-exposed areas of an image, or create
initial particle positions based on the XYZ position stored within a text file.
Please contact Blackmagic Design for access to the SDK (Software Developer Kit) documentation.
Installing Fuses
Fuses are installed in the Fusion:\Fuses path map. By default, this folder is located at
Users/User_Name/Library/Application Support/Blackmagic Design/Fusion (or DaVinci Resolve)/Fuses
on macOS, or C:\Users\User_Name\AppData\Roaming\Blackmagic Design\Fusion (or DaVinci Resolve)\
Fuses on Windows. Files must use the extension .fuse, or they will be ignored by Fusion.
NOTE: Any changes made to a Fuse’s script do not immediately affect other copies of the
same Fuse node already added to a composition. To use the updated Fuse script on all
similar Fuses in the composition, either close and reopen the composition, or click on the
Reload button in each Fuse’s Inspector.
When a composition containing a Fuse node is opened, the currently saved version of the Fuse script
is used. The easiest way to ensure that a composition is running the current version of a Fuse is to
close and reopen the composition.
Generator Nodes
This chapter details the Generator nodes available in Fusion. The abbreviations next
to each node name can be used in the Select Tool dialog when searching for tools
and in scripting references.
Contents
Background [BG] 997
Day Sky [DS] 1001
Fast Noise [FN] 1003
Mandelbrot [MAN] 1006
Plasma [PLAS] 1009
Text+ [TXT+] 1011
Text+ Modifiers 1023
Character Level Styling 1023
Comp Name 1024
Follower 1024
Text Scramble 1026
Text Timer 1027
Time Code 1027
The Common Controls 1028
Inputs
There is one input on the Background node for an effect mask input.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the background color to only those pixels within the mask.
Inspector
Type
This control is used to select the style of background generated by the node. Four selections are
available:
– Solid Color: This default creates a single-color image.
– Horizontal: This creates a two-color horizontal gradation.
– Vertical: This creates a two-color vertical gradation.
– Four Corner: This creates a four-color corner gradation.
Horizontal/Vertical/Four Point
When the Type menu is set to Horizontal, Vertical, or Four Corner, two- or four-color swatches are
displayed where the left/right, top/bottom, or four corners of the gradient colors can be set.
Gradient
When the Type menu is set to Gradient, additional controls are displayed where the gradient colors’
direction can be customized.
Gradient Type
This menu selects the form used to draw the gradient. There are six choices:
– Linear: Draws the gradient along a straight line from the starting color stop to the
ending color stop.
– Reflect: Draws the gradient by mirroring the linear gradient on either side of the starting point.
– Square: Draws the gradient by using a square pattern when the starting point is at the
center of the image.
– Cross: Draws the gradient using a cross pattern when the starting point is at the center
of the image.
– Radial: Draws the gradient in a circular pattern when the starting point is at the center
of the image.
– Angle: Draws the gradient in a counterclockwise sweep when the starting point is at the
center of the image.
Gradient Colors
This gradient color bar is used to select the colors for the gradient. The default two color stops set the
start and end colors. You can change the colors used in the gradient by selecting the color stop, and
then using the Eyedropper or color swatch to set a new color.
You can add, move, copy, and delete color stops from the gradient using the gradient bar.
Interpolation Space
This menu determines what color space is used to calculate the colors between color stops.
Offset
The Offset control is used to offset the position of the gradient relative to the start and end markers.
This control is most useful when used in conjunction with the repeat and ping-pong modes
described below.
Repeat
This menu includes three options used to set the behavior of the gradient when the Offset control
scrolls the gradient past its start and end positions. Selecting Once holds the start and end colors
when the gradient is offset beyond its boundaries. Selecting Repeat loops around to the start color
when the offset goes beyond the end color. Selecting Ping-pong repeats the color pattern in reverse.
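The three repeat behaviors amount to different ways of remapping a gradient position that falls outside the 0–1 range. A Python sketch of the idea (illustrative, not Fusion’s code):

```python
def remap_offset(t, mode):
    """Remap an offset gradient position t that may fall outside 0..1."""
    if mode == "once":
        return min(max(t, 0.0), 1.0)   # hold the start/end color
    if mode == "repeat":
        return t % 1.0                  # wrap back around to the start
    if mode == "pingpong":
        t = t % 2.0                     # period of 2: forward, then back
        return 2.0 - t if t > 1.0 else t
    raise ValueError(mode)
```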
Sub-Pixel
The Sub-Pixel menu controls the sub-pixel precision used when the edges of the gradient become
visible in repeat mode, or when the gradient is animated. Higher settings will take significantly longer
to render but are more precise.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
There is a single input on the Day Sky node for an effect mask to limit the area where the day sky
simulation is applied.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the Day Sky to only those pixels within the mask.
Controls Tab
The Controls tab is used to set the location and time of the daylight simulation. This will determine the
overall look that is generated.
Location
The Latitude and Longitude sliders are used to specify the location used to create the Day Sky
simulation.
Turbidity
Turbidity causes light to be scattered and absorbed instead of transmitted in straight lines through the
simulation. Increasing the turbidity will give the sky simulation a murky feeling, as if smoke or
atmospheric haze were present.
Do Tone Mapping
Since the simulation is calculated in 32-bit floating-point color space, it generates color values well
above 1.0 and well below 0.0. Tone mapping is a process that takes the full dynamic range of the
resulting simulation and compresses the data into the desired exposure range while attempting to
preserve as much detail from the highlights and shadows as possible. Deselect this checkbox to
disable any tone mapping applied to the simulation.
Generally, this option should be deselected only if the resulting image will later be color corrected as
part of a floating-point color pipeline.
Exposure
Use this control to select the exposure used for tone mapping.
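The idea of exposure-controlled tone mapping can be sketched with a simple Reinhard-style rolloff curve. The curve below is an assumption chosen for illustration; it is not Fusion’s actual tone-mapping function.

```python
def tone_map(value, exposure=1.0):
    """Compress a linear float value into 0..1 (illustrative sketch).

    The value is scaled by the exposure, then rolled off so results
    above 1.0 are compressed toward 1.0 instead of clipping, preserving
    some highlight detail.
    """
    v = max(value, 0.0) * exposure
    return v / (1.0 + v)
```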
Advanced Tab
The Advanced tab provides more specific controls over the brightness and width of the different
ranges in the generated sky.
Horizon Brightness
Use this control to adjust the brightness of the horizon relative to the sky.
Luminance Gradient
Use this control to adjust the width of the gradient separating the horizon from the sky.
Backscattered Light
Use this control to increase or decrease the backscatter light in the simulation.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two map inputs on the Fast Noise node allow you to use masks to control the value of the noise
detail and brightness controls for each pixel. These two optional inputs can allow some interesting and
creative effects. There is also a standard effect mask input for limiting the Fast Noise size.
– Noise Detail Map: A soft-edged mask connected to the gray Noise Detail Map input will give
a flat noise map (zero detail) where the mask is black, and full detail where it is white, with
intermediate values smoothly reducing in detail. It is applied before any gradient color mapping.
This can be very helpful for applying maximum noise detail in a specific area, while smoothly
falling off elsewhere.
– Noise Brightness Map: A mask connected to this white input can be used to control the noise
map completely, such as boosting it in certain areas, combining it with other textures, or if Detail
is set to 0, replacing the Perlin Noise map altogether.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the Fast Noise to only those pixels within the mask.
Inspector
Discontinuous
Normally, the noise function interpolates between values to create a smooth continuous gradient of
results. Enable this checkbox to create hard discontinuity lines along some of the noise contours. The
result will be a dramatically different effect.
Inverted
Select this checkbox to invert the noise, creating a negative image of the original pattern. This is most
effective when Discontinuous is also enabled.
Center
Use the Center coordinate control to pan and move the noise pattern.
Detail
Increase the value of this slider to produce a greater level of detail in the noise result. Larger values
add more layers of increasingly detailed noise without affecting the overall pattern. High values take
longer to render but can produce a more natural result.
Brightness
This control adjusts the overall brightness of the noise map, before any gradient color mapping is
applied. In Gradient mode, this has a similar effect to the Offset control.
Contrast
This control increases or decreases the overall contrast of the noise map, prior to any gradient color
mapping. It can exaggerate the effect of the noise and widen the range of colors applied in
Gradient mode.
Angle
Use the Angle control to rotate the noise pattern.
Seethe
Adjust this thumbwheel control to interpolate the noise map against a different noise map.
This will cause a crawling shift in the noise, as if it was drifting or flowing. This control must be
animated to affect the gradient over time, or you can use the Seethe Rate control below.
Seethe Rate
As with the Seethe control above, the Seethe Rate also causes the noise map to evolve and change.
The Seethe Rate defines the rate at which the noise changes each frame, causing an animated drift in
the noise automatically, without the need for spline animation.
Color Tab
The Color tab allows you to adjust the gradient colors used in the generated noise pattern.
Two Color
A simple two-color gradient is used to color the noise map. The noise function will smoothly transition
from the first color into the second.
Gradient
The Advanced Gradient control in Fusion is used to provide more control over the color gradient used
with the noise map.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Mandelbrot [MAN]
Inputs
The one input on the Mandelbrot node is for an effect mask to limit the area where the fractal noise
is applied.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the fractals to only those pixels within the mask.
Inspector
Noise Tab
The Noise tab controls the shape and pattern of the noise for the Mandelbrot node.
Position X and Y
This chooses the image’s horizontal and vertical position or seed point.
Zoom
Zoom magnifies or shrinks the pattern. Every magnification is recalculated, so there is no practical
limit to the zoom.
Escape Limit
Defines a point where the calculation of the iteration is aborted. Low values lead to blurry halos.
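The Escape Limit governs the classic escape-time iteration behind the Mandelbrot set. A minimal Python sketch of that calculation (illustrative; Fusion’s renderer adds coloring, zoom, and rotation on top of this):

```python
def mandelbrot_iterations(x, y, escape_limit=100):
    """Escape-time iteration for one point (illustrative sketch).

    Iterates z = z*z + c for c = x + yi and returns how many steps it
    takes |z| to exceed 2, up to escape_limit. Low limits abort the
    calculation early, which blurs the halos around the set.
    """
    c = complex(x, y)
    z = 0j
    for n in range(escape_limit):
        z = z * z + c
        if abs(z) > 2.0:
            return n              # escaped: the point is outside the set
    return escape_limit           # never escaped: treated as inside
```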
Rotation
This rotates the pattern. Every new angle requires recalculation of the image.
Color Tab
The Color tab allows you to adjust the gradient and repetition of the gradient colors for the
generated pattern.
Grad Method
Use this control to determine the type of gradation applied at the borders of the pattern.
Continuous Potential
This causes the edges of the pattern to blend to the background color.
Iterations
This causes the edges of the pattern to be solid.
Gradient Curve
This affects the width of the gradation from the pattern to the background color.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in other generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The one input on the Plasma node is for an effect mask to limit the area where the plasma pattern
is applied.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the plasma to only those pixels within the mask.
Inspector
Scale
The Scale control is used to adjust the size of the pattern created.
Operation
The options in this menu determine the mathematical relationship among the four circles whenever
they intersect.
Circle Type
Select the type of circle to be used.
Circle Center
Report and change the position of the circle center.
Circle Scale
Determines the size of the circle to be used for the pattern.
Color Tab
The Color tab allows you to adjust the colors and location within the pattern of the colors for the
generated plasma.
Phase
Phase changes the color phase of the entire image. When animated, this creates psychedelic
color cycles.
R/G/B/A Phases
Changes the phase of the individual color channels and the Alpha. When animated, this creates color
cycling effects.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The one input on the Text+ node is for an effect mask to crop the text.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the text to only those pixels within the mask.
Inspector
Text Tab
The Text tab in the Inspector is divided into three sections: Text, Advanced Controls, and Tab Spacing.
The Text section includes parameters that will be familiar to anyone who has used a word processor.
It includes commonly used text formatting options. The Advanced controls are used for kerning
options, and the Tab Spacing is used to define the location and alignment of tabs in the layout.
Styled Text
The edit box in this tab is where the text to be created is entered. Any common character can be
typed into this box. The common OS clipboard shortcuts (Command-C or Ctrl-C to copy, Command-X
or Ctrl-X to cut, Command-V or Ctrl-V to paste) will also work; however, right-clicking in the edit box
displays a custom contextual menu. More information on these modifiers can be found at the end of
this section.
The Styled Text contextual menu includes the following options:
– Animate: Used to animate the text over time.
– Character Level Styling: Used to change the font, color, size and
transformations of individual characters or words through the Modifiers tab.
– Comp Name: Places the name of the composition in the Styled text box for creating slates.
– Follower: A text modifier used to ripple animation across each character of the text.
– Publish: Publishes the text for connection to other text nodes.
– Text Scramble: A text modifier used to randomize the characters in the text.
– Text Timer: A text modifier used to display a countdown or the current date and time.
– Time Code: A text modifier used to display Time Code for the current frame.
– Connect To: Used to connect the text to the published output of another node.
Font
Two Font menus are used to select the font family and typeface, such as Regular, Bold, and Italic.
Size
This control is used to increase or decrease the size of the text. This is not like selecting a point size in
a word processor. The size is relative to the width of the image.
Tracking
The Tracking parameter adjusts the uniform spacing between each character of text.
Line Spacing
Line Spacing adjusts the distance between each line of text. This is sometimes called leading in
word-processing applications.
V Anchor
The vertical anchor controls consist of three buttons and a slider. The three buttons are used to align
the text vertically to the top of the text, middle of the text, or bottom baseline. The slider can be used
to customize the alignment. Setting the vertical anchor will affect how the text is rotated as well as the
location for line spacing adjustments. This control is most often used when the Layout type is set to
Frame in the Layout tab.
V Justify
The vertical justify slider allows you to customize the vertical alignment of the text from the V Anchor
setting to full justification so it is aligned evenly along the top and bottom edges. This control is most
often used when the Layout type is set to Frame in the Layout tab.
H Anchor
The horizontal anchor controls consist of three buttons and a slider. The three buttons justify the text
alignment to the left edge, middle, or right edge of the text. The slider can be used to customize the
justification. Setting the horizontal anchor will affect how the text is rotated as well as the location for
tracking (kerning) spacing adjustments. This control is most often used when the Layout type is set to
Frame in the Layout tab.
H Justify
The horizontal justify slider allows you to customize the justification of the text from the H Anchor
setting to full justification so it is aligned evenly along the left and right edges. This control is most
often used when the Layout type is set to Frame in the Layout tab.
Direction
This menu provides options for determining the Direction in which the text is to be written.
Line Direction
These menu options are used to determine the text flow from top to bottom, bottom to top, left to right,
or right to left.
Write On
This range control is used to quickly apply simple Write On and Write Off effects to the text. To create
a Write On effect, animate the End portion of the control from 0 to 1 over the length of time required.
To create a Write Off effect, animate the Start portion of the range control from 0 to 1.
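As an illustration of how the Start/End range maps to visible characters (a simplified model, not Fusion's actual per-character rendering):

```python
def visible_slice(text, start=0.0, end=1.0):
    """Show the characters whose normalized position falls between
    Start and End. Animating end from 0 to 1 writes the text on;
    animating start from 0 to 1 writes it off."""
    n = len(text)
    return text[int(round(start * n)):int(round(end * n))]

visible_slice("FUSION", end=0.5)    # "FUS" — halfway through a Write On
visible_slice("FUSION", start=0.5)  # "ION" — halfway through a Write Off
```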
Force Monospaced
This slider control can be used to override the kerning (spacing between characters) defined in the
font. Setting this slider to zero (the default value) will cause Fusion to rely entirely on the kerning
defined with each character. A value of one will cause the spacing between characters to be
completely even, or monospaced.
Tab Spacing
Tab Spacing
The controls in the Tabs section are used to configure the horizontal screen positions of eight separate
tab stops. Any tab characters in the text will conform to these positions.
You can add tabs directly in the Styled Text input as you type. You can also add tabs by copying them
from another document, such as TextEdit on macOS or Notepad on Windows, and pasting the text into
the text box.
Position
This control is used to set the horizontal position of the tab in the frame. The values range from -0.5 to
0.5, where 0 is the center. The position of the tab will be indicated in the viewer by a thin vertical white
line when the Text node is selected. At the top of each tab line in the viewer is a handle. The handle
can be used to position the tab manually.
Alignment
Each tab can be left aligned, right aligned, or centered. This slider ranges from -1.0 to 1.0,
where -1.0 is a left-aligned tab, 0.0 is a centered tab, and 1.0 is a right-aligned tab. Clicking the tab
handles in the viewer will toggle the alignment of the tab among the three states.
Layout Tab
The controls used to position the text are located in the Layout tab. One of four layout types can be
selected using the Type drop-down menu.
– Point: Point layout is the simplest of the layout modes. Text is arranged around an adjustable
center point.
– Frame: Frame layout allows you to define a rectangular frame used to align the text. The alignment
controls are used for justifying the text vertically and horizontally within the boundaries of the
frame.
– Circle: Circle layout places the text around the curve of a circle or oval. Control is offered over
the diameter and width of the circular shape. When the layout is set to this mode, the Alignment
controls determine whether the text is positioned along the inside or outside of the circle’s edge,
and how multiple lines of text are justified.
– Path: Path layout allows you to shape your text along the edges of a path. The path can be
used simply to add style to the text, or it can be animated using the Position on Path control that
appears when this mode is selected.
Center X, Y, and Z
These controls are used to position the center of the layout element in space. X and Y are onscreen
controls, and Center Z is a slider in the node controls.
Size
This slider is used to control the scale of the layout element.
Rotation
Rotation consists of a series of buttons allowing you to select the order in which 3D rotations are
applied to the text. Angle dials can be used to adjust the angle of the Layout element along any axis.
Fit Characters
This menu control is visible only when the Layout type is set to Circle. This menu is used to select how
the characters are spaced to fit along the circumference.
Position on Path
The Position on Path control is used to control the position of the text along the path. Values less than
0 or greater than 1 will cause the text to move beyond the path in the same direction as the vector of
the path between the last two keyframes.
Background Color
The text generated by this node is normally rendered with a transparent background. This Color Picker
control can be used to set a background color.
Transform
The Transform menu is used to determine the portion of the text affected by the transformations
applied in this tab. Transformations can be applied to line, word, and character levels simultaneously.
This menu is only used to keep the visible controls to a reasonable number.
– Characters: Each character of text is transformed along its own center axis.
– Words: Each word is transformed separately on the word’s center axis.
– Lines: Each line of the text is transformed separately on that line’s center axis.
Spacing
The Spacing slider is used to adjust the space between each line, word, or character. Values less than
1 will usually cause the characters to begin overlapping.
Pivot X, Y, and Z
This provides control over the exact position of the axis. By default, the axis is positioned at the
calculated center of the line, word, or character. The Axis control works as an offset, such that a value
of 0.1, 0.1 in this control would cause the axis to be shifted downward and to the right for each of the
text elements. Positive values in the Z-axis slider will move the axis of rotation further along the axis
(away from the viewer). Negative values will bring the axis of rotation closer.
Rotation
These buttons are used to determine the order in which transforms are applied. X, Y, and Z would
mean that the rotation is applied to X, then Y, and then Z.
X, Y, and Z
These controls can be used to adjust the angle of the text elements in any of the three dimensions.
Shear X and Y
Adjust these sliders to modify the slanting of the text elements along the X- and Y-axis.
Size X and Y
Adjust these sliders to modify the size of the text elements along the X- and Y-axis.
Shading Element
The eight number values in the menu are used to select the element affected by adjustments
in this tab.
Enabled
Select this checkbox to enable or disable each layer of shading elements. Element 1, which is the fill
color, is enabled by default. The controls for a shading element will not be displayed unless this
checkbox is selected.
Sort By
This menu allows you to sort the shading elements by number priority, with 1 being the topmost
element and 8 being the bottommost element, or Z depth, based on the Z Position parameter.
Name
This text label can be used to assign a more descriptive name to each shading element you create.
Appearance
The four Appearance buttons determine how the shading element is applied to the text. Different
controls will appear below depending on the appearance type selected.
– Text Fill: The shading element is applied to the entire text. This is the default mode.
– Text Outline: The shading element is drawn as an outline around the edges of the text.
– Border Fill: The shading element fills a border surrounding the text. Five additional controls are
provided with this shading mode.
– Border Outline: The Border Outline mode draws an outline around the border that surrounds the
text. It offers several additional controls.
Opacity
The Opacity slider controls the overall transparency of the shading element. It is usually better to
assign opacity to a shading element than to adjust the Alpha of the color applied to that element.
Blending
This menu is used to select how the renderer deals with an overlap between two characters in
the text.
– Composite: Merges the shading over the top of itself.
– Solid: Sets the pixels in the overlap region to opaque.
– Transparent: Sets the pixels in the overlap region to transparent.
Thickness
(Outline only) Thickness adjusts the thickness of the outline. Higher values equal thicker outlines.
Join Style
(Outline only) These buttons provide options for how the corners of the outline are drawn. Options
include Sharp, Rounded, and Beveled.
Line Style
(Outline only) This menu offers additional options for the style of the line. Besides the default solid line,
a variety of dash and dot patterns are available.
Level
(Border Fill only) This is used to control the portion of the text border filled.
– Text: This draws a border around the entire text.
– Line: This draws a border around each line of text.
– Word: This draws a border around each word.
– Character: This draws a border around each character.
Round
(Border Fill and Border Outline only) This slider is used to round off the edges of the border.
Color Types
Besides solid shading, it is also possible to use a gradient fill or map an external image onto the text.
This menu is used to determine if the color of the shading element is derived from a user-selected
color or gradient, or if it comes from an external image source. Different controls will be displayed
below depending on the Color Type selected.
– Solid: When the Type menu is set to Solid mode, color selector controls are provided to select the
color of the text.
– Image: The output of a node in the node tree will be used to texture the text. The node used is
chosen using the Color Image control that is revealed when this option is selected.
– Gradient: When the Type menu is set to Gradient, additional controls are displayed where the
gradient colors and direction can be customized.
Image Source
(Image Mode only) The Image Source menu includes three options for acquiring the image used to
fill the text.
– Tool: Displays a Color image text field where you can add a tool from the node tree as
the fill for text.
– Clip: Provides a Browse button to select a media file from your hard drive as the fill for text.
– Brush: Displays a Color Brush menu where you can select one of Fusion’s paint brush bitmaps as
the fill for text.
Image Sampling
(Image Mode only) This menu is used to select the sampling type for shading rendering and
transformations. The default of Pixel shading is sufficient for 90% of tasks. To reduce detectable
aliasing in the text, set the sampling type to Area. This is slower but may produce better-quality results.
A setting of None will render faster, but with no additional sampling applied so the quality will be lower.
Image Edges
(Image Mode only) This menu is used to choose how transformations applied to image shading
elements are handled when they wrap off the text’s edges.
Shading Mapping
(Image Mode only) This menu is used to select whether the entire image is stretched to fill the text or
scaled to fit, maintaining the aspect ratio but cropping part of the image as needed.
Mapping Angle
(Image and Gradient Modes only) This control rotates the image or gradient on the Z-axis.
Mapping Size
(Image and Gradient Modes only) This control scales the image or gradient.
Mapping Level
(Image and Gradient Modes only) The Mapping Level menu is used to select how the image is mapped
to the text.
– Full Image: Applies the entire image to the text.
– Text: Applies the image to fit the entire set of text.
– Line: Applies the image per line of text.
– Word: Applies the image per each word of text.
– Character: Applies the image per individual character.
Softness X and Y
These sliders control the softness of the text outline used to create the shading element. Control is
provided for the X- and Y-axis independently.
Softness Glow
This slider will apply a glow to the softened portion of the shading element.
Softness Blend
This slider controls the amount that the result of the softness control is blended back with the original.
It can be used to tone down the result of the soften operation.
Priority Back/Front
Only enabled when the Sort By menu is set to Priority, this slider overrides the priority setting and
determines the layer’s order for the shading elements. Slide the control to the right to bring an element
closer to the front. Move it to the left to tuck one shading element behind another.
Offset X, Y, and Z
These controls are used to apply an offset from the text’s global center (as set in the Layout tab) for the
shading elements. A value of X: 0.0, Y: 0.1 in the coordinate controls would place the shading element
centered horizontally and 10 percent of the image further down the screen along the Y-axis. Positive
values in the Z-Offset slider control will push the center further away from the camera, while negative
values will bring it closer to the camera.
Pivot X, Y, and Z
These controls are used to set the exact position of the axis for the currently selected shading
element. By default, the axis is positioned at the calculated center of the line, word, or character. The
axis control works as an offset, such that a value of 0.1, 0.1 in this control would cause the axis to be
shifted downward and to the right for the shading element. Positive values in the Z-axis slider will move
the axis of rotation further along the axis (away from the viewer). Negative values will bring the axis of
rotation closer.
Rotation X, Y, and Z
These controls are used to adjust the angle of the currently selected shading element in any of the
three dimensions.
Size X and Y
Adjust these sliders to modify the size of the currently selected shading element along the
X and Y axis.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Text+ Toolbar
When the Text node is selected, a toolbar will appear in the viewer. Each button is described below
from left to right.
Text+ toolbar
No Text Outline
When this button is selected, it disables the drawing of any outline around the edges of the text. The
outline is not a part of the text; it is an onscreen control used to help identify the position of the text.
This is a three-way toggle with the Text Outline Outside Frame Only and Show Always Text
Outline buttons.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Text+ Modifiers
Text+ modifiers
Text modifiers can be assigned by right-clicking in the Styled Text box and selecting a modifier from
the contextual menu. Once a modifier is selected, its controls are found in the Modifiers tab at the top
of the Inspector.
NOTE: Character Level Styling can only be directly applied to Text+ nodes, not to Text 3D
nodes. However, styled text from a Text+ node can be applied to a Text 3D node by copying
the Text+, right-clicking on the Text 3D, and choosing Paste Settings.
Text Tab
The Styled Text box in the Modifiers tab displays the same text in the Tools tab of the Text+ Inspector.
However, individual characters you want to modify cannot be selected in the Styled Text box; they
must be selected in the viewer. Once text is selected in the viewer, the Text tab includes familiar text
formatting options that will apply only to the selected characters.
Comp Name
The Comp Name sets the styled text to become the current Composition Name. This is quite useful to
automate burn-ins for daily renderings. See also the TimeCode modifier. It can be applied by right-
clicking in the Styled Text field of a Text+ node and selecting Comp Name.
Controls
This modifier has no controls.
Follower
The Follower modifier allows sequencing text animations. The modifier is applied by right-clicking in
the Styled Text field of a Text+ node and selecting Follower. In the Modifiers tab, you start by
animating the parameters of the text (note that changes to any parameter in the Modifiers tab will not
be visible until a keyframe is added). Then, in the Timing tab, you set the animation’s delay
between characters.
Timing Tab
Once the text is animated using the controls in the Modifiers tab, the Timing tab is used to choose the
direction and offset of the animation.
Range
The Range menu is used to select whether all characters should be influenced or only a selected
range. To set the range, you can drag-select over the characters directly in the viewer.
Order
The Order menu determines in which direction the characters are influenced. Notice that empty
spaces are counted as characters as well. Available options include:
– Left to right: The animation ripples from left to right through all characters.
– Right to left: The animation ripples from right to left through all characters.
– Inside out: The animation ripples symmetrically from the center point of the
characters toward the margin.
– Outside in: The animation ripples symmetrically from the margin toward the center point of the
characters.
– Random but one by one: The animation is applied to randomly selected characters but only
influences one character at a time.
– Completely random: The animation is applied to randomly selected characters, influencing
multiple characters at a time.
– Manual curve: The affected characters can be specified by sliders.
Delay Type
Determines what sort of delay is applied to the animation. Available options include:
– Between Each Character: The more characters there are in your text, the longer the animation
takes to complete. A setting of 1 means the first character starts the animation, the second
character starts 1 frame later, the third character starts 1 frame after the second, and so on.
– Between First and Last Character: No matter how many characters are in your text, the animation
will always be completed in the selected amount of time.
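The two delay types can be modeled numerically; this hypothetical sketch (not Fusion's internal code) computes per-character animation start frames under each mode:

```python
def follower_starts(num_chars, delay, mode="between_each"):
    """Hypothetical model of the two Delay Type behaviors."""
    if mode == "between_each":
        # Fixed delay between successive characters:
        # longer text means a longer overall animation.
        return [i * delay for i in range(num_chars)]
    # "first_to_last": the whole ripple always spans `delay` frames,
    # regardless of how many characters there are.
    step = delay / max(num_chars - 1, 1)
    return [round(i * step) for i in range(num_chars)]

follower_starts(4, 1)                          # [0, 1, 2, 3]
follower_starts(5, 8, mode="first_to_last")    # [0, 2, 4, 6, 8]
```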
Inspector
Controls Tab
The Controls tab in the Text Scramble modifier is used to enter text and scramble it using the
Randomness control. The scrambled characters are taken from the Substitute Chars field at the bottom
of the Inspector.
Randomness
Defines how many characters are exchanged randomly. A value of 0 will change no characters at all.
A value of 1 will change all characters in the text. Animating this thumbwheel to go from 0 to 1 will
gradually exchange all characters.
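Outside Fusion, the Randomness behavior can be imitated like this (an illustrative sketch; the substitute characters and seed handling are assumptions, not taken from Fusion):

```python
import random

def text_scramble(text, randomness, subs="abcdefghij", seed=42):
    """Replace a `randomness` fraction of characters with random
    substitutes, roughly mimicking the Text Scramble modifier."""
    rng = random.Random(seed)
    count = int(round(randomness * len(text)))
    positions = rng.sample(range(len(text)), count)
    chars = list(text)
    for i in positions:
        chars[i] = rng.choice(subs)
    return "".join(chars)

text_scramble("hello", 0.0)  # "hello" — no characters exchanged
```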
Input Text
This reflects the original text in the Text+ Styled Text field. Text can be entered either here or in the
Text+ node.
Animate on Time
When enabled, the characters are scrambled randomly on every new frame. This switch has no effect
when Randomness is set to 0.
Animate on Randomness
When enabled, the characters are scrambled randomly on every new frame, when the Randomness
thumbwheel is animated.
This switch has no effect when Randomness is set to 0.
Substitute Chars
This field contains the characters used to scramble the text.
Inspector
Controls Tab
The Controls tab for the Text Timer modifier is used to set up the type of time display that is generated
by this modifier.
Mode
This menu sets the mode the timer is working in. The choices are CountDown, Timer, and Clock. In
Clock mode, the current system time will be displayed.
Start
Starts the Counter or Timer. Toggles to Stop once the timer is running.
Reset
Resets the Counter and Timer to the values set by the sliders.
Time Code
The Time Code only works on Text+ nodes. It sets the Styled text to become a counter based on the
current frame. This is quite useful for automating burn-ins for daily renderings.
It can be applied by right-clicking in the Styled Text field of a Text+ node and selecting Time Code.
Inspector
Start Offset
Introduce a positive or negative offset to Fusion’s current time to match up with existing time codes.
Drop Frame
Activate this checkbox to match the time code with footage that has drop frames—for example, certain
NTSC formats.
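For reference, the standard NTSC drop-frame renumbering (two frame labels skipped at the start of each minute, except every tenth minute) can be sketched in Python; this is a generic illustration of drop-frame counting, not code taken from Fusion:

```python
def frames_to_dropframe(frame, fps=30, drop=2):
    """Convert a frame count to 29.97 fps NTSC drop-frame timecode."""
    per_min = 60 * fps - drop          # 1798 labeled frames per minute
    per_10min = 600 * fps - 9 * drop   # 17982 per ten-minute block
    tens, rem = divmod(frame, per_10min)
    # The first minute of each ten-minute block keeps all its labels.
    extra = 0 if rem < 60 * fps else drop * (1 + (rem - 60 * fps) // per_min)
    frame += 9 * drop * tens + extra
    ff = frame % fps
    ss = (frame // fps) % 60
    mm = (frame // (fps * 60)) % 60
    hh = frame // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

frames_to_dropframe(1800)  # "00:01:00;02" — frames ;00 and ;01 are skipped
```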
Inspector
Image Tab
The controls in this tab are used to set the resolution, color depth, and pixel aspect of the image
produced by the node.
Process Mode
Use this menu control to select the Fields Processing mode used by Fusion to render changes to the
image. The default option is determined by the Has Fields checkbox control in the Frame
Format preferences.
Width/Height
This pair of controls is used to set the Width and Height dimensions of the image to be created
by the node.
Pixel Aspect
This control is used to specify the Pixel Aspect ratio of the created images. An aspect ratio of 1:1 would
generate a square pixel with the same dimensions on either side (like a computer display monitor), and
an aspect of 0.9:1 would create a slightly rectangular pixel (like an NTSC monitor).
NOTE: Right-click on the Width, Height, or Pixel Aspect controls to display a menu listing the
file formats defined in the preferences Frame Format tab. Selecting any of the listed options
will set the width, height, and pixel aspect to the values for that format, accordingly.
Depth
The Depth drop-down menu is used to set the pixel color depth of the image created by the Creator
node. 32-bit pixels require 4X the memory of 8-bit pixels but have far greater color accuracy. Float
pixels allow high dynamic range values outside the normal 0…1 range, for representing colors that are
brighter than white or darker than black.
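As a rough illustration of that memory cost (a generic calculation, not Fusion code), the uncompressed size of a frame scales directly with bits per channel:

```python
def frame_bytes(width, height, channels=4, bits_per_channel=8):
    # Uncompressed frame size: one value per channel per pixel.
    return width * height * channels * (bits_per_channel // 8)

frame_bytes(1920, 1080)                        # 8,294,400 bytes for 8-bit RGBA
frame_bytes(1920, 1080, bits_per_channel=32)   # 33,177,600 bytes — 4x for float
```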
Settings Tab
The Settings Tab in the Inspector can be found on every tool in the Color category. The Settings
controls are even found on third-party Color-type plug-in tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options, which are
also covered here.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not in the mask (i.e., set to 0) to become black/
transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18
“Understanding Image Channels” in the Fusion Reference Manual or Chapter 79 in the
DaVinci Resolve Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware, and Auto uses a
capable GPU if one is available and falls back to software rendering when a capable GPU is
not available.
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower-left corner of the node when the full tile is
displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
I/O Nodes
This chapter details the input and output of media using Loader and Saver nodes
within Fusion Studio as well as the MediaIn and MediaOut nodes in DaVinci Resolve.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Loader Node [LD] 1034
MediaIn Node [MI] 1042
MediaOut Node [MO] 1046
Saver Node [SV] 1047
The Common Controls 1054
NOTE: The Loader node in DaVinci Resolve is only used for importing EXR files.
When using Fusion Studio, the Loader node is the node you use to select and load footage from your
hard drive into the Node Editor. There are three ways to add a Loader node, and consequently a clip,
to the Node Editor.
– Add the Loader from the Effects Library or toolbar (Fusion Studio only), and then use the Loader’s
file browser to bring a clip into the Node Editor.
– Drag clips from an OS window directly into the Node Editor, creating a Loader node in the Node
Editor.
– Choose File > Import > Footage (Fusion Studio only), although this method creates a new
composition as well as adds the Loader node to the Node Editor.
When a Loader is added to the Node editor, a File dialog is displayed automatically to allow the
selection of a clip from your hard drives.
NOTE: You can disable the automatic display of the file browser by disabling Auto Clip
Browse in the Global > General Preferences.
Once clips are brought in using the Loader node, the Loader is used for trimming, looping, and
extending the footage, as well as setting the field order, pixel aspect, and color depth. The Loader is
arguably the most important tool in Fusion Studio.
Inputs
The single input on the Loader node is for an effect mask to crop the image brought in by the Loader.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the loaded
image to appear only within the mask. An effects mask is applied to the tool after the tool is
processed.
Inspector
File Tab
The File tab for the Loader includes controls for trimming, creating a freeze frame, looping, and
reversing the clip. You can also reselect the clip that the Loader links to on your hard drive.
Filename
The Filename field shows the file path of the clip imported to the Node Editor by the Loader node.
Clicking on the Browse button opens a standard file browser. The path to the footage can also be
typed directly using the field provided. The text box supports filename completion. As the name of a
directory or file is typed in the text box, Fusion displays a pop-up that lists possible matches. Use the
arrow keys to select the correct match and complete the path.
The following would not be considered a sequence since the last characters are not numeric.
shot.1.fg.jpg, shot.2.fg.jpg, shot.3.fg.jpg
It is not necessary to select the first file in the sequence. Fusion searches the entire folder for
files matching the sequence in the selected filename. Also, Fusion determines the length of
the sequence based on the first and last numeric value in the filenames. Missing frames are
ignored. For example, if the folder contains two files with the following names:
image.0001.exr, image.0100.exr
Fusion sees this as a file sequence with 100 frames, not an image sequence containing two
frames. The Missing Frames drop-down menu is used to choose how Fusion handles
missing frames.
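That rule — the reported length spans from the first to the last frame number, with missing frames counted — can be sketched in Python (an illustration only; the trailing-number filename pattern is an assumption):

```python
import re

def sequence_length(filenames):
    """Length Fusion would report for a numbered sequence: it spans
    the first to the last frame number; missing frames count too."""
    nums = [int(re.search(r"(\d+)\.\w+$", f).group(1)) for f in filenames]
    return max(nums) - min(nums) + 1

sequence_length(["image.0001.exr", "image.0100.exr"])  # 100, not 2
```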
The Trim In/Trim Out control’s context menu can also be used to force a specific clip length or
to rescan the folder. Both controls are described in greater detail below.
Occasionally, you want to load only a single frame out of a sequence—e.g., a photograph from
a folder containing many other files as well. By default, Fusion detects those as a sequence,
but if you hold Shift while dragging the file from the OS window to the Node Editor, Fusion
takes only that specific file and disregards any sequencing.
Proxy Filename
The Proxy Filename control only appears once the filename control points to a valid clip. The Proxy
Filename can specify a clip that is loaded when the Proxy mode is enabled. This allows smaller
versions of the image to be loaded to speed up file I/O from disk, and processing. For example, create
a 1/4-scale version of an 8K EXR sequence to use as EXR proxy files. Whenever the Proxy mode of the
Composition is enabled, the smaller resolution proxy clip is loaded from disk, and all processing is
performed at the lower resolution, significantly improving render times. This is particularly useful when
working with large RAW plates stored on a remote file server. Lower-resolution versions of the plates
can be stored locally, reducing network bandwidth, interactive render times, and memory usage.
The
proxy clip must have the same number of frames as the source clip, and the sequence numbers for the
clip must start and end on the same frame numbers. It is strongly suggested that the proxies are the
same format as the main files. In the case of formats with options, such as Cineon, DPX, and OpenEXR,
the proxies use the same format options as the primary.
Trim
The Trim range control is used to trim frames from the start or end of a clip. Adjust the Trim In to
remove frames from the start and adjust Trim Out to specify the last frame of the clip. The values used
here are offsets. A value of 5 in Trim In would use the fifth frame in the sequence as the start, ignoring
the first four frames. A value of 95 would stop loading frames after the 95th.
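The offset arithmetic can be illustrated with a short sketch; `trimmed_range` is a made-up helper, and treating Trim In as 1-based is an assumption drawn from the description above.

```python
def trimmed_range(num_frames, trim_in=0, trim_out=None):
    """Return the zero-based frame indices a clip would load.
    trim_in=5 skips the first four frames and starts on the fifth;
    trim_out=95 stops loading after the 95th frame."""
    start = max(trim_in - 1, 0)  # Trim In treated as 1-based in this sketch
    end = num_frames if trim_out is None else min(trim_out, num_frames)
    return list(range(start, end))

frames = trimmed_range(100, trim_in=5, trim_out=95)
print(frames[0], frames[-1], len(frames))  # 4 94 91
```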
Reverse
Select this checkbox to reverse the footage so that the last frame is played first, and the first frame is
played last.
Loop
Select this checkbox to loop the footage until the end of the project. Any lengthening of the clip using
Hold First/Last Frame or shortening using Trim In/Out is included in the looped clip.
Missing Frames
The Missing Frames menu determines the Loader behavior when a frame is missing or is unable to
load for any reason.
– Fail: The Loader does not output any image unless a frame becomes available. Rendering is
aborted.
– Hold Previous Output: The last valid frame is held until a frame becomes available again. This fails
if no valid frame has been seen—for example, if the first frame is missing.
– Output Black: Outputs a black frame until a valid frame becomes available again.
– Wait: Fusion waits for the frame to become available, checking every few seconds. This is useful
for rendering a composition simultaneously with a 3D render. All rendering ceases until the
frame appears.
Some examples:
Your composition is stored in:
X:\Project\Shot0815\Fusion\Shot0815.comp
Observe how the two dots .. set the directory to go up one folder, much like CD .. in a
Command Shell window.
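The up-one-folder behavior of the two dots can be checked with a quick sketch using the comp path from the example above; the Footage folder and plate filename are hypothetical.

```python
import ntpath  # Windows-style path handling; works on any OS

comp_dir = r"X:\Project\Shot0815\Fusion"
# ".." steps up one folder, so a path relative to the comp's folder
# resolves to a sibling of the Fusion folder.
resolved = ntpath.normpath(ntpath.join(comp_dir, r"..\Footage\plate.0001.exr"))
print(resolved)  # X:\Project\Shot0815\Footage\plate.0001.exr
```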
Import Tab
The Import tab includes settings for the frame format and how to deal with fields, pixel aspect, 3:2 pull
down/pull up conversion, and removing gamma curve types for achieving a linear workflow.
Process Mode
Use this menu to select the Fields Processing mode used by Fusion when loading the image. The Has
Fields checkbox control in the Frame Format preferences determines the default option, and the
default height as well. Available options include:
– Full frames
– NTSC fields
– PAL/HD fields
– PAL/HD fields (reversed)
– NTSC fields (reversed)
The two reversed options load fields in the opposite order and thus result in the fields being spatially
swapped both in time order and in vertical order as well.
Use the Swap Field Dominance checkbox (described below) to swap fields in time only.
Pixel Aspect
This menu is used to determine the image’s pixel aspect ratio.
– From File: The loader conforms to the image aspect detected in the saved file. There are a few
formats that can store aspect information. TIFF, JPEG, and OpenEXR are examples of image
formats that may have the pixel aspect embedded in the file’s header. When no aspect ratio
information is stored in the file, the default frame format method is used.
– Default: Any pixel aspect ratio information stored in the header of the image file is ignored. The
pixel aspect set in the composition’s frame format preferences is used instead.
– Custom: Select this option to override the preferences and set a pixel aspect for the clip manually.
Selecting this button causes the X/Y Pixel Aspect control to appear.
Import Mode
This menu provides options for applying or removing 3:2 pull-down in an image sequence. Pull-down
is a reversible method of repeating frames used to convert 24 fps footage into 30 fps. It is commonly
used to broadcast NTSC versions of films.
– Normal: This passes the image through without applying pull-up or pull-down.
– Pull Up: This removes existing 3:2 pull-down applied to the image sequence, converting from 30
fps to 24 fps.
– Pull Down: The footage has pull-down applied, converting 24 fps footage to 30 fps by creating
five frames out of every four. The process mode of a Loader set to Pull Down should always be
Full Frames.
First Frame
This menu appears when the Import Mode is set to either Pull Up or Pull Down. It is used to determine
which frame of the 3:2 sequence is used as the first frame of the loaded clip.
Invert Alpha
When enabled, the original Alpha channel of the clip is inverted. This may also be used in conjunction
with Make Alpha Solid to set the Alpha to pure black (completely transparent).
Post-Multiply by Alpha
Enabling this option causes the color value of each pixel to be multiplied by the Alpha channel for that
pixel. This option can be used to convert subtractive (non-premultiplied) images to additive
(premultiplied) images.
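Premultiplication is simple per-pixel arithmetic; a minimal sketch of the conversion:

```python
def post_multiply(r, g, b, a):
    """Convert a straight (non-premultiplied) pixel to premultiplied
    by scaling each color channel by its alpha coverage."""
    return (r * a, g * a, b * a, a)

# A half-transparent pure-red pixel:
print(post_multiply(1.0, 0.0, 0.0, 0.5))  # (0.5, 0.0, 0.0, 0.5)
```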
Curve Type
This menu is used to determine the gamma curve of the footage. Once the Gamma Curve Type is set,
you can choose to remove the curve to help achieve a linear workflow.
– Auto: Passes along any metadata that might be in the incoming image.
– Space: Allows the user to set the gamma curve based on the recording device used to capture
content or software settings used when rendering the content in another application.
– Log: Displays the Log/Lin settings, similar to the Cineon Log node. For more information on the
Log settings, refer to Chapter 38, “Film Nodes” in the Fusion Reference Manual or Chapter 99 in
the DaVinci Resolve Reference Manual.
Remove Curve
Depending on the selected Curve Type or on the Gamma Space found in Auto mode, the associated
Gamma Curve is removed from, or a log-lin conversion is performed on, the material, effectively
converting it to a linear output space.
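As an illustration of what removing a curve means for a linear workflow, here is a sketch using a plain 2.2 power function as a stand-in for whatever curve the clip actually carries (real footage typically uses sRGB, Rec. 709, or a log curve instead):

```python
def remove_gamma(value, gamma=2.2):
    """Convert a gamma-encoded value to linear light."""
    return value ** gamma

def apply_gamma(value, gamma=2.2):
    """Re-encode a linear value for display or delivery."""
    return value ** (1.0 / gamma)

encoded = 0.5
linear = remove_gamma(encoded)     # encoded mid-gray is much darker in linear
roundtrip = apply_gamma(linear)    # removal and application cancel out
print(round(linear, 3), round(roundtrip, 3))  # 0.218 0.5
```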
Format Tab
The Format tab contains file format-specific controls that dynamically change based on the selected
Loader and the file it links to. Some formats contain a single control or no controls at all. Others like
Camera RAW formats contain RAW-specific debayering controls. A partial format list is provided below
for reference.
– OpenEXR: EXR provides a compact and flexible format to support high dynamic range images
(float). The format also supports a variety of extra channels and metadata. The Format tab for
OpenEXR files provides a mechanism for mapping any non-RGBA channels to the channels
supported natively in Fusion. Using the Format tab, you can enter the name of a channel contained
in the OpenEXR file into any of the edit boxes next to the Fusion channel name. A command line
utility for dumping the names of the channels can be found at https://www.openexr.com.
– QuickTime: QuickTime files can potentially contain multiple tracks. Use the format options to
select one of the tracks.
– Cinema DNG: CinemaDNG is an open format capable of high-resolution raw image data with a
wide dynamic range. It was one of the formats recorded by Blackmagic Design cameras before
switching over to the BRAW format.
– Photoshop PSD Format: Fusion can load any one of the individual layers stored in the PSD file, or
the completed image with all layers. Transformation and adjustment layers are not supported. To
load all layers in a PSD file with appropriate blend modes, use File > Import > PSD.
Common Controls
Settings Tab
The Settings tab controls are common to both Loader and Saver nodes, so their descriptions can be
found in “The Common Controls” section at the end of this chapter.
MediaIn
The MediaIn node is the foundation of every composition you create in DaVinci Resolve’s Fusion page.
In most cases, it replaces the Loader node used in Fusion Studio for importing clips. There are four
ways to add a MediaIn node to the Node Editor.
– In the Edit or Cut page, position the playhead over a clip in the Timeline, and then click the Fusion
page button. The clip from the Edit or Cut page Timeline is represented as a MediaIn node in the
Node Editor.
– Drag clips from the Media Pool into the Node Editor, creating a MediaIn node in the Node Editor.
– Drag clips from an OS window directly into the Node Editor, creating a MediaIn node
in the Node Editor.
– Choose Fusion > Import > PSD when importing PSD files into the Node Editor. Each PSD layer is
imported as a separate MediaIn node.
NOTE: Although a MediaIn tool is located in the I/O section of the Effects Library, it is not
used as a method to import clips.
When clips are brought in from the Media Pool, dragged from the OS window, or via the Import
PSD menu option, you can use the MediaIn node’s Inspector for trimming, looping, and extending the
footage, as well as setting the source’s color and gamma space.
Inputs
The single input on the MediaIn node is for an effect mask to crop the image brought in by
the MediaIn.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the
source image to appear only within the mask. An effects mask is applied to the tool after the
tool is processed.
Inspector
Image Tab
When brought in from the Media Pool or dragged from an OS window, the MediaIn node’s Image tab
includes controls for trimming, creating a freeze frame, looping, and reversing the clip. You can also
reselect the clip the MediaIn links to on your hard drive. A subset of these controls is available when
the MediaIn node is brought in from the Edit or Cut page Timeline.
MediaID
An ID assigned by DaVinci Resolve for that clip.
Layer
Used to identify the layer in a PSD file or compound clip. When a PSD file is brought in from the Media
Pool, the drop-down menu allows you to select an individual layer for output instead of the entire PSD
composite.
Trim
The Trim range control is used to trim frames from the start or end of a clip. Adjust the Trim In to
remove frames from the start and adjust Trim Out to specify the last frame of the clip. The values used
here are offsets. A value of 5 in Trim In would use the fifth frame in the sequence as the start, ignoring
the first four frames. A value of 95 would stop loading frames after the 95th frame.
Reverse
Select this checkbox to reverse the footage so that the last frame is played first, and the first frame is
played last.
Loop
Select this checkbox to loop the footage until the end of the project. Any lengthening of the clip using
Hold First/Last Frame or shortening using Trim In/Out is included in the looped clip.
Audio Tab
The Inspector for the MediaIn node contains an Audio tab, where you can choose to solo the audio
from the clip or hear all the audio tracks in the Timeline.
If the audio is out of sync when playing back in Fusion, the Audio tab’s Sound Offset wheel allows you
to slip the audio in subframe increments. The slipped audio is only modified in the Fusion page.
All other pages retain the original audio placement.
To hear audio from a clip brought in through the Media Pool, do the following:
1 Select the clip in the Node Editor.
2 In the Inspector, click the Audio tab and select the clip name from the Audio Track
drop-down menu.
If more than one MediaIn node exists in the comp, the audio last selected in the Inspector is
heard. You can use the Speaker icon in the toolbar to switch between the MediaIn node
audio files.
3 Right-click the Speaker icon in the toolbar, then choose the MediaIn for the clip you want to hear.
The audio is updated the next time you play back the composition.
MediaOut
Every composition you create in DaVinci Resolve’s Fusion page must include a MediaOut node. The
MediaOut node sends the final output back to your Timeline on DaVinci Resolve’s Edit or Cut page. In
most cases, it replaces the Saver node used in Fusion Studio for exporting clips.
The composition output by the Fusion page’s MediaOut node is propagated to the Color page via its
source input. If you’ve performed transforms or added plug-ins to that clip in the Edit or Cut page,
those operations are applied to the Fusion page’s output before the handoff to the Color page.
When using Resolve Color Management or ACES, each MediaOut node converts the output image
back to the Timeline color space for handoff to the Color page.
NOTE: Additional MediaOut nodes can be added to the Node Editor from the Effects Library.
Additional MediaOut nodes are used to pass mattes to the Color page.
Inputs
The single input on the MediaOut node is where you connect the final composite image you want
rendered back into the Edit page.
– Input: The orange input is a required input. It accepts any 2D image that you want rendered
back into the Edit page.
MediaOut1 node rendering to the Edit page, and MediaOut2 sending mattes to the Color page
Saver
NOTE: The Saver node in DaVinci Resolve is only used for exporting EXR files.
The Saver node represents the final composition output from Fusion Studio. It is used to render out
movie files or sequential images but can be inserted into a composition at any point to render out
intermediate stages of a composition. A composition can contain any number of Saver nodes for
rendering different branches of a comp as well as different formats.
The Saver node can also be used to add scratch track audio to your composition, which can be heard
during interactive playback.
Inputs
The single input on the Saver node is for the final composition you want to render.
– Image Input: The orange input is used to connect the resulting image you want rendered.
Saver node added to the end of a node tree to render the composition
Inspector
File Tab
The Saver File tab is used to set the location and output format for the rendered file.
Filename
The Filename dialog is used to select the name and path of the rendered image output. Click on the
Browse button to open a file browser and select a location for the output.
Sequence numbering is automatically added to the filename when rendering a sequential image file
format. For example, if c:\renders\image.exr is entered as the filename and 30 frames of output are
rendered, the files are automatically numbered as image0000.exr, image0001.exr, image0002.exr,
and so on. Four-digit padding is automatically used for numbers lower than 10000.
You can specify the number of digits to use for padding by explicitly entering the digits into
the filename.
For example, image000000.exr would apply 6-digit padding to the numeric sequence, image.001.exr
would use 3-digit padding, and image1.exr would use none.
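The padding rule can be sketched as follows; `padded_name` is an illustrative helper, not a Fusion function.

```python
import re

def padded_name(template, frame):
    """Replace the digit run before the extension with the frame number,
    padded to as many digits as the template spells out; if the template
    has no digits, fall back to four-digit padding."""
    m = re.match(r"(.*?)(\d*)(\.\w+)$", template)
    head, digits, ext = m.groups()
    pad = len(digits) if digits else 4
    return f"{head}{frame:0{pad}d}{ext}"

print(padded_name("image000000.exr", 7))  # image000007.exr (6-digit padding)
print(padded_name("image.001.exr", 7))    # image.007.exr (3-digit padding)
print(padded_name("image.exr", 7))        # image0007.exr (default 4 digits)
```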
Output Format
This menu is used to select the image format to be saved. Be aware that selecting a new format from
this menu does not change the extension used in the filename to match. Modify the filename manually
to match the expected extension for that format to avoid a mismatch between name and image format.
NOTE: The High Quality Interactive setting can easily cause confusion when used in
conjunction with a node tree that contains spline-animated parameters. If these splines are
modified in such a way that frames already saved interactively are changed, the frames
already on the disk do not automatically re-render. Either step through each frame again or
perform a final render to make certain that the result is correct.
Frame Offset
This thumbwheel control can be used to set an explicit start frame for the number sequence applied to
the rendered filenames. For example, if Global Start is set to 1 and frames 1-30 are rendered, files are
normally numbered 0001-0030. If the Frame Offset is set to 100, the rendered output would
be numbered from 0100-0129.
Export Tab
Process Mode
Use this menu to select the Fields Processing mode used by Fusion when saving the images or movie
file to disk. The Has Fields checkbox control in the Frame Format preferences determines the default
option, and the default height as well. Available options include:
– Full frames
– NTSC fields
– PAL/HD fields
– PAL/HD fields (reversed)
– NTSC fields (reversed)
The two reversed options save fields in the opposite order and thus result in the fields being spatially
swapped both in time order and in vertical order as well.
Clipping Mode
This menu, sometimes referred to as source image clipping, defines how the edges of the image are
treated.
– Frame: The default Frame setting clips the image to the parts visible within its frame
dimensions. It breaks any infinite-workspace behavior. If the upstream DoD is smaller than the
frame, the remaining area in the frame is treated as black/transparent.
– None: This setting does not perform any source image clipping at all. This means that any data
that would normally be needed outside the upstream DoD is treated as black/transparent. Be
aware that this might create very large images that can consume a considerable amount of disk
space, so you should use this option only when really needed.
For more information about ROI, DoD, and Infinite Workspace, see Chapter 7, “Using Viewers” in the
Fusion Reference Manual or Chapter 68 in the DaVinci Resolve Reference Manual.
Curve Type
This menu is used to select a Gamma curve of the rendered file. Once the gamma curve type is set,
you can choose to apply the curve for output.
– Auto: Passes along any metadata that might be in the incoming image.
– Space: Allows the user to set the gamma curve based on the selected file format.
– Log: Displays the Log/Lin settings, similar to the Cineon Log node. For more detail on the Log
settings, see Chapter 38, “Film Nodes” in the Fusion Reference Manual or Chapter 99 in the
DaVinci Resolve Reference Manual.
Apply Curve
Depending on the selected Curve Type or on the Gamma Space found in Auto mode, the associated
Gamma Curve is applied, effectively converting from a linear working space.
Audio Tab
The audio functionality is included in Fusion Studio for scratch track (aligning effects to audio and clip
timing) purposes only. Final renders should almost always be performed without audio. The smallest
possible audio files should be used, as Fusion loads the entire audio file into memory for efficient
display of the waveform in the Timeline. The audio track is included in the saved file if a QuickTime
file format is selected. Fusion currently supports playback of WAV audio.
Source Filename
You can enter the file path and name of the audio clip you want to use in the Source Filename field.
You can also click the Browse button to open a file browser window and locate the audio scratch track.
Select the WAV file of choice, and then in the keyframes panel expand the Saver bar to view the audio
waveform. Drag the pointer over the audio wave in the Timeline layout to hear the track.
Sound Offset
Drag the control left or right to slide the Timeline position of the audio clip, relative to other nodes in
the Node Editor.
Legal Tab
The Legal tab includes settings for creating “broadcast safe” saturation and video range files
for output.
Video Type
Use this menu to select the standard to be used for broadcast legal color correction. NTSC, NHK, or
PAL/SECAM can be chosen.
Action
Use this menu to choose how Fusion treats illegal colors in the image.
– Adjust to Legal: This causes the images to be saved with legal colors relevant to the
Video Type selected.
Adjust Based On
This menu is used to choose whether Fusion makes the image legal to 75% or 100% amplitude. Very
few broadcast markets permit 100% amplitude, so for the most part this should be left at 75%.
Soft Clip
The Soft Clip control is used to draw values that are out of range back into the image. This is done by
smoothing the conversion curve at the top and bottom of the curve, allowing more values to be
represented.
Format Tab
The Format tab contains information, options, and settings specific to the image format being saved.
The controls for an EXR sequence are entirely different from the ones displayed when a MOV
file is saved.
EXR is displayed above for reference.
When the Saver node is set to DPX, it’s important to understand the Bypass
Conversion > Data is Linear option. When saving log data into a DPX without using the
Saver node’s own lin-log conversion (that is, with Bypass Conversion checked), the Data Is
Linear option should be off. Data Is Linear indicates whether Bypass Conversion was
checked because the data is truly linear, or because it’s already log.
If Data Is Linear is enabled, then the DPX is marked in its header as containing linear data. In
turn, that means that when the DPX is loaded back into Fusion, or into other apps that evaluate
the header, those apps think the data is linear and do not perform any log-lin conversion.
Clipping Mode
This menu, sometimes referred to as source image clipping, defines how the edges of the image are
treated.
– Frame: The default Frame setting clips the image to the parts visible within its frame
dimensions. It breaks any infinite-workspace behavior. If the upstream DoD is smaller than the
frame, the remaining areas in the frame are treated as black/transparent.
– None: This setting does not perform any source image clipping at all. This means that any data
that would normally be needed outside the upstream DoD is treated as black/transparent. Be
aware that this might create very large images that can consume a considerable amount of
disk space, so you should use this option only when really needed.
For more information about ROI, DoD, and Infinite Workspace, see Chapter 7, “Using Viewers” in the
Fusion Reference Manual or Chapter 68 in the DaVinci Resolve Reference Manual.
Curve Type
This menu is used to select a Gamma curve of the rendered file. Once the gamma curve type is set,
you can choose to apply the curve for output.
– Auto: Passes along any metadata that might be in the incoming image.
– Space: Allows the user to set the gamma curve based on the selected file format.
– Log: Displays the Log/Lin settings, similar to the Cineon Log node. For more detail on the Log
settings, see Chapter 38, “Film Nodes” in the Fusion Reference Manual or Chapter 99 in the
DaVinci Resolve Reference Manual.
Apply Curve
Depending on the selected Curve Type or on the Gamma Space found in Auto mode, the associated
Gamma Curve is applied, effectively converting from a linear working space.
The Common Controls
Inspector
Settings Tab
The Settings tab in the Inspector can be found on the Loader, Saver, MediaIn, and MediaOut nodes.
The controls are consistent and work the same way for each tool, although some parameters are only
available on individual nodes; those are also covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
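The blend is a straightforward per-channel linear interpolation; a minimal sketch:

```python
def blend(original, processed, amount):
    """Linearly interpolate between the tool's input and output.
    amount=0.0 returns the untouched input; 1.0 returns the full effect."""
    return original + (processed - original) * amount

print(blend(0.0, 1.0, 0.0))   # 0.0 (effect skipped)
print(blend(0.0, 1.0, 1.0))   # 1.0 (full effect)
print(blend(0.0, 1.0, 0.25))  # 0.25
```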
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels” in the Fusion Reference Manual or Chapter 79 in the DaVinci Resolve
Reference Manual.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
LUT Nodes
This chapter details the LUT nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
File LUT [FLU] 1057
LUT Cube Analyzer [LCA] 1059
LUT Cube Apply [LCP] 1060
LUT Cube Creator [LCC] 1061
The Common Controls 1063
File LUT [FLU]
Inputs
The File LUT node includes two inputs: one for the main image and the other for an effect mask to limit
the area where the LUT is applied.
– Input: This orange input is the only required connection. It accepts a 2D image output that gets
the LUT applied.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the applied LUT to only those pixels within the mask. An effects mask is applied to
the tool after the tool is processed.
A File LUT node applied at the end of a node tree as a colorist’s look
Controls Tab
The Controls tab includes options for loading a LUT and making adjustments to the gain, color space,
and Alpha channel, if one exists.
LUT File
This field is used to enter the path to the LUT file. Clicking the Browse button opens a file browser
window to locate the LUT file instead of entering it manually into the LUT File field. Currently, this node
supports LUTs exported from Fusion in .LUT and .ALUT formats, DaVinci Resolve’s .CUBE format, and
several 3D LUT formats. The node fails with an error message on the Console if it is unable to find or
load the specified file.
Pre-Gain
This slider is a gain adjustment applied before the LUT. This can be useful for pulling in
highlights before the LUT clips them.
Post-Gain
This slider is a gain adjustment after the LUT is applied.
Color Space
This menu is used to change the color space the LUT is applied in. The default is to apply the curves
described in the LUT to the RGB color space, but options for YUV, HLS, HSV, and others are also
available.
Pre-Divide/Post-Multiply
Selecting the Pre-Divide/Post-Multiply checkbox causes the image pixel values to be divided by the
Alpha values before applying the LUT, and then re-multiplied by the Alpha value after the correction.
This helps to prevent the creation of illegally additive images, particularly around the edges of a blue/
green key or when working with 3D-rendered objects.
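The order of operations can be sketched with a stand-in curve in place of a real LUT; the squaring function here is purely for illustration:

```python
def apply_with_predivide(r, a, lut):
    """Divide out alpha, apply the curve to the straight color,
    then re-multiply, so soft premultiplied edges aren't distorted."""
    if a == 0.0:
        return 0.0
    straight = r / a
    return lut(straight) * a

square = lambda v: v * v  # stand-in for a real LUT curve

# A full-value red pixel at 50% alpha stays proportionally correct:
print(apply_with_predivide(0.5, 0.5, square))  # 0.5
# Applying the curve directly to the premultiplied value darkens the edge:
print(square(0.5))  # 0.25
```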
Settings Tab
The Settings tab in the Inspector is also duplicated in other LUT nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
LUT Cube Analyzer [LCA]
Inputs
The LUT Cube Analyzer includes a single orange input.
– Input: The orange input is used to take the output of any node modifying an image that
originated with the LUT Cube Creator.
Generating a LUT starts with the LUT Cube Creator and ends with a LUT Cube Analyzer.
Inspector
Type
Select the desired output format of the 3D LUT.
Filename
Enter the path where you want the file saved and enter the name of the LUT file. Alternatively, you can
click the Browse button to open a file browser to select the location and filename.
Write File
Press this button to generate the 3D LUT file based on the settings above.
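As a rough illustration of what a written 3D LUT file contains, this sketch builds a minimal identity LUT in the .cube text format (a 2-point cube, far too coarse for real use; red varies fastest by convention):

```python
def identity_cube(size=2):
    """Return the text of a minimal identity .cube 3D LUT."""
    lines = [f"LUT_3D_SIZE {size}"]
    s = size - 1
    for b in range(size):
        for g in range(size):
            for r in range(size):  # red varies fastest
                lines.append(f"{r/s:.6f} {g/s:.6f} {b/s:.6f}")
    return "\n".join(lines)

cube = identity_cube(2)
print(cube.splitlines()[0])   # LUT_3D_SIZE 2
print(cube.splitlines()[1])   # 0.000000 0.000000 0.000000
```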
Settings Tab
The Settings tab in the Inspector is also duplicated in other LUT nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
LUT Cube Apply [LCP]
Inputs
The LUT Cube Apply has three inputs: a green input where the output of the LUT Cube Creator is
connected, an orange input for the image to have the LUT applied, and a blue effect mask input.
– Input: This orange input accepts a 2D image that gets the LUT applied.
– Reference Image: The green input is used to connect the output of the LUT Cube Creator or a
node that is modifying the image originating in the LUT Cube Creator.
– Effect Mask: The optional effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the LUT Cube Apply to only those pixels within the mask. An effects mask is applied to
the tool after the tool is processed.
The LUT generated by the LUT Cube Creator is applied to an image using the LUT Cube Apply node.
Inspector
There are no controls for the LUT Cube Apply node. The LUT connected to the green foreground input
is applied to the image connected to the orange background input without having to write an actual
3D LUT using the LUT Cube Analyzer.
Settings Tab
The Settings tab in the Inspector is also duplicated in other LUT nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
LUT Cube Creator [LCC]
Inputs
There are no inputs on the LUT Cube Creator. The purpose of the node is to generate an image that
can be used to create a LUT.
Generating a LUT starts with the LUT Cube Creator and ends with a LUT Cube Analyzer.
Inspector
Controls Tab
The Controls tab creates a test pattern of sorts used to create a 3D LUT. The controls here determine
the complexity of the pattern used to create a LUT using the LUT Cube Analyzer.
Type
The Type menu is used to create a pattern of color cubes.
– Horizontal: Creates a long, horizontal strip representing a color cube.
– Vertical: Creates a long, vertical strip representing a color cube.
– Rect: Creates a rectangular image, as depicted below, representing a color cube.
A Cube image created with the Rect type, and the resulting color cube
NOTE: Higher resolutions yield more accurate results but are also more memory-intensive
and computationally expensive.
Settings Tab
The Settings tab in the Inspector is also duplicated in other LUT nodes. These common controls are
described in the following “The Common Controls” section.
The Common Controls
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the LUT category. The controls are
consistent and work the same way for each tool, although some tools do include one or two individual
options, which are also covered here.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Mask Nodes
This chapter details the Mask nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Bitmap Mask [BMP] 1066
B-Spline Mask [BSP] 1070
Ellipse Mask [ELP] 1074
Mask Paint [PNM] 1077
Polygon Mask [PLY] 1080
Ranges Mask [RNG] 1085
Rectangle Mask [REC] 1090
Triangle Mask [TRI] 1093
Wand Mask [WND] 1096
The Common Controls 1099
Bitmap Mask [BMP]
Inputs
The Bitmap mask node includes two inputs in the Node Editor.
– Input: The orange input accepts a 2D image from which the mask will be created.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines
the masks. How masks are combined is handled in the Paint mode menu in the Inspector.
Bitmap nodes can be chained together for more advanced matte operations.
Controls Tab
The Controls tab is used to refine how the image connected to the orange input converts into the
Bitmap mask.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the Blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in the
mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering the
level of the Circle mask lowers the values of all the pixels in the mask channel, even though
the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause the
edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
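The Multi-box idea described above (repeated box passes converging toward a Gaussian) can be sketched in a few lines. This is an illustrative NumPy sketch of the general technique on a 1D signal, not Fusion's implementation; the radius and pass count are arbitrary example values.

```python
# Approximating a Gaussian soft edge by repeated box blurs,
# the idea behind the Multi-box filter. Illustrative only.
import numpy as np

def box_blur_1d(row, radius):
    """One box-blur pass over a 1D signal (uniform kernel)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(row, kernel, mode="same")

def multi_box_blur_1d(row, radius, passes):
    """Repeated box passes; by the central limit theorem the
    combined kernel approaches a Gaussian as passes increase."""
    out = row.astype(float)
    for _ in range(passes):
        out = box_blur_1d(out, radius)
    return out

edge = np.array([0.0] * 8 + [1.0] * 8)        # a hard mask edge
soft = multi_box_blur_1d(edge, radius=2, passes=4)  # feathered edge
```

With 1 pass this is the Box filter and with 2 passes the Bartlett (triangle) filter, which is why the manual notes those pass counts give identical results.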
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used to
determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
– Merge: Merge is the default for all masks. The new mask is merged with the input mask.
– Add: The mask’s values add to the input mask’s values.
– Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
– Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
– Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
– Average: This calculates the average (half the sum) of the new mask and the input mask.
– Multiply: This multiplies the values of the input mask by the new mask’s values.
– Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
– Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
– Copy: This mode completely discards the input mask and uses the new mask for all values.
– Ignore: This mode completely discards the new mask and uses the input mask for all values.
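The per-pixel arithmetic behind these Paint modes can be sketched as follows. This is a hedged NumPy illustration on mask values in the 0.0 to 1.0 range; the Merge formula in particular is an assumed union-style combination, and Fusion's exact math may differ.

```python
# Sketch of the Paint mode combinations on single-channel masks.
# a = incoming (input) mask, b = the mask created in this node.
import numpy as np

def combine(input_mask, new_mask, mode):
    a, b = input_mask, new_mask
    ops = {
        "Merge":    a + b - a * b,            # assumption: union-style merge
        "Add":      np.clip(a + b, 0.0, 1.0),
        "Subtract": np.clip(a - b, 0.0, 1.0),
        "Minimum":  np.minimum(a, b),
        "Maximum":  np.maximum(a, b),
        "Average":  (a + b) / 2,
        "Multiply": a * b,
        "Replace":  np.where(b > 0, b, a),    # zero areas leave input untouched
        "Invert":   a * (1 - b) + (1 - a) * b,  # gray in b partially inverts a
        "Copy":     b,
        "Ignore":   a,
    }
    return ops[mode]
```

Note how Invert reduces to the identity where the new mask is black and to a full inversion where it is white, matching the "partially inverted" behavior described for gray areas.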
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, this checkbox affects all
pixels, regardless of whether the new mask covers them.
Fit Input
This menu is used to select how the image source is treated if it does not fit the dimensions of the
generated mask.
In the example below, a 720 x 576 image source (yellow) is used to generate a 1920 x 1080
mask (gray).
– Crop: If the image source is smaller than the generated mask, it will be placed according to
the X/Y controls, masking off only a portion of the mask. If the image source is larger than the
generated mask, it will be placed according to the X/Y controls and cropped off at the borders of
the mask.
– Inside: The image source will be scaled uniformly until one of its dimensions (X or Y) fits the
inside dimensions of the mask. Depending on the relative dimensions of the image source and
mask background, either the image source’s width or height may be cropped to fit the respective
dimensions of the mask.
– Width: The image source will be scaled uniformly until its width (X) fits the width of the mask.
Depending on the relative dimensions of the image source and mask, the image source’s Y
dimension might not fit the mask’s Y dimension, resulting in either cropping of the image source in
Y or the image source not covering the mask’s height entirely.
– Height: The image source will be scaled uniformly until its height (Y) fits the height of the mask.
Depending on the relative dimensions of the image source and mask, the image source’s
X-dimension might not fit the mask’s X-dimension, resulting in either cropping of the image source
in X or the image source not covering the mask’s width entirely.
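The uniform scale each Fit Input mode applies can be expressed as a ratio of source and mask dimensions. The sketch below is a hypothetical helper for illustration (it also covers the Outside mode described later for the Ranges mask); Stretch is omitted because it scales X and Y independently rather than uniformly.

```python
# Sketch of the Fit Input scale logic for fitting a source image
# into a mask's dimensions. Illustrative helper, not Fusion code.
def fit_scale(src_w, src_h, mask_w, mask_h, mode):
    if mode == "Crop":
        return 1.0                                  # no scaling; position only
    if mode == "Width":
        return mask_w / src_w                       # X always fits
    if mode == "Height":
        return mask_h / src_h                       # Y always fits
    if mode == "Inside":
        return min(mask_w / src_w, mask_h / src_h)  # whole source fits inside
    if mode == "Outside":
        return max(mask_w / src_w, mask_h / src_h)  # source covers the mask
    raise ValueError(mode)

# The 720 x 576 source into a 1920 x 1080 mask from the example above:
fit_scale(720, 576, 1920, 1080, "Inside")   # -> 1.875 (1080/576)
```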
Center X and Y
These controls adjust the position of the Bitmap mask.
Channel
The Channel menu determines the Channel of the input image used to create the mask. Choices
include the red, green, blue, and alpha channels, the hue, luminance, or saturation values, or the
auxiliary coverage channel of the input image (if one is provided).
Threshold Low/High
The Threshold range control can be used to clip the bitmap image. Increasing the low range control
will clip pixels below the specified value to black (0.0). Decreasing the high range control will force
pixels higher than the specified value to white (1.0).
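Channel selection and threshold clipping together can be sketched as below. This is an illustrative NumPy version; the Rec. 709 luminance weights are an assumption for the Luminance option, not a statement of what Fusion uses internally.

```python
# Sketch: building a bitmap mask from one channel of an image and
# clipping it with Threshold Low/High, as described above.
import numpy as np

def bitmap_mask(rgb, channel="Luminance", low=0.0, high=1.0):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if channel == "Luminance":
        m = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 weights (assumed)
    else:
        m = rgb[..., {"Red": 0, "Green": 1, "Blue": 2}[channel]]
    m = np.where(m < low, 0.0, m)    # raising Low clips dark pixels to black
    m = np.where(m > high, 1.0, m)   # lowering High forces bright pixels to white
    return m
```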
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other Mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
B-Spline Mask [BSP]
A B-Spline mask is identical to a Polygon mask in all respects except one. Where Polygon masks use
Bézier splines, this mask node uses B-Splines. Where Bézier splines employ a central point and two
handles to manage the smoothing of the spline segment, a B-Spline requires only a single point. This
means that a B-Spline shape requires far fewer control points to create a nicely smoothed shape.
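The difference can be seen in the math: a uniform cubic B-Spline segment is fully determined by four neighboring single points, with no handles. The sketch below evaluates one such segment; it is a standard textbook formulation for illustration, not Fusion's spline code.

```python
# Evaluating a uniform cubic B-Spline segment from four control
# points, showing why no per-point handles are needed.
def bspline_point(p0, p1, p2, p3, t):
    """Point on the segment for t in [0, 1]; the four basis
    weights always sum to 1, so the curve stays near its points."""
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    b3 = t**3 / 6
    return tuple(b0 * w + b1 * x + b2 * y + b3 * z
                 for w, x, y, z in zip(p0, p1, p2, p3))
```

Because the basis weights never reach 1 at an interior point, a B-Spline smooths automatically and approximates (rather than passes through) its control points, which is why far fewer points are needed for a smooth shape.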
When first added to a node, the B-Spline mask consists of only a Center control, which is visible
onscreen. Points are added to the B-Spline by clicking in the viewer. Each new point is connected to
the one created before it.
Inputs
The B-Spline mask node includes a single effect mask input.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines
the masks. How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in the
mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering the
level of the Circle mask lowers the values of all of the pixels in the mask channel, even though
the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause the
edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used to
determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
– Merge: Merge is the default for all masks. The new mask is merged with the input mask.
– Add: The mask’s values add to the input mask’s values.
– Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
– Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
– Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
– Average: This calculates the average (half the sum) of the new mask and the input mask.
– Multiply: This multiplies the values of the input mask by the new mask’s values.
– Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
– Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
– Copy: This mode completely discards the input mask and uses the new mask for all values.
– Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Solid
When the Solid checkbox is enabled, the mask is filled to be transparent (white) unless inverted. When
disabled, the spline is drawn as just an outline whose thickness is determined by the Border
Width slider.
Center X and Y
These controls adjust the position of the B-Spline mask.
Size
Use the Size control to adjust the scale of the B-Spline effect mask, without affecting the relative
behavior of the points that compose the mask or setting a keyframe in the mask animation.
X, Y, and Z Rotation
Use these three controls to adjust the rotation angle of the mask along any axis.
Fill Method
The Fill Method menu offers two different techniques for dealing with overlapping regions of a
polyline. If overlapping segments in a mask are causing undesirable holes to appear, try switching the
setting of this control from Alternate to Non Zero Winding.
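The two fill rules differ only in how they count ray crossings, which is why switching rules can open or close holes in self-overlapping shapes. The sketch below is a standard illustrative point-in-polygon test, not Fusion's renderer.

```python
# Sketch of the two fill rules: "Alternate" (even-odd) vs
# "Non Zero Winding", deciding whether a point is inside a polygon.
def crossings(poly, x, y):
    """Signed crossings of a horizontal ray from (x, y) to +infinity."""
    signed = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge spans the ray
            xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if xi > x:
                signed.append(1 if y2 > y1 else -1)  # crossing direction
    return signed

def inside(poly, x, y, rule="Alternate"):
    s = crossings(poly, x, y)
    if rule == "Alternate":
        return len(s) % 2 == 1     # even-odd: overlaps cancel, creating holes
    return sum(s) != 0             # non-zero winding: same-direction overlaps fill
```

A region traced twice in the same direction has two same-signed crossings: even-odd calls it a hole, while non-zero winding keeps it filled, which is exactly the behavior the paragraph above suggests exploiting.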
Adding Points
Adding Points to a B-Spline effect mask is relatively simple. Immediately after adding the node to the
Node Editor, there are no points, but the tool will be in Click Append mode. Click once in the viewer
wherever a point is required for the mask. Continue clicking to draw the shape of the mask.
When the shape is complete, click on the initial point again to close the mask.
When the shape is closed, the mode of the polyline changes to Insert and Modify. This allows you to
add and adjust additional points on the mask by clicking the spline segments. To lock down the mask’s
shape and prevent accidental changes, switch the polyline to Done mode.
B-Spline Toolbar
When a B-Spline mask is selected in the Node Editor, a toolbar appears above the viewer with buttons
for easy access to the modes. Position the pointer over any button in the toolbar to display a tooltip
that describes that button’s function.
You can change the way the toolbar is displayed by right-clicking on the toolbar and selecting from the
options displayed in the toolbar’s contextual menu.
The functions of the buttons in this toolbar are explained in depth in the Polylines section.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Ellipse Mask [ELP]
Inputs
The Ellipse mask node includes a single effect mask input.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines
the masks. How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the ellipse appears after drawing it in the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the Blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in the
mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering the
level of the Circle mask lowers the values of all of the pixels in the mask channel, even though
the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
– Box: This is the fastest method but at reduced quality. Box is best suited for
minimal amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause the
edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used to
determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
– Merge: Merge is the default for all masks. The new mask is merged with the input mask.
– Add: The mask’s values add to the input mask’s values.
– Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
– Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
– Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
– Average: This calculates the average (half the sum) of the new mask and the input mask.
– Multiply: This multiplies the values of the input mask by the new mask’s values.
– Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
– Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
– Copy: This mode completely discards the input mask and uses the new mask for all values.
– Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Solid
When the Solid checkbox is enabled, the mask is filled to be transparent (white) unless inverted. When
disabled, the spline is drawn as just an outline whose thickness is determined by the Border
Width slider.
Center X and Y
These controls adjust the position of the Ellipse mask.
Width
This control allows independent control of the ellipse mask’s Width. In addition to the slider in the
mask’s controls, interactively drag the width (left or right edge) of the mask on the viewer using the
pointer. Any changes will be reflected in this control.
Height
Height allows independent control of the ellipse mask’s height. In addition to the slider in the mask’s
controls, interactively drag the height (top or bottom edge) of the mask in the viewer using the pointer.
Any changes will be reflected in this control.
To change the mask’s size without affecting the aspect ratio, drag the onscreen control between the
edges (diagonal). This will modify both the width and height proportionately.
Angle
Change the rotational angle of the mask by moving the Angle control left or right. Values can be
entered into the number fields provided. Alternately, use the onscreen controls by dragging the little
circle at the end of the dashed angle line to interactively adjust the rotation of the ellipse.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Mask Paint [PNM]
Inputs
The Paint mask node includes a single effect mask input.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines
the masks. How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Mask Tab
The Mask tab is used to refine the basic mask parameters that do not fall into the category of
“painting.” These include how multiple masks are combined, overall softness control, and level control.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in the
mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering the
level of the Circle mask lowers the values of all of the pixels in the mask channel, even though
the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used to
determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
– Merge: Merge is the default for all masks. The new mask is merged with the input mask.
– Add: The mask’s values add to the input mask’s values.
– Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
– Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
– Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
– Average: This calculates the average (half the sum) of the new mask and the input mask.
– Multiply: This multiplies the values of the input mask by the new mask’s values.
– Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
– Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
– Copy: This mode completely discards the input mask and uses the new mask for all values.
– Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Polygon Mask [PLY]
Inputs
The Polygon mask node includes a single effect mask input.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines
the masks. How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the polyline appears after drawing it in the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in the
mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering the
level of the Circle mask lowers the values of all of the pixels in the mask channel, even though
the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause the
edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used to
determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
– Merge: Merge is the default for all masks. The new mask is merged with the input mask.
– Add: The mask’s values add to the input mask’s values.
– Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
– Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
– Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
– Average: This calculates the average (half the sum) of the new mask and the input mask.
– Multiply: This multiplies the values of the input mask by the new mask’s values.
– Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
– Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
– Copy: This mode completely discards the input mask and uses the new mask for all values.
– Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Solid
When the Solid checkbox is enabled, the mask is filled to be transparent (white) unless inverted. When
disabled, the spline is drawn as just an outline whose thickness is determined by the Border
Width slider.
Center X and Y
These controls adjust the position of the polygon spline mask.
Size
Use the Size control to adjust the scale of the polygon spline effect mask, without affecting the relative
behavior of the points that compose the mask or setting a keyframe in the mask animation.
X, Y, and Z Rotation
Use these three controls to adjust the rotation angle of the mask along any axis.
Fill Method
The Fill Method menu offers two different techniques for dealing with overlapping regions of a
polyline. If overlapping segments in a mask are causing undesirable holes to appear, try switching the
setting of this control from Alternate to Non Zero Winding.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Polyline Toolbar
When a Polygon (or B-Spline) mask is added to a node, a toolbar appears above the viewer, offering
easy access to modes. Hold the pointer over any button in the toolbar to display a tooltip that
describes that button’s function.
– Click: Click is the default option when creating a polyline (or B-Spline) mask. It is a Bézier style
drawing tool. Clicking sets a control point and appends the next control point when you click again
in a different location.
– Draw: Draw is a freehand drawing tool. It creates a mask similar to drawing with a pencil on paper.
You can create a new mask using the Draw tool, or you can extend an existing open spline by
clicking the Draw tool and starting to draw from the last control point.
– Insert: Insert adds a new control point along the spline.
– Modify: Modify allows you to safely move or smooth any existing point along a spline without
worrying about adding new points accidentally.
– Done: Prevents any point along the spline from being moved or modified. Also, new points cannot
be added. You can, however, move and rotate the entire spline.
– Closed: Closes an open spline.
– Smooth: Changes the selected control point from a linear to a smooth curve.
– Linear: Changes the selected control point from a smooth curve to linear.
– Select All: Selects all the control points on the spline.
– Keys: Shows or hides the control points along the spline.
– Handles: Shows or hides the Bézier handles along the polyline.
– Shape: Places a reshape rectangle around the selected spline shape. Using the reshape
rectangle, you can deform groups of control points or entire shapes much easier than modifying
each point.
– Delete: Deletes the selected control point(s).
– Reduce: Opens a Freehand precision window that can be used to reduce the number of control
points on a spline. This can make the paint stroke easier to modify, especially if it has been
created using the Draw tool.
– Publish menu: You can use the publish menu to select between publishing the control points or
the path. Publishing is a form of parameter linking; it makes the selected item available for use by
other controls. It also allows you to attach a control point to a tracker.
– Follow Points: Allows a selected point to follow the path of a published point. The point follows
the published point using an offset position.
Change the way the toolbar is displayed by right-clicking on the toolbar and selecting from the options
displayed in the toolbar’s contextual menu. The functions of the buttons in this toolbar are explained in
depth in the Polylines chapter.
Ranges Mask [RNG]
Inspector
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in the
mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering the
level of the Circle mask lowers the values of all of the pixels in the mask channel, even though
the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause the
edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used to
determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
– Merge: Merge is the default for all masks. The new mask is merged with the input mask.
– Add: The mask’s values add to the input mask’s values.
– Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
– Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
– Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
– Average: This calculates the average (half the sum) of the new mask and the input mask.
– Multiply: This multiplies the values of the input mask by the new mask’s values.
– Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
– Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
– Copy: This mode completely discards the input mask and uses the new mask for all values.
– Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Center X and Y
These controls adjust the position of the ranges mask.
Fit Input
This menu is used to select how the image source is treated if it does not fit the dimensions of the
generated mask.
For example, below, a 720 x 576 image source (yellow) is used to generate a 1920 x 1080 mask (gray).
– Crop: If the image source is smaller than the generated mask, it is placed according to the X/Y
controls, masking off only a portion of the mask. If the image source is larger than the generated
mask it is placed according to the X/Y controls and cropped off at the borders of the mask.
– Stretch: The image source is stretched in X and Y to accommodate the full dimensions of the
generated mask. This might lead to visible distortions of the image source.
– Inside: The image source is scaled uniformly until one of its dimensions (X or Y) fits the inside
dimensions of the mask. Depending on the relative dimensions of the image source and mask
background, either the image source’s width or height may be cropped to fit the respective
dimension of the mask.
– Height: The image source is scaled uniformly until its height (Y) fits the height of the mask.
Depending on the relative dimensions of the image source and mask, the image source’s X
dimension might not fit the mask’s X dimension, resulting in either cropping of the image source in
X or the image source not covering the mask’s width entirely.
– Outside: The image source is scaled uniformly until one of its dimensions (X or Y) fits the outside
dimensions of the mask. Depending on the relative dimensions of the image source and mask,
either the image source’s width or height may be cropped or not fit the respective dimension of
the mask.
Channel
The Channel menu determines the Channel of the input image used to create the mask. Choices
include the red, green, blue, and alpha channels; the hue, luminance, or saturation values; or the
auxiliary coverage channel of the input image (if one is provided).
Shadows/Midtones/Highlights
These buttons are used to select which range is output by the node as a mask. White pixels represent
pixels that are considered to be part of the range, and black pixels are not included in the range. For
example, choosing Shadows would show pixels considered to be shadows as white, and pixels that
are not shadows as black. Mid gray pixels are only partly in the range and do not receive the full effect
of any color adjustments to that range.
Presets
This sets the splines to two commonly-used configurations. The Simple button gives a straightforward
linear-weighted selection, while the Smooth button uses a more natural falloff.
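One way to picture the range selection and the Smooth preset's natural falloff is as a soft weight over luminance. The sketch below is a hypothetical illustration: the smoothstep falloff and the 0.33/0.66 breakpoints are assumptions, not Fusion's actual spline shapes.

```python
# Sketch: soft shadows/midtones/highlights weights from luminance,
# in the spirit of the Ranges mask. Breakpoints are illustrative.
def smoothstep(e0, e1, x):
    """Hermite falloff between e0 and e1 (the 'natural' curve)."""
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3 - 2 * t)

def range_weight(lum, rng):
    """White (1.0) where the pixel is fully in the range,
    black (0.0) where it is not; gray means partial membership."""
    if rng == "Shadows":
        return 1.0 - smoothstep(0.0, 0.66, lum)   # dark pixels -> white
    if rng == "Highlights":
        return smoothstep(0.33, 1.0, lum)         # bright pixels -> white
    # Midtones: whatever shadows and highlights do not claim
    return 1.0 - range_weight(lum, "Shadows") - range_weight(lum, "Highlights")
```

By construction the three weights sum to 1 at every luminance, so a color adjustment split across the three ranges covers each pixel exactly once, with mid-gray pixels receiving only a partial effect as described above.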
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Rectangle Mask [REC]
Inputs
The Rectangle mask node includes a single effect mask input.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines
the masks. How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the rectangle appears after drawing it in the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the Blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in the
mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering the
level of the Circle mask lowers the values of all the pixels in the mask channel, even though
the Rectangle mask beneath it is still opaque.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause the
edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp,
well-defined edge.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used to
determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
– Merge: Merge is the default for all masks. The new mask is merged with the input mask.
– Add: The mask’s values add to the input mask’s values.
– Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
– Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
– Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
– Average: This calculates the average (half the sum) of the new mask and the input mask.
– Multiply: This multiplies the values of the input mask by the new mask’s values.
– Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
– Invert: Areas of the input mask that are covered by the new mask are inverted: white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
– Copy: This mode completely discards the input mask and uses the new mask for all values.
– Ignore: This mode completely discards the new mask and uses the input mask for all values.
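Most of these modes are simple per-pixel arithmetic between the input mask value and the new mask value. The sketch below illustrates that math only, with mask values as normalized 0.0–1.0 floats; it is not Fusion's implementation, and the Invert formula is one plausible reading of "partially inverted":

```python
# Per-pixel combination of an input mask value (a) with the new mask
# value (b). Values are normalized floats in the 0.0-1.0 range; the
# result is clamped back into that range.
def combine(a, b, mode):
    ops = {
        "Add":      lambda: a + b,
        "Subtract": lambda: a - b,
        "Minimum":  lambda: min(a, b),
        "Maximum":  lambda: max(a, b),
        "Average":  lambda: (a + b) / 2,
        "Multiply": lambda: a * b,
        # One reading of "partially inverted": blend a toward 1-a by b.
        "Invert":   lambda: a + b * (1 - 2 * a),
        "Copy":     lambda: b,
        "Ignore":   lambda: a,
    }
    return min(1.0, max(0.0, ops[mode]()))
```

For example, `combine(0.5, 0.5, "Add")` clips to 1.0, while Average of the same two values returns 0.5.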
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, this checkbox affects all
pixels, regardless of whether the new mask covers them.
Center X and Y
These controls adjust the position of the Rectangle mask.
Corner Radius
Corner Radius allows the corners of the Rectangle mask to be rounded. A value of 0.0 applies no
rounding at all, so the rectangle has sharp corners. A value of 1.0 applies the maximum amount of
rounding to the corners.
Angle
Change the rotation angle of an effect mask by moving the Angle control left or right. Values can be
entered in the provided input boxes. Alternatively, use the onscreen controls by dragging the little
circle at the end of the dashed angle line to interactively adjust the rotation of the rectangle.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Triangle mask node includes a single effect mask input.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines
the masks. How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the triangle appears after drawing it in the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the Blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in the
mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering the
level of the Circle mask lowers the values of all the pixels in the mask channel, even though
the Rectangle mask beneath it is still opaque.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause the
edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp,
well-defined edge.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used to
determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
– Merge: Merge is the default for all masks. The new mask is merged with the input mask.
– Add: The mask’s values add to the input mask’s values.
– Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
– Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
– Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
– Average: This calculates the average (half the sum) of the new mask and the input mask.
– Multiply: This multiplies the values of the input mask by the new mask’s values.
– Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
– Invert: Areas of the input mask that are covered by the new mask are inverted: white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
– Copy: This mode completely discards the input mask and uses the new mask for all values.
– Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, this checkbox affects all
pixels, regardless of whether the new mask covers them.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Wand mask node includes two inputs in the Node Editor.
– Input: The orange input accepts a 2D image from which the mask is created.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines
the masks. How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the mask appears after the Wand makes a selection in
the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the Blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in the
mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering the
level of the Circle mask lowers the values of all the pixels in the mask channel, even though
the Rectangle mask beneath it is still opaque.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause the
edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used to
determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
– Merge: Merge is the default for all masks. The new mask is merged with the input mask.
– Add: The mask’s values add to the input mask’s values.
– Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
– Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
– Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
– Average: This calculates the average (half the sum) of the new mask and the input mask.
– Multiply: This multiplies the values of the input mask by the new mask’s values.
– Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
– Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
– Copy: This mode completely discards the input mask and uses the new mask for all values.
– Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, this checkbox affects all
pixels, regardless of whether the new mask covers them.
Selection Point
The Selection Point is a pair of X and Y coordinates that determines where in the source image the
Wand mask derives its initial color sample. This control is also seen as a crosshair in the viewers. The
selection point can be positioned manually, or connected to a tracker, a path, or an expression.
Channel
The Channel button group is used to select whether the color that is masked comes from all three
color channels of the image, the alpha channel, or an individual channel only.
The exact labels of the buttons depend on the color space selected for the Wand mask operation.
If the color space is RGB, the options are R, G, or B. If YUV is the color space, the options are Y, U, or V.
Range
The Range slider controls the range of colors around the source color that are included in the mask.
If the value is left at 0.0, only pixels of the same color as the source are considered part of the mask.
The higher the value, the more that similar colors in the source are considered to be wholly part
of the mask.
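The effect of the Range slider can be pictured as a per-pixel threshold on the distance from the sampled value. The following is an illustrative sketch only (single-channel image, hypothetical function names, and no contiguity constraint), not the Wand mask's actual algorithm:

```python
# Wand-style mask from a single-channel image: pixels whose value is
# within `range_` of the value sampled at the selection point become
# part of the mask. Names and structure are illustrative; a real wand
# selection may also require the region to be contiguous.
def wand_mask(image, selection_point, range_=0.0):
    x, y = selection_point
    sample = image[y][x]
    return [[1.0 if abs(value - sample) <= range_ else 0.0 for value in row]
            for row in image]
```

With `range_` at 0.0, only pixels exactly matching the sample are selected; raising it admits increasingly dissimilar values.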
Inspector
Image Tab
The controls in this tab set the resolution and clipping method used by the generated mask.
Output Size
The Output Size menu sets the resolution of the mask node’s output. The three options include the
default resolution of the comp, the source input’s resolution on nodes that have an input, or a custom
resolution.
Custom
When selecting Custom from the Output Size menu, the width, height, and pixel aspect of the mask
created are locked to values defined in the composition’s Frame Format preferences. If the Frame
Format preferences change, the resolution of the mask produced is changed to match. Disabling this
option can be useful for building a composition at a different resolution than the eventual target
resolution for the final render.
NOTE: Right-click on the Width, Height, or Pixel Aspect controls to display a menu listing the
file formats defined in the preferences Frame Format tab. Selecting any of the listed options
sets the width, height, and pixel aspect to the values for that format.
Clipping Mode
This option determines how the domain of definition rendering handles edges. The Clipping mode is
most important when blur or softness is applied, which may require samples from portions of the
image outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If
the upstream DoD is smaller than the frame, the remaining area in the frame is treated as
black/transparent.
– None: Setting this option to None does not perform any source image clipping. Any data required
to process the node’s effect that would usually be outside the upstream DoD is treated as
black/transparent.
Motion Blur
– Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
– Quality: Quality determines the number of samples used to create the blur. A quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
– Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
– Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
– Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
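The relationship between these controls can be sketched numerically. In the sketch below (illustrative math only, not Fusion's internals), a 360-degree shutter keeps the shutter open for one full frame, Quality 2 produces two samples on either side of the motion, and Center Bias shifts the center of the sample window:

```python
# Sample times for motion blur around a given frame. A shutter angle of
# 360 degrees corresponds to the shutter being open for one full frame;
# `quality` samples are taken to either side of the motion; `center_bias`
# shifts the center of the blur window (assumed range -1.0 to 1.0).
def sample_times(frame, quality=2, shutter_angle=180.0, center_bias=0.0):
    window = shutter_angle / 360.0      # fraction of a frame the shutter is open
    n = quality * 2                     # samples to either side of the motion
    half = window / 2.0
    center = frame + center_bias * half # bias moves the blur's center
    step = window / n
    return [center - half + step * (i + 0.5) for i in range(n)]
```

With a 360-degree shutter and no bias, the samples straddle the frame time symmetrically across one full frame interval.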
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware, and Auto uses a
capable GPU if one is available and falls back to software rendering when a capable GPU is
not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Matte Nodes
This chapter details the Matte nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Alpha Divide [ADV] 1103
Alpha Multiply [AML] 1104
Chroma Keyer [CKY] 1105
Clean Plate 1110
Delta Keyer 1113
Difference Keyer [DfK] 1121
Luma Keyer [LKY] 1125
Matte Control [MAT] 1128
Primatte [Pri] 1133
How to Key with Primatte 1144
Ultra Keyer [UKY] 1146
The Common Controls 1152
Inputs
The Alpha Divide node includes two inputs in the Node Editor.
– Input: The orange input accepts a 2D image with a premultiplied Alpha.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits
the pixels where the Alpha divide occurs. An effects mask is applied to the tool after
the tool is processed.
An Alpha Divide node is inserted before color correcting an image with premultiplied alpha.
Inspector
This node has no controls.
Inputs
The Alpha Multiply node includes two inputs in the Node Editor.
– Input: The orange input accepts a 2D image with a “straight” or non-premultiplied alpha.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits
the pixels where the Alpha multiply occurs. An effects mask is applied to the tool after the
tool is processed.
An Alpha Multiply node is inserted after color correcting an image with premultiplied alpha.
Inspector
This node has no controls.
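The two nodes above are exact inverses, which is why they bracket a color correction. A minimal sketch of the round trip (illustrative only; zero-alpha pixels are skipped to avoid division by zero):

```python
# Color correcting premultiplied pixels: divide out the alpha, adjust
# the color, then multiply it back in.
def alpha_divide(r, g, b, a):
    # Zero-alpha pixels are left untouched to avoid dividing by zero.
    return (r, g, b, a) if a == 0.0 else (r / a, g / a, b / a, a)

def alpha_multiply(r, g, b, a):
    return (r * a, g * a, b * a, a)

# Example: a 50%-transparent premultiplied mid-gray pixel, brightened
# by a 2x gain between the divide and the multiply.
r, g, b, a = alpha_divide(0.25, 0.25, 0.25, 0.5)   # straight 0.5 gray
r, g, b = (min(1.0, c * 2) for c in (r, g, b))     # the color correction
pixel = alpha_multiply(r, g, b, a)
```

Applying the gain directly to the premultiplied values would also brighten the alpha-weighted edges, which is exactly the artifact this round trip avoids.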
NOTE: When working with blue- or green-screen shots, it is best to use the Delta Keyer or
Primatte node, rather than the more general purpose Chroma Keyer node.
Inputs
The Chroma Keyer node includes four inputs in the Node Editor.
– Input: The orange input accepts a 2D image that contains the color you want to be
keyed for transparency.
– Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes
areas of the image that fall within the matte to be made transparent. The garbage matte is
applied directly to the alpha channel of the image.
– Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas
of the image that fall within the matte to be fully opaque.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits
the pixels where the keying occurs. An effects mask is applied to the tool after the
tool is processed.
Key Type
The Key Type menu determines the selection method used for the matte creation.
– Chroma: The Chroma method creates a matte based on the RGB values of the
selected color range.
– Color: The Color method creates a matte based on the hue of the selected color range.
Color Range
Colors are made transparent by selecting the Chroma Keyer node in the node tree, and then dragging
a selection around the colors in the viewer. The range controls update automatically to represent the
current color selection. You can tweak the range sliders slightly, although most often selecting colors
in the viewer is all that is required.
Soft Range
This control softens the selected color range, adding additional colors into the matte.
Spill Color
This menu selects the color used as the base for all spill suppression techniques.
Spill Suppression
This slider sets the amount of spill suppression applied to the foreground subject.
When this slider is set to 0, no spill suppression is applied.
Spill Method
This menu selects the strength of the algorithm used to apply spill suppression to the image.
– None: None is selected when no spill suppression is required.
– Rare: This removes very little of the spill color and is the lightest of all methods.
– Medium: This works best for green screens.
– Well Done: This works best for blue screens.
– Burnt: This works best for blue screens. Use this mode only for very troublesome shots. Most likely
you will have to add strong color correction after the key to get, for example, your skin tones back.
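A classic form of green-spill suppression, shown here purely as an illustration rather than the Chroma Keyer's actual methods, clamps the green channel against the other two; the method strengths above can be thought of as how hard that clamp is applied:

```python
# A classic green-spill limit: clamp green so it cannot exceed a blend
# of the red and blue channels. `strength` of 0.0 leaves the pixel
# alone; 1.0 clamps fully (illustrative, not Fusion's algorithm).
def suppress_green(r, g, b, strength=1.0):
    limit = (r + b) / 2.0
    if g > limit:
        g = g - (g - limit) * strength
    return (r, g, b)
```

Pixels with no excess green pass through unchanged, which is why spill suppression mainly affects the semitransparent fringe where screen color bleeds into the subject.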
Fringe Gamma
This control is used to adjust the brightness of the fringe or halo that surrounds the keyed image.
Fringe Size
This expands and contracts the size of the fringe or halo surrounding the keyed image.
Fringe Shape
Fringe Shape forces the fringe toward the external edge of the image or toward the inner edge of the
fringe. Its effect is most noticeable while the Fringe Size slider’s value is large.
Filter
This control selects the filtering algorithm used when applying blur to the matte.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
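The Multi-box behavior follows from a general property of box filters: repeating a box blur converges on a Gaussian. A one-dimensional sketch (illustrative only):

```python
# Repeated box blur converges on a Gaussian: one pass is Box, two
# passes give a Bartlett (triangle) response, and four or more passes
# are close to Gaussian.
def box_blur_1d(values, radius=1):
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

def multi_box(values, radius=1, passes=4):
    for _ in range(passes):
        values = box_blur_1d(values, radius)
    return values
```

Blurring a single impulse shows the progression: one pass yields a flat plateau, while four passes produce a smooth, bell-shaped falloff.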
Blur
Matte Blur blurs the edge of the matte based on the Filter menu setting. A value of zero results in a
sharp, cutout-like hard edge. The higher the value, the more blur applied to the matte.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering.
This is profoundly important when blurring the matte, which may require samples from portions of the
image outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame will be treated as
black/transparent.
– Domain: Setting this option to Domain will respect the upstream domain of definition when
applying the node’s effect. This can have adverse clipping effects in situations where the node
employs a large filter.
– None: Setting this option to None will not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD will be treated as black/transparent.
Contract/Expand
This slider shrinks or grows the semitransparent areas of the matte. Values above 0.0 expand the
matte, while values below 0.0 contract it.
This control is usually used in conjunction with the Matte Blur to take the hard edge of a matte and
reduce fringing. Since this control affects only semitransparent areas, it will have no effect on a matte’s
hard edge.
Gamma
Matte Gamma raises or lowers the values of the matte in the semitransparent areas. Higher values
cause the gray areas to become more opaque, and lower values cause the gray areas to become
more transparent. Completely black or white regions of the matte remain unaffected.
Since this control affects only semitransparent areas, it will have no effect on a matte’s hard edge.
Restore Fringe
This restores the edge of the matte around the keyed subject. Often when keying, the edge of the
subject where you have hair is clipped out. Restore Fringe brings back that edge while keeping the
matte solid.
Invert Matte
When this checkbox is selected, the alpha channel created by the keyer is inverted, causing all
transparent areas to be opaque and all opaque areas to be transparent.
Solid Matte
Solid Mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold out
keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert will invert the solid matte, before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is often
used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert will invert the garbage matte, before it is combined with the source alpha.
Post-Multiply Image
Select this option to cause the keyer to multiply the color channels of the image against the alpha
channel it creates for the image. This option is usually enabled and is on by default.
Deselect this checkbox and the image can no longer be considered premultiplied for purposes of
merging it with other images. Use the Subtractive option of the Merge node instead of the
Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes” in the Fusion
Reference Manual or Chapter 96 in the DaVinci Resolve Reference Manual.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Once you have the selection, the Erode control contracts the pre-matte, eating away any small pixels of
non-green/blue screen around the edges. Then, growing the pre-matte fills in the holes until you have
a solid blue or green image.
Inputs
The Clean Plate node includes three inputs in the Node Editor.
– Input: The orange input accepts a 2D image that contains the green or blue screen.
– Garbage Matte: The white garbage matte input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes
areas of the image that fall within the matte to be excluded from the clean plate. For a clean
plate, garbage mattes should contain areas that are not part of the blue or green screen.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the
pixels where the clean plate is generated. An effects mask is applied to the tool after the
tool is processed.
Inspector
Plate Tab
The Plate tab contains the primary tools for creating a clean plate. Using this tab, you drag over the
areas in the viewer, and then use the Erode and Grow Edges sliders to create the clean plate.
Method
The Method menu selects the type of color selection you use when sampling colors in the viewer.
– Color: Color uses a difference method to separate the background color. This works well on
screen colors that are even.
– Ranges: Ranges uses a chroma range method to separate the background color. This is a better
option for shadowed screen or screens that have different colors.
Erode
The Erode slider decreases the size of the screen area. It is used to eat away at small non-screen color
pixels that may interfere with creating a smooth green- or blue-screen clean plate.
Crop
Crop trims in from the edges of the image.
Grow Edges
The Grow Edges slider expands the color of the edges to fill in holes until fully green or blue screen
is created.
Fill
The Fill checkbox fills in remaining holes with color from the surrounding screen color.
Time Mode
– Sequence: Generates a new clean plate every frame.
– Hold Frame: Holds the clean plate at a single frame.
Mask Tab
The Mask tab is used to invert the mask connected to the garbage mask input on the node. The
garbage mask can be applied to clear areas before growing edges or filling remaining holes.
Invert
Invert uses the transparent parts of the mask to clear the image.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Delta Keyer node includes five inputs in the Node Editor.
– Input: The orange input accepts a 2D image that contains the color you want to be keyed for
transparency.
– Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes
areas of the image that fall within the matte to be made transparent. The garbage matte is
applied directly to the alpha channel of the image.
– Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas
of the image that fall within the matte to be fully opaque.
– Clean Plate: Accepts the resulting image from the Clean Plate node.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input
limits the pixels where the keying occurs. An effects mask is applied to the tool after
the tool is processed.
Inspector
Key Tab
The Key tab is where most keying begins. It is used to select the screen color.
Background Color
This is the color of the blue or green screen, sometimes called the screen color. To create the key with
the Delta Keyer, use the background color Eyedropper to select the screen color from the image.
Pre-Blur
Applies a blur before generating the alpha. This can help with certain types of noise, edge
enhancements, and artifacts in the source image.
Gain
Gain increases the influence of the screen color, causing those areas to become more transparent.
Soft Range
The Soft Range extends the range of selected color and rolloff of the screen color.
Erode
Erode contracts the edge of the pre-matte, so that edge detail is not clipped.
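The Delta Keyer is a color-difference keyer. A drastically simplified sketch of the idea for a green screen (not the Delta Keyer's actual algorithm) derives transparency from how much green exceeds the other channels, with Gain scaling that influence:

```python
# Bare-bones color-difference key for a green screen: how much the
# green channel exceeds the other channels drives transparency. `gain`
# scales the screen color's influence, as described for the Gain slider
# (a simplified illustration only).
def delta_alpha(r, g, b, gain=1.0):
    screen = g - max(r, b)          # "greenness" of the pixel
    alpha = 1.0 - gain * screen
    return min(1.0, max(0.0, alpha))
```

A pure screen pixel keys fully transparent, a foreground pixel with no excess green stays opaque, and raising the gain pushes borderline pixels toward transparency.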
Matte Tab
The Matte tab refines the alpha of the key, combined with any solid and garbage masks connected to
the node. When using the matte tab, set the viewer to display the alpha channel of the Delta Keyer’s
final output.
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right.
Any value below the lower threshold setting becomes black or transparent in the matte.
Any value above the upper threshold setting becomes white or opaque in the matte. All values within
the range maintain their relative transparency values.
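The threshold remap described above can be sketched as (illustrative only):

```python
# Remap matte values through the lower/upper thresholds: values at or
# below the lower threshold become 0.0 (transparent), values at or
# above the upper become 1.0 (opaque), and the range in between keeps
# its relative transparency.
def threshold(matte, lower=0.0, upper=1.0):
    if matte <= lower:
        return 0.0
    if matte >= upper:
        return 1.0
    return (matte - lower) / (upper - lower)
```

Raising the lower threshold cleans up near-black noise in the background, while lowering the upper threshold solidifies the foreground.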
Erode/Dilate
Expands or contracts the matte.
Blur
Softens the matte.
Clean Foreground
Fills slightly transparent (light gray) areas of the matte.
Clean Background
Clips the bottom dark range of the matte.
Replace Mode
Determines how matte adjustments restore color to the image.
– None: No color replacement. Matte processing does not affect the color.
– Source: The color from the original image.
– Hard Color: A solid color.
– Soft Color: A solid color weighted by how much background color was originally removed.
Replace Color
The color used with the Hard Color and Soft Color replace modes.
Fringe Tab
The Fringe tab handles the majority of spill suppression in the Delta Keyer. Spill suppression is a form
of color correction that attempts to remove the screen color from the fringe of the matte.
Spill is the transmission of the screen color through the semitransparent areas of the alpha channel. In
the case of blue- or green-screen keying, this usually causes the color of the background to become
apparent in the edges of the foreground subject.
Spill Suppression
When this slider is set to 0, no spill suppression is applied to the image. Increasing the slider increases
the strength of the spill method.
Fringe Gamma
This control can be used to adjust the brightness of the fringe or halo that surrounds the keyed image.
Fringe Size
This expands and contracts the size of the fringe or halo surrounding the keyed image.
Fringe Shape
Fringe Shape presses the fringe toward the external edge of the image or pulls it toward the inner
edge of the fringe. Its effect is most noticeable while the Fringe Size value is large.
Tuning Tab
The Tuning tab is an advanced tab that allows you to determine the size of the shadow, midtone, and
highlight ranges. By modifying the ranges, you can select the strength of the matte and spill
suppression based on tonal values.
Simple/Smooth
The Simple button sets the range to be linear. The Smooth button sets a smooth tonal gradient for
the ranges.
Mask Tab
The Mask tab determines how the solid and garbage mattes are applied to the key.
Garbage Mask
– Invert: Normally, solid areas of a garbage mask remove the image. When inverted, the transparent
areas of the mask remove the image.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Difference Keyer node includes four inputs in the Node Editor.
– Background: The orange background input accepts a 2D image that contains just the set
without your subject.
– Foreground: The green foreground input accepts a 2D image that contains the shot with your
subject in the frame.
Inspector
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right. Adjusting them defines a range of difference values between the
images to create a matte.
A difference below the lower threshold setting becomes black or transparent in the matte.
Any difference above the upper threshold setting becomes white or opaque in the matte.
The difference values in the range in between create a grayscale matte.
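The mapping performed by the two handles can be sketched as a remap of the per-pixel difference value. This Python fragment is a sketch under one assumption: the manual only states that in-between values produce a grayscale matte, so the linear ramp between the thresholds is illustrative rather than Fusion's documented curve:

```python
def threshold_matte(diff, low, high):
    """Map a per-pixel difference value to a matte value in 0..1.

    Below `low`  -> 0.0 (black/transparent).
    Above `high` -> 1.0 (white/opaque).
    In between   -> linear grayscale ramp (an assumed interpolation).
    """
    if diff <= low:
        return 0.0
    if diff >= high:
        return 1.0
    return (diff - low) / (high - low)
```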
Filter
This control selects the filtering algorithm used when applying a blur to the matte.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Blur
This blurs the edge of the matte using the method selected in the Filter menu. A value of zero results
in a sharp, cutout-like hard edge. The higher the value, the more blur.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is profoundly important when blurring the matte, which may require samples from portions of the
image outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
– Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
– None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would usually be outside the upstream
DoD is treated as black/transparent.
Gamma
Matte Gamma raises or lowers the values of the matte in the semitransparent areas. Higher values
cause the gray areas to be more opaque, and lower values cause the gray areas to be more
transparent. Wholly black or white regions of the matte remain unaffected.
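As a rough model of this behavior, a gamma adjustment that leaves pure black and white untouched can be sketched in Python. The exponent convention here is an assumption chosen so that higher gamma values make gray areas more opaque, as the manual describes; Fusion's exact transfer curve is not documented:

```python
def matte_gamma(matte, gamma):
    """Gamma-adjust only the semitransparent matte values.

    With `out = matte ** (1/gamma)` (an assumed convention),
    gamma > 1 lifts gray values toward opaque, gamma < 1 pushes
    them toward transparent, and 0.0 and 1.0 are unchanged.
    """
    return matte ** (1.0 / gamma)
```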
Invert
Selecting this checkbox inverts the matte, causing all transparent areas to be opaque and all opaque
areas to be transparent.
Solid Matte
Solid Mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold out
keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is often
used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Post-Multiply Image
Select this option to cause the keyer to multiply the color channels of the image against the alpha
channel it creates for the image. This option is usually enabled and is on by default.
Deselect this checkbox, and the image can no longer be considered premultiplied for purposes of
merging it with other images. Use the Subtractive option of the Merge node instead of the
Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes” in the Fusion
Reference Manual or Chapter 96 in the DaVinci Resolve Reference Manual.
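The premultiply itself is simple per-pixel math. The sketch below is illustrative, not Fusion's internal code; it shows why an image processed this way composites with the Merge node's default Additive behavior, while a straight (unmultiplied) image needs the Subtractive option:

```python
def post_multiply(r, g, b, a):
    """Multiply the color channels by the keyed alpha (premultiply).

    Returns a premultiplied RGBA pixel: fully transparent pixels
    (a == 0) end up with black color channels, which is what
    additive-style merging expects.
    """
    return r * a, g * a, b * a, a
```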
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Luma Keyer node includes four inputs in the Node Editor.
– Input: The orange input accepts a 2D image that contains the luminance values you want to be
keyed for transparency.
– Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps masks. Connecting a mask to this input causes
areas of the image that fall within the matte to be made transparent. The garbage matte is
applied directly to the alpha channel of the image.
– Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps masks. Connecting a mask to this input causes areas
of the image that fall within the matte to be fully opaque.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps masks. Connecting a mask to this input limits
the pixels where the luminance key occurs. An effects mask is applied to the tool after the
tool is processed.
Controls Tab
The Controls tab in the Luma Keyer contains all the parameters for adjusting the quality of the matte.
Channel
This menu selects the color channel used for creating the matte. Select from the Red, Green, Blue,
Alpha, Hue, Luminance, Saturation, and Depth (Z-buffer) channels.
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right. Adjusting them defines a range of luminance values to create a matte.
A value below the lower threshold setting becomes black or transparent in the matte.
Any value above the upper threshold setting becomes white or opaque in the matte.
The values within the range create a grayscale matte.
Filter
This control selects the filtering algorithm used when applying a blur to the matte.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is profoundly important when blurring the matte, which may require samples from portions of the
image outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
– Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
– None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that is usually outside the upstream DoD is
treated as black/transparent.
Contract/Expand
This slider shrinks or grows the semitransparent areas of the matte. Values above 0.0 expand the
matte, while values below 0.0 contract it.
This control is usually used in conjunction with the blur to take the hard edge of a matte and reduce
fringing. Since this control affects only semitransparent areas, it has no effect on a matte’s hard edge.
Gamma
Matte Gamma raises or lowers the values of the matte in the semitransparent areas. Higher values
cause the gray areas to be more opaque, and lower values cause the gray areas to be more
transparent. Wholly black or white regions of the matte remain unaffected.
Invert
Selecting this checkbox inverts the matte, causing all transparent areas to be opaque and all opaque
areas to be transparent.
Solid Matte
Solid mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold out
keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is often
used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Matte Control node includes five inputs in the Node Editor.
– Background: The orange background input accepts a 2D image that receives the foreground
image alpha channel (or some other channel you want to copy to the background).
– Foreground: The green foreground input accepts a 2D image that contains an alpha channel
(or some other channel) you want to be applied to the background image.
– Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps masks. Connecting a mask to this input
causes areas of the foreground/background combination that fall within the matte to be made
transparent.
– Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps masks. Connecting a mask to this input causes areas
of the foreground/background combination that fall within the matte to be fully opaque.
– Effect Mask: The optional blue input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps masks. Connecting a mask to this input limits
the pixels where the matte control occurs. An effects mask is applied to the tool after the
tool is processed.
A Matte Control embedding an alpha from the foreground input into the background input
Inspector
Matte Tab
The Matte tab combines and modifies alpha or color channels from an image in the foreground input
with the background image.
Combine
Use this menu to select which operation is applied. The default is set to None for no operation.
– None: This causes the foreground image to be ignored.
– Combine Red: This combines the foreground red channel with the background alpha channel.
– Combine Green: This combines the foreground green channel with the background alpha channel.
Combine Operation
Use this menu to select the method used to combine the foreground channel with the background.
– Copy: This copies the foreground source over the background alpha, overwriting any existing
alpha in the background.
– Add: This adds the foreground source to the background alpha.
– Subtract: This subtracts the foreground source from the background alpha.
– Inverse Subtract: This subtracts the background alpha from the foreground source.
– Maximum: This compares the foreground source and the background alpha and takes the value
from the pixel with the highest value.
– Minimum: This compares the foreground source and the background alpha and takes the value
from the pixel with the lowest value.
– And: This performs a logical AND on the two values.
– Or: This performs a logical OR on the values.
– Merge Over: This merges the foreground source channel over the background alpha channel.
– Merge Under: This merges the foreground source channel under the background alpha channel.
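Most of these modes reduce to simple per-channel math. The Python sketch below is an assumed interpretation of the descriptions above, not Fusion's implementation: the Merge Over/Under formulas use the standard "over" compositing math, and the logical And/Or modes are omitted since their bitwise behavior on float channels is not specified here:

```python
def combine_alpha(fg, bg, op):
    """Combine a foreground source channel value with the background
    alpha value (both assumed to be floats in 0..1)."""
    ops = {
        "Copy": lambda f, b: f,                       # overwrite bg alpha
        "Add": lambda f, b: min(f + b, 1.0),          # clipped sum
        "Subtract": lambda f, b: max(b - f, 0.0),     # fg removed from bg
        "Inverse Subtract": lambda f, b: max(f - b, 0.0),
        "Maximum": max,
        "Minimum": min,
        "Merge Over": lambda f, b: f + b * (1.0 - f),  # assumed "over" math
        "Merge Under": lambda f, b: b + f * (1.0 - b),
    }
    return ops[op](fg, bg)
```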
Filter
Selects the Filter that is used when blurring the matte.
– Box Blur: This option applies a Box Blur effect to the whole image. This method is faster than the
Gaussian blur but produces a lower-quality result.
– Bartlett: Bartlett applies a more subtle, anti-aliased blur filter.
– Multi-Box: Multi-Box uses a box filter layered in multiple passes to approximate a Gaussian shape.
With a moderate number of passes (e.g., four), a high-quality blur can be obtained, often faster
than the Gaussian filter and without any ringing.
– Gaussian: Gaussian applies a smooth, symmetrical blur filter, using a sophisticated constant-time
Gaussian approximation algorithm. In extreme cases, this algorithm may exhibit ringing; see below
for a discussion of this. This mode is the default filter method.
Blur
This blurs the edge of the matte using a standard, constant speed Gaussian blur. A value of zero
results in a sharp, cutout-like hard edge. The higher the value, the more blur is applied to the matte.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is profoundly important when blurring the matte, which may require samples from portions of the
image outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
Contract/Expand
This shrinks or grows the matte similar to an Erode Dilate node. Contracting the matte reveals more of
the foreground input, while expanding the matte reveals more of the background input. Values above
0.0 expand the matte, and values below 0.0 contract it.
Gamma
This raises or lowers the values of the matte in the semitransparent areas. Higher values cause the
gray areas to become more opaque, and lower values cause the gray areas to become more
transparent. Completely black or white regions of the matte remain unaffected.
Threshold
Any value below the lower threshold becomes black or transparent in the matte. Any value above the
upper threshold becomes white or opaque in the matte. All values within the range maintain their
relative transparency values.
Restore Fringe
This restores the edge of the matte around the keyed subject. Often when keying, the edge of the
subject where you have hair is clipped out. Restore Fringe brings back that edge while keeping the
matte solid.
Invert Matte
When this checkbox is selected, the alpha channel of the image is inverted, causing all transparent
areas to be opaque and all opaque areas to be transparent.
Solid Matte
Solid mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold out
areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is often
used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Post-Multiply Image
Selecting this option multiplies the color channels of the image against the alpha channel it creates for
the image. This option is usually enabled and is on by default.
Deselect this checkbox and the image can no longer be considered premultiplied for purposes of
merging it with other images. Use the Subtractive option of the Merge node instead of the
Additive option.
Spill Tab
The Spill tab handles spill suppression in the Matte Control. Spill suppression is a form of color
correction that attempts to remove the screen color from the fringe of the matte.
Spill is the transmission of the screen color through the semitransparent areas of the alpha channel. In
the case of blue- or green-screen keying, this usually causes the color of the background to become
apparent in the edges of the foreground subject.
Spill Color
This menu selects the color used as the base for all spill suppression techniques.
Spill Suppression
When this slider is set to 0, no spill suppression is applied to the image. Increasing the slider increases
the strength of the spill method.
Spill Method
This selects the strength of the algorithm used to apply spill suppression to the image.
– None: None is selected when no spill suppression is required.
– Rare: This removes very little of the spill color and is the lightest of all methods.
– Medium: This works best for green screens.
– Well Done: This works best for blue screens.
– Burnt: This works best for blue screens. Use this mode only for very troublesome shots.
Fringe Gamma
This control can be used to adjust the brightness of the fringe or halo that surrounds the keyed image.
Fringe Size
This expands and contracts the size of the fringe or halo surrounding the keyed image.
Fringe Shape
Fringe Shape presses the fringe toward the external edge of the image or pulls it toward the inner
edge of the fringe. Its effect is most noticeable while the Fringe Size value is large.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Primatte [Pri]
Depending on the type of blue- or green-screen content, you may find that the Delta Keyer or the
Primatte keyer handles the specific keying task better. There is no one-solution-fits-all when it comes
to keying, and in some cases, the combination of the two keyers may prove to be the best solution.
NOTE: Primatte is distributed and licensed by IMAGICA Corp. of America, Los Angeles, CA,
USA. Primatte was developed by and is a trademark of IMAGICA Corp., Tokyo, Japan.
Inputs
The Primatte node includes six inputs in the Node Editor. Unlike every other tool in Fusion, the primary
orange input is labeled as the Foreground input, since it accepts the green-screen or blue-screen
image. The background input on the Primatte node is the green input; this is an optional input that
allows Primatte to create the final merged composite.
NOTE: Connecting the background input without connecting the replacement image input
uses the background image as the replacement image for spill suppression.
Auto Compute
The Auto Compute button is likely the first button pressed when starting to key your footage. Primatte
automatically analyzes the original foreground image, determines the backing color, and sets it as the
central backing color. Then, using that information, another analysis determines the foreground areas.
A Clean FG Noise operation is performed using the newly determined foreground areas, and Primatte
renders the composite.
NOTE: The Auto Compute button may make the next three buttons—Select Background
Color, Clean Background Noise, and Clean Foreground Noise—unnecessary and make your
keying operation much more straightforward. Clicking Auto Compute automatically senses
the backing screen color, eliminates it, and even gets rid of some foreground and background
noise. If you get good results, then jump ahead to the Spill Removal tools. If you don’t get
satisfactory results, continue from this point using the three buttons described below.
Spill Sponge
The Spill Sponge is the quickest method for removing color spill on your subject. Click the Spill
Sponge button and scrub the mouse pointer over a screen color pixel, and the screen color
disappears from the selected color region and is replaced by a complementary color, a selected color,
or a color from a replacement image. These options are set in the Replace tab. Additionally, use the
tools under the Fine Tuning tab or use the Spill(+) and Spill(-) features to adjust the spill.
Restore Detail
Clicking Restore Detail and scrubbing over background regions in the viewer turns completely
transparent areas translucent. This operation is useful for restoring lost hair details, thin wisps of
smoke, and the like.
Spill(+)
Clicking the Spill(+) button returns the color spill to the sampled pixel color (and all colors like it) in the
amount of one Primatte increment. This tool can be used to move the sampled color more in the
direction of the color in the original foreground image. It can be used to nullify a Spill(-) step.
Spill(-)
Clicking the Spill(-) button removes color spill from the sampled pixel color (and all colors like it) in the amount of
one Primatte increment. If spill color remains, another click using this operational mode tool removes
more of the color spill. Continue using this tool until all color spill has been removed from the sampled
color region.
Matte(+)
Clicking the Matte(+) button makes the matte more opaque for the sampled pixel color (and all colors
like it) in the amount of one Primatte increment. If the matte is still too translucent or thin, another click
using this operational mode tool makes the sampled color region even more opaque. This can be
used to thicken smoke or make a shadow darker to match shadows in the background imagery. It can
only make these adjustments to the density of the color region on the original foreground image. It can
be used to nullify a Matte(-) step.
Matte(-)
Clicking the Matte(-) button makes the matte more translucent for the sampled pixel color (and all
colors like it) in the amount of one Primatte increment. If the matte is still too opaque, another click
using this operational mode tool makes the sampled color region even more translucent. This can be
used to thin out smoke or make a shadow thinner to match shadows in the background imagery.
Detail(+)
When this button is selected, the foreground detail becomes less visible for the sampled pixel color
(and all colors like it) in the amount of one Primatte increment. If there is still too much detail, another
click using this operational mode tool makes more of it disappear. This can be used to remove smoke
or wisps of hair from the composite. Sample where detail is visible, and it disappears. This is for
moving color regions into the 100% background region. It can be used to nullify a Detail(-) step.
Algorithms
There are three keying algorithms available in the Primatte keyer:
– Primatte: The Primatte algorithm mode delivers the best results and supports both the Solid Color
and the Complement Color spill suppression methods. This algorithm uses three multifaceted
polyhedrons (as described later in this section) to separate the 3D RGB colorspace. It is also the
default algorithm mode and, because it is computationally intensive, it may take the longest to
render.
– Primatte RT: Primatte RT is the simplest algorithm and therefore the fastest. It uses only a single
planar surface to separate the 3D RGB colorspace (as described later in this section) and, as
a result, does not separate the foreground from the backing screen as carefully as the above
Primatte algorithm. Another disadvantage of the Primatte RT algorithm is that it does not work well
with less saturated backing screen colors, and it does not support the Complement Color spill
suppression method.
– Primatte RT+: Primatte RT+ is in between the above two options. It uses a six planar surface color
separation algorithm (as described later in this section) and delivers results in between the other
two options in both quality and performance. Another disadvantage of the Primatte RT+ algorithm
is that it does not work well with less saturated backing screen colors, and it does not support the
Complement Color spill suppression method.
Hybrid Rendering
After sampling the backing screen color and producing acceptable edges around the foreground
object, you sometimes find a transparent area within the foreground subject. This can occur when the
foreground subject contains a color that is close to the backing screen color. Removing this
transparency with the Clean FG Noise mode can cause the edge of the foreground subject to pick up
a fringe that is close to the backing screen color. Removing the fringe is very difficult without
sacrificing quality somewhere else on the image. The Hybrid Render mode internally creates two
keying operations: Body and Edge. The optimized Edge operation gets the best edge around the
foreground subject without any fringe effect. The Body operation deals with transparency within the
foreground subject. The resultant matte is created by combining these two mattes, and then blurring
and eroding the foreground subject in the Body matte and combining it with the edge matte.
To use Hybrid Rendering, start by keying the main foreground area using the Select Background Color
mode (or any of the other Primatte backing screen detection methods). Activate the Hybrid Rendering
checkbox. Lastly, select the Clean FG Noise button and scrub over the transparent area. The Hybrid
Render mode performs the “Body/Edge” operation. The result is a final composite with perfect edges
around the foreground subject with a solid foreground subject.
Hybrid Blur
Blurs the Body matte that has been automatically generated when Hybrid Rendering is activated.
Hybrid Erode
This slider dilates or erodes the Hybrid matte. You can view the results by selecting Hybrid matte in
the View Mode menu.
Lighting Threshold
Should Adjust Lighting fail to produce a smoother backing screen, adjust the Lighting Threshold slider
while viewing the Lighting Background setting in the View Mode menu. This displays the optimized
artificial backing screen that the Adjust Lighting mode creates.
Crop
This button reveals the Crop sliders to create a rectangular garbage matte with the Primatte node. As
opposed to Fusion’s Crop tool, this does not change the actual image size.
Reset
Resets all the Primatte key control data back to its default blue- or green-screen settings.
Soft Reset
Resets just the Primatte parameters used since the Select Background Color operation was last
completed.
Selected Color
This shows the color selected (or registered) by the scrubbing in the viewer while the Fine Tuning tab
is selected.
Spill
The Spill slider can be used to remove spill from the selected color region. The more to the right the
slider moves, the more spill is removed. The more to the left the slider moves, the closer the color
component of the selected region is to the color in the original foreground image. If moving the slider
to the right does not remove the spill, resample the color region and move the slider again.
These slider operations are additive. The result achieved by moving the slider to the right can also be
achieved by clicking on the color region using the Spill(-) operational mode.
Transparency
The Transparency slider makes the matte more translucent in the selected color region. Moving this
slider to the right makes the selected color region more transparent. Moving the slider to the left
makes the matte more opaque. If moving the slider to the right does not make the color region
translucent enough, resample the color region and again move the slider to the right. These slider
operations are additive. The result achieved by moving the slider to the right can also be achieved by
clicking on the color region using the Matte(-) operational mode.
Detail
The Detail slider can be used to restore lost detail. After selecting a color region, moving this slider to
the left makes the selected color region more visible. Moving the slider to the right makes the color
region less visible. If moving the slider to the left does not make the color region visible enough,
resample the color region and again move the slider to the left.
These slider operations are additive. The result achieved by moving the slider to the left can also be
achieved by clicking on the color region using the Detail(-) operational mode.
Replace Tab
The Replace tab allows you to choose between the three methods of color spill replacement as
covered in detail in the Spill Sponge section above. There are three options for the replacement color
when removing the spill. These options are selected from the Replace mode menu.
Replace Mode
– Complement: Replaces the spill color with the complement of the screen color. This mode
maintains fine foreground detail and delivers the best-quality results. If foreground spill is not a
significant problem, this mode is the one that should be used. However, if the spill intensity on
the foreground image is rather significant, this mode may often introduce serious noise in the
resultant composite.
– Image: Replaces the spill color with colors from a defocused version of the background image or
the Replace image, if one is connected to the Replace input (magenta) on the node. This mode
results in a good color tone on the foreground subject even with a high-contrast background.
On the negative side, the Image mode occasionally loses the fine edge detail of the foreground subject.
Degrain Tab
The Degrain tab is used when a foreground image is highly compromised by film grain. As a result of
the grain, when backing screen noise is completely removed, the edges of the foreground object
often become harsh and jagged, leading to a poor key.
Grain Size
The Grain Size selector provides a range of grain removal from Small to Large. If the foreground image
has a large amount of film grain-induced pixel noise, you may lose a good edge to the foreground
object when trying to clean all the grain noise with the Clean Background Noise Operation Mode.
These tools clean up the grain noise without affecting the quality of the key.
– None: No degraining is performed.
– Small: The average color of a small region of the area around the sampled pixel. This should be
used when the grain is very dense.
– Medium: The average color of a medium-sized region of the area around the sampled pixel. This
should be used when the grain is less dense.
– Large: The average color of a larger region of the area around the sampled pixel. This should be
used when the grain is very loose.
Grain Tolerance
Adjusting this slider increases the effect of the Clean Background Noise tool without changing the
edge of the foreground object.
Matte Tab
The Matte tab refines the alpha of the key, combined with any solid and garbage masks connected to
the node. When using the Matte tab, set the viewer to display the alpha channel of Primatte’s
final output.
Filter
This control selects the filtering algorithm used when applying blur to the matte.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal amounts of
blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-Box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Blur
Matte Blur blurs the edge of the matte based on the Filter menu setting. A value of zero results in a
sharp, cutout-like hard edge. The higher the value, the more blur applied to the matte.
Blur Inward
Activating the Blur Inward checkbox generates the blur toward the center of the foreground subject.
Conventional blurring or defocus affects the matte edges in both directions (inward and outward) and
sometimes introduces a halo artifact around the edge in the composite view. Blur Inward functions
only in the inward direction of the foreground subject (toward the center of the white area). The final
result removes small and dark noise in the screen area without picking them up again in the Clean
Background Noise mode. It can sometimes result in softer, cleaner edges on the foreground objects.
Contract/Expand
This slider shrinks or grows the semitransparent areas of the matte. Values above 0.0 expand the
matte, while values below 0.0 contract it.
Gamma
Matte Gamma raises or lowers the values of the matte in the semitransparent areas. Higher values
cause the gray areas to become more opaque, and lower values cause the gray areas to become
more transparent. Completely black or white regions of the matte remain unaffected.
Since this control affects only semitransparent areas, it will have no effect on a matte’s hard edge.
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right.
Any value below the lower threshold setting becomes black or transparent in the matte.
Any value above the upper threshold setting becomes white or opaque in the matte. All values within
the range maintain their relative transparency values.
This control is often used to reject salt and pepper noise in the matte.
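As an illustration of this remapping (a hypothetical sketch, not Fusion's internal code), the threshold behavior for a single matte value can be written as:

```python
def matte_threshold(value, low, high):
    """Remap a matte value: below `low` becomes transparent (0.0),
    above `high` becomes opaque (1.0), and values in between keep
    their relative transparency by rescaling into the 0-1 range."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)
```

For example, with a lower threshold of 0.1, stray near-black noise pixels at 0.05 are forced fully transparent, while mid-gray semitransparent values are preserved proportionally.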
Restore Fringe
This restores the edge of the matte around the keyed subject. Often when keying, the edge of the
subject where you have hair is clipped out. Restore Fringe brings back that edge while keeping the
matte solid.
Invert Matte
When this checkbox is selected, the alpha channel created by the keyer is inverted, causing all
transparent areas to be opaque and all opaque areas to be transparent.
Solid Matte
Solid mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold out
keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is often
used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Post-Multiply Image
Select this option to cause the keyer to multiply the color channels of the image against the alpha
channel it creates for the image. This option is usually enabled and is on by default.
If you deselect this checkbox, the image can no longer be considered premultiplied for purposes
of merging it with other images. In that case, use the Subtractive option of the Merge node
instead of the Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes” in the Fusion
Reference Manual or Chapter 96 in the DaVinci Resolve Reference Manual.
If the foreground image has a shadow in it that you want to keep in the composite, do not select
any of the dark screen pixels in the shadow. This keeps the shadow with the rest of the
foreground image.
You do not need to remove every single white pixel to get good results. Most pixels displayed as a
dark color close to black in a key image are treated as transparent, effectively allowing the
background to show through in those areas. Consequently, there is no need to eliminate all
noise in the screen portions of the image. In particular, meticulously removing noise around the
foreground subject often makes a smooth composite image difficult to generate.
TIP: When clearing noise from around loose, flying hair or any background/foreground
transitional area, be careful not to select any of the areas near the edge of the hair. Leave
a little noise around the hair as this can be cleaned up later using the Fine Tuning tools.
Removing Spill
The first three sections created a clean matte. At this point, the foreground can be composited onto
any background image. However, if there is color spill on the foreground subject, a final operation is
necessary to remove that screen spill for a more natural-looking composite.
1 From the View Mode menu, select Composite.
2 Above the viewer, click the Alpha/RGB toggle button to see the RGB image.
NOTE: When using the slider in the Fine Tuning tab to remove spill, the spill color is
replaced based on the setting of the Spill Replacement options.
You can use the other two sliders in the same way for different key adjustments. The Detail slider
controls the matte softness for the color that is closest to the background color. For example, you
can recover lost rarefied smoke in the foreground by selecting the Fine Tuning mode, clicking on
the area of the image where the smoke starts to disappear and moving the Detail slider to the left.
The Transparency slider controls the matte softness for the color that is closest to the foreground
color. For example, if you have thick and opaque smoke in the foreground, you can make it
semitransparent by moving the Transparency slider to the right after selecting the pixels in the
Fine Tuning mode.
Inputs
The Ultra Keyer node includes four inputs in the Node Editor.
– Input: The orange input accepts a 2D image that contains the color you want to be keyed for
transparency.
– Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes
areas of the image that fall within the matte to be made transparent. The garbage matte is
applied directly to the alpha channel of the image.
– Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas
of the image that fall within the matte to be fully opaque.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the keying to
only those pixels within the mask.
Inspector
Background Color
The Background Color is used to select the color of the blue or green screen of the images. It is good
practice to select the screen color close to the subject to be separated from the screen background.
Background Correction
Depending on the background color selected above, the keyer iteratively merges the pre-keyed
image over either a blue or green background before processing it further.
In some instances, this leads to better, more subtle edges.
Matte Separation
Matte Separation performs a pre-process on the image to help separate the foreground from the
background before color selection. Generally, increase this control while viewing the alpha to eliminate
the bulk of the background, but stop just before it starts cutting holes in the subject or eroding fine
detail on the edges of the matte.
Pre-Matte Range
These R, G, B, and Luminance range controls update automatically to represent the current color
selection. Colors are chosen by selecting the Ultra Keyer node’s tile in the node tree and dragging
the Eyedropper into the viewer to sample the colors used to create the matte. These range
controls can be used to tweak the selection slightly, although selecting colors in the viewer is
usually all that is required.
Image Tab
The Image tab handles the majority of spill suppression in the Ultra Keyer. Spill suppression is a form of
color correction that attempts to remove the screen color from the fringe of the matte.
Spill is the transmission of the screen color through the semitransparent areas of the alpha channel. In
the case of blue- or green-screen keying, this usually causes the color of the background to become
apparent in the edges of the foreground subject.
Spill Suppression
When this slider is set to 0, no spill suppression is applied to the image. Increasing the slider increases
the strength of the spill method.
Spill Method
This selects the strength of the algorithm used to apply spill suppression to the image.
– None: None is selected when no spill suppression is required.
– Rare: This removes very little of the spill color and is the lightest of all methods.
– Medium: This works best for green screens.
– Well Done: This works best for blue screens.
– Burnt: This works best for blue screens. Use this mode only for very troublesome shots.
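One common family of spill-suppression algorithms clamps the screen-color channel against the other two channels; the sketch below shows the idea for a green screen. This is a generic illustration, not the Ultra Keyer's exact method, and the function name is hypothetical:

```python
def suppress_green_spill(r, g, b, strength=1.0):
    """Limit the green channel to the maximum of red and blue,
    blended by `strength` (0.0 = no suppression, 1.0 = full clamp)."""
    limit = max(r, b)
    if g > limit:
        g = g - (g - limit) * strength
    return r, g, b
```

Stronger methods (analogous to Well Done or Burnt above) would use a tighter limit, such as the minimum of the other channels, at the cost of desaturating legitimate greens in the foreground.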
Fringe Gamma
This control can be used to adjust the brightness of the fringe or halo that surrounds the keyed image.
Fringe Size
This expands and contracts the size of the fringe or halo surrounding the keyed image.
Fringe Shape
Fringe Shape presses the fringe toward the external edge of the image or pulls it toward the inner
edge of the fringe. Its effect is most noticeable while the Fringe Size value is large.
Matte Tab
The Matte tab refines the alpha of the key, combined with any solid and garbage masks connected to
the node. When using the Matte tab, set the viewer to display the alpha channel of the Ultra Keyer’s
final output.
Filter
This control selects the filtering algorithm used when applying blur to the matte.
– Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
– Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
– Multi-Box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
– Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Blur
Matte Blur blurs the edge of the matte based on the Filter menu setting. A value of zero results in a
sharp, cutout-like hard edge. The higher the value, the more blur applied to the matte.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is profoundly important when blurring the matte, which may require samples from portions of the
image outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If
the upstream DoD is smaller than the frame, the remaining areas in the frame are treated
as black/transparent.
Contract/Expand
This slider shrinks or grows the semitransparent areas of the matte. Values above 0.0 expand the
matte, while values below 0.0 contract it.
This control is usually used in conjunction with the Matte Blur to take the hard edge of a matte and
reduce fringing. Since this control affects only semitransparent areas, it has no effect on a matte’s
hard edge.
Gamma
Matte Gamma raises or lowers the values of the matte in the semitransparent areas. Higher values
cause the gray areas to become more opaque, and lower values cause the gray areas to become
more transparent. Completely black or white regions of the matte remain unaffected.
Since this control affects only semitransparent areas, it will have no effect on a matte’s hard edge.
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right.
Any value below the lower threshold setting becomes black or transparent in the matte.
Any value above the upper threshold setting becomes white or opaque in the matte. All values within
the range maintain their relative transparency values.
This control is often used to reject salt and pepper noise in the matte.
Restore Fringe
This restores the edge of the matte around the keyed subject. Often when keying, the edge of the
subject where you have hair is clipped out. Restore Fringe brings back that edge while keeping the
matte solid.
Invert Matte
When this checkbox is selected, the alpha channel created by the keyer is inverted, causing all
transparent areas to be opaque and all opaque areas to be transparent.
Solid Matte
Solid mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold out
keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is often
used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Subtract Background
This option color corrects the edges when the screen color is removed and anti-aliased to a black
background. By enabling this option, the edges potentially become darker. Disabling this option allows
you to pass on the color of the screen to use in other processes down the line.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other matte nodes. These common controls are
described in detail in the following “The Common Controls” section.
Inspector
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Usually, this causes the tool to skip processing entirely, copying the input straight to the output.
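The behavior can be modeled as a linear interpolation between the input and the processed output (a sketch; not the tool's actual implementation):

```python
def blend(original, processed, blend_value):
    """Linearly interpolate between the unprocessed input and the
    tool's processed output. blend_value=0.0 returns the original,
    blend_value=1.0 returns the fully processed result."""
    return original + (processed - original) * blend_value
```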
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels” in the Fusion Reference Manual or Chapter 79 in the DaVinci Resolve
Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-
accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Metadata Nodes
This chapter details the Metadata nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Copy Metadata [META] 1156
Set Metadata [SMETA] 1157
Set Timecode [TCMETA] 1158
The Common Controls 1160
Inputs
The two inputs on the Copy Metadata node are used to connect two 2D images.
– Background Input: The orange background input is used for the primary 2D image that is
output from the node.
– Foreground Input: The green foreground input is used for the secondary 2D image that
contains metadata you want to merge or overwrite onto the background image.
A Copy Metadata node copies metadata from the foreground and embeds it into the background clip.
Inspector
Controls Tab
The Controls tab configures how metadata coming from the foreground input image gets added to the
background input image.
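Conceptually, the copy behaves like a dictionary merge in which foreground fields overwrite background fields of the same name (a simplified sketch, not Fusion's internal representation):

```python
def copy_metadata(background_meta, foreground_meta):
    """Return the background metadata with the foreground's fields
    merged in; foreground values win on key collisions."""
    merged = dict(background_meta)
    merged.update(foreground_meta)
    return merged
```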
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Metadata nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The single input on the Set Metadata node is used to connect a 2D image that gets metadata added.
– Background Input: The orange background input is used for the primary 2D image that is
output from the node with the new metadata.
A Set Metadata node creates new metadata and embeds it into the background clip.
Controls Tab
The Controls tab is where you set up the name of the metadata field and the value or information
regarding the metadata.
Field Name
The name of the metadata value. Do not use spaces.
Field Value
The value assigned to the name above.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Metadata nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The single input on the Set Timecode node is used to connect a 2D image that gets timecode added.
– Background Input: The orange background input is used for the primary 2D image that is
output from the node with the new timecode.
Inspector
Controls Tab
The Controls tab sets the clip’s starting timecode metadata based on FPS, hours, minutes, seconds,
and frames.
FPS
You can choose from a variety of settings for frames per second.
Since this is a Fuse, you can easily adapt the settings to your needs by editing the appropriate piece of
code for the buttons:
MBTNC_StretchToFit = true,
{ MBTNC_AddButton = "24" },
{ MBTNC_AddButton = "25" },
{ MBTNC_AddButton = "30" },
{ MBTNC_AddButton = "48" },
{ MBTNC_AddButton = "50" },
{ MBTNC_AddButton = "60" },
})
Hours/Minutes/Seconds/Frames Sliders
Define an offset from the starting frame of the current comp.
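Converting the FPS setting plus the H/M/S/F offset into a frame count and a non-drop-frame timecode string can be sketched as follows (an illustration; Fusion's internal handling may differ):

```python
def to_timecode(hours, minutes, seconds, frames, fps):
    """Convert an H/M/S/F offset into a total frame count and a
    formatted non-drop-frame timecode string."""
    total = ((hours * 60 + minutes) * 60 + seconds) * fps + frames
    f = total % fps
    s = (total // fps) % 60
    m = (total // (fps * 60)) % 60
    h = total // (fps * 3600)
    return total, "%02d:%02d:%02d:%02d" % (h, m, s, f)
```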
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Metadata nodes. These common controls
are described in detail in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Metadata category. The controls
are consistent and work the same way for each tool.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Miscellaneous
Nodes
This chapter details miscellaneous nodes within Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Auto Domain [ADoD] 1163
Change Depth [CD] 1165
Custom Tool [CT] 1166
Fields [Flds] 1176
Frame Average [Avg] 1178
Keyframe Stretcher [KfS] 1180
Run Command [Run] 1182
Set Domain [DoD] 1185
Time Speed [TSpd] 1187
Time Stretcher [TSt] 1190
Wireless Link [Wire] 1193
The Common Controls 1194
NOTE: The Domain of Definition is a bounding box that encompasses pixels that
have a nonzero value. The DoD is used to limit image-processing calculations and
speeds up rendering.
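Deriving such a bounding box from pixel data can be sketched as follows (a conceptual illustration only, not Fusion's implementation):

```python
def domain_of_definition(pixels):
    """Find the smallest (left, bottom, right, top) box containing
    every nonzero pixel. `pixels` is a row-major list of rows,
    with row 0 treated as the bottom of the image."""
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, value in enumerate(row):
            if value != 0:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # fully transparent frame: empty DoD
    return (min(xs), min(ys), max(xs), max(ys))
```

Downstream nodes can then restrict their per-pixel work to this box instead of the full frame.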
Inputs
The single image input on the Auto Domain node is used to connect a 2D image; an effect mask
can be used to limit the analyzed area.
– Input: The orange input is used for the primary 2D image whose Domain of Definition
is calculated.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the analysis
to only those pixels within the mask.
Inspector
Controls Tab
In most cases, the Auto Domain node automatically calculates the DoD bounding box; however,
the rectangular shape can be modified using the Controls tab in the Inspector.
Left
Defines the left border of the search area of the ADoD. Higher values on this slider move the left
border toward the right, excluding more data from the left margin.
1 represents the right border of the image; 0 represents the left border. The slider defaults
to 0 (left border).
Bottom
Defines the bottom border of the search area of the ADoD. Higher values on this slider move the
bottom border toward the top, excluding more data from the bottom margin.
1 represents the top border of the image; 0 represents the bottom border. The slider defaults
to 0 (bottom border).
Right
Defines the right border of the search area of the ADoD. Higher values on this slider move the right
border toward the left, excluding more data from the right margin.
1 represents the right border of the image; 0 represents the left border. The slider defaults
to 1 (right border).
Top
Defines the top border of the search area of the ADoD. Higher values on this slider move the top
border toward the bottom, excluding more data from the top margin.
1 represents the top border of the image; 0 represents the bottom border. The slider defaults
to 1 (top border).
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The single input on the Change Depth node is used to connect a 2D image; an effect mask can be
used to limit the converted area.
– Input: The orange input is used for the primary 2D image to be converted.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the depth
change to only those pixels within the mask.
A Change Depth node is often placed after color correction is done on a floating-point image.
Inspector
Depth
The Keep setting doesn’t alter the image; it retains the input bit depth. The other options convert
the image to the respective bit depth.
Dither
When down converting from a higher bit depth, it can be useful to add Error Diffusion or Additive
Noise to camouflage artifacts that result from problematic (high-contrast) areas.
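The additive-noise idea can be sketched as adding up to half a quantization step of random noise before rounding (a generic illustration, not Fusion's exact dither algorithm):

```python
import random

def quantize_with_noise(value, levels, amplitude=1.0, rng=None):
    """Quantize a 0.0-1.0 float to `levels` steps, adding up to half a
    quantization step of random noise first to break up banding."""
    rng = rng or random.Random(0)
    step = 1.0 / (levels - 1)
    noisy = value + (rng.random() - 0.5) * step * amplitude
    q = round(noisy / step) * step
    return min(max(q, 0.0), 1.0)
```

Averaged over an area, the noisy quantized values approximate the original smooth gradient, which is what hides the banding in high-contrast regions.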
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Custom Tool node has three image inputs, a matte input, and an effect mask input.
– Input: The orange, green, and magenta inputs combine 2D images to make your composite.
When entering them into the Custom Tool fields, they are referred to as c1, c2, and c3 (c
stands for all three R, G, B channels).
– Matte Input: The white input is for a matte created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a matte to this input allows a matte to be
combined into any equation. When entering the matte into the Custom Tool fields, it is referred
to as m1.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the Custom
Tool effect to only those pixels within the mask.
A Custom Tool node is used to build your own effects using custom expressions.
Inspector
Controls Tab
Point in 1-4, X and Y
These four controls are 2D X and Y center controls that are available to expressions entered in the
Setup, Intermediate, and Channels tabs as variables p1x, p1y, ...., p4x, p4y. They are normal positional
controls and can be animated or connected to modifiers as any other node might.
Number in 1-8
The values of these controls are available to expressions entered in the Setup, Intermediate, and
Channels tabs as variables n1, n2, n3, ..., n8. They are normal slider controls and can be animated or
connected to modifiers exactly as any other node might.
Setup 1-4
Up to four separate expressions can be calculated in the Setup tab of the Custom Tool node. The
Setup expressions are evaluated once per frame before any other calculations are performed. The
results are then made available to the other expressions in the Custom Tool node as variables s1, s2,
s3, and s4.
NOTE: Because these expressions are evaluated once per frame only and not for each pixel,
it makes no sense to use per-pixel variables like X and Y or channel variables like r1, g1, b1.
Allowable values include constants, variables such as n1..n8, time, W and H, and functions like
sin() or getr1d().
Intermediate 1-4
An additional four expressions can be calculated in the Inter tab. The Inter expressions are evaluated
once per pixel after the Setup expressions are evaluated but before the Channel expressions are
evaluated. Per-pixel channel variables like r1, g1, b1, and a1 are allowable. Results are available as
variables i1, i2, i3, and i4.
Number Controls
There are eight sets of Number controls, corresponding to the eight Number In sliders in the Controls
tab. Uncheck the Show Number checkbox to hide the corresponding Number In slider, or edit the
Name for Number text field to change its name.
Point Controls
There are four sets of Point controls, corresponding to the four Point In controls in the Controls tab.
Uncheck the Show Point checkbox to hide the corresponding Point In control and its crosshair in the
viewer. Similarly, edit the Name for Point text field to change the control’s name.
Value Variables
n1..n8 Numeric Inputs
NOTE: Use w and h and ax and ay without a following number to get the dimensions and
aspect of the primary image.
NOTE: Use c1, c2, c3 to refer to the value of a pixel in the current channel. This makes
copying and pasting expressions easier. For example, if c1/2 is typed as the red expression,
the result would be half the value of the red pixel from image 1, but if the expression is copied
to the blue channel, now it would have the value of the pixel from the blue channel.
NOTE: There are a variety of methods used to refer to pixels from locations other than the
current one in an image.
In the above description, [ch] is a letter representing the channel to access. The [#] is a number
representing the input image. So to get the red component of the current pixel (equivalent to r), you
would use getr1b(x,y). To get the alpha component of the pixel at the center of image 2, you would use
geta2b(0.5, 0.5).
– getr1b(x,y) Output the red value of the pixel at position x, y, if there were a valid pixel present. It
would output 0.0 if the position were beyond the boundaries of the image (all channels).
– getr1d(x,y) Output the red value of the pixel at position x, y. If the position specified were outside
of the boundaries of the image, the result would be from the outer edge of the image (RGBA only).
– getr1w(x,y) Output the red value of the pixel at position x, y. If the position specified were outside
of the boundaries of the image, the x and y coordinates would wrap around to the other side of the
image and continue from there (RGBA only).
To access other channel values with these functions, substitute the r in the above examples with the
correct channel variable (r, g, b, a, and, for the getr1b() functions only, z), as shown above.
Substitute the 1 with either 2 or 3 in the above examples to access the images from the other
image inputs.
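The three out-of-bounds behaviors (b = black, d = duplicate edge, w = wrap) can be modeled in a short sketch using integer pixel coordinates (an illustration; Fusion itself uses normalized 0.0-1.0 coordinates):

```python
def sample(image, x, y, mode):
    """Sample a 2D list of pixel values at integer coordinates.
    mode 'b': out-of-bounds returns 0.0 (black)
    mode 'd': out-of-bounds duplicates the nearest edge pixel
    mode 'w': out-of-bounds wraps around to the opposite side"""
    h, w = len(image), len(image[0])
    if mode == "b":
        if not (0 <= x < w and 0 <= y < h):
            return 0.0
    elif mode == "d":
        x = min(max(x, 0), w - 1)
        y = min(max(y, 0), h - 1)
    elif mode == "w":
        x, y = x % w, y % h
    return image[y][x]
```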
Mathematical Expressions
pi The value of pi
e The value of e
Mathematical Operators
!x 1.0 if x = 0, otherwise 0.0
-x (0.0 - x)
x*y x multiplied by y
x/y x divided by y
x+y x plus y
x-y x minus y
x & y 1.0 if both x and y are not 0.0, otherwise 0.0
x && y 1.0 if both x and y are not 0.0, otherwise 0.0, i.e., identical to above
x | y 1.0 if either x or y (or both) are not 0.0, otherwise 0.0
x || y 1.0 if either x or y (or both) are not 0.0, otherwise 0.0, i.e., identical to above
Rotation
To rotate an image, we need the standard equations for 2D rotation:
x' = x * cos(theta) - y * sin(theta)
y' = x * sin(theta) + y * cos(theta)
Using the n1 slider for the angle theta, and a sample function, we get (for the red channel):
getr1b(x*cos(n1) - y*sin(n1), x*sin(n1) + y*cos(n1))
This calculates the current pixel’s (x,y) position rotated around the origin at (0,0) (the bottom-
left corner), and then fetches the red component from the source pixel at this rotated position.
For centered rotation, we need to subtract 0.5 from our x and y coordinates before we rotate
them, and add 0.5 back to them afterward:
getr1b((x-.5)*cos(n1) - (y-.5)*sin(n1) + .5, (x-.5)*sin(n1) + (y-.5)*cos(n1) + .5)
Which brings us to the next lesson: Setup and Intermediate Expressions. These are useful for
speeding things up by minimizing the work done in the Channel expressions. The Setup
expressions are executed only once per frame, and their results don’t change for any pixel, so
you can use these for s1 and s2, respectively:
s1: cos(n1)
s2: sin(n1)
Intermediate expressions are executed once for each pixel, so you can use these for i1 and i2:
(x-.5) * s1 - (y-.5) * s2 + .5
(x-.5) * s2 + (y-.5) * s1 + .5
These are the x and y parameters for the getr1b() function from above, but with the Setup
results, s1 and s2, substituted so that the trig functions are executed only once per frame, not
every pixel. Now you can use these intermediate results in your Channel expressions:
getr1b(i1, i2)
getg1b(i1, i2)
getb1b(i1, i2)
geta1b(i1, i2)
With the Intermediate expressions substituted in, we only have to do all the additions,
subtractions, and multiplications once per pixel, instead of four times. As a rule of thumb, if it
doesn’t change, do it only once.
This is a simple rotation that doesn’t take into account the image aspect at all. It is left as an
exercise for you to include this (sorry). Another improvement could be to allow rotation around
points other than the center.
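The arithmetic above can be sketched outside Fusion. This Python version (not Fusion expression syntax) uses the same s1, s2, i1, and i2 names as the text:

```python
import math

def centered_rotation(x, y, n1):
    """Return the source position (i1, i2) to sample for the pixel at
    normalized position (x, y), rotated by angle n1 around (0.5, 0.5)."""
    # Setup expressions: evaluated once per frame, not once per pixel.
    s1 = math.cos(n1)
    s2 = math.sin(n1)
    # Intermediate expressions: evaluated once per pixel.
    i1 = (x - 0.5) * s1 - (y - 0.5) * s2 + 0.5
    i2 = (x - 0.5) * s2 + (y - 0.5) * s1 + 0.5
    return i1, i2

# Rotating by 90 degrees maps the mid-right point (1.0, 0.5)
# to the mid-top point (0.5, 1.0).
i1, i2 = centered_rotation(1.0, 0.5, math.pi / 2)
```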
Filtering
Our second example duplicates the functionality of a 3 x 3 Custom Filter node set to average
the current pixel together with the eight pixels surrounding it. To duplicate it with a Custom
Tool node, add a Custom Tool node to the node tree, and enter the following expressions into
the Setup tab.
S1: 1.0/w1
S2: 1.0/h1
These two expressions are evaluated at the beginning of each frame. S1 divides 1.0 by the
current width of the frame, and S2 divides 1.0 by the height. This provides a floating-point
value between 0.0 and 1.0 that represents the distance from the current pixel to the next pixel
along each axis.
Now enter the following expression into the first text control of the Channel tab (r):
(getr1w(x-s1, y-s2) + getr1w(x, y-s2) + getr1w(x+s1, y-s2) +
getr1w(x-s1, y) + getr1w(x, y) + getr1w(x+s1, y) +
getr1w(x-s1, y+s2) + getr1w(x, y+s2) + getr1w(x+s1, y+s2)) / 9
This expression adds together the current pixel and the eight pixels surrounding it by calling the
getr1w() function nine times and providing it with values relative to the current position. Note
that we referred to the neighboring pixels by using x+s1, y+s2, rather than using x+1, y+1.
Fusion refers to pixels as floating-point values between 0.0 and 1.0, which is why we created
the expressions we used in the Setup tab. If we had used x+1, y+1 instead, the expression
would have sampled the same pixel over and over again. (The function we used wraps the
pixel position around the image if the offset values are out of range.)
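The sampling just described can be sketched in Python; wrap-around indexing stands in for the getr1w() behavior, and integer pixel indices replace the 0.0–1.0 coordinates for brevity:

```python
def sample_wrap(img, ix, iy):
    """Fetch a pixel, wrapping out-of-range coordinates around the
    image, as the get...w() functions do."""
    h, w = len(img), len(img[0])
    return img[iy % h][ix % w]

def average_3x3(img, ix, iy):
    """Average the current pixel together with its eight neighbors."""
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            total += sample_wrap(img, ix + dx, iy + dy)
    return total / 9

img = [[1.0, 1.0, 1.0],
       [1.0, 10.0, 1.0],
       [1.0, 1.0, 1.0]]
print(average_3x3(img, 1, 1))  # (8 * 1.0 + 10.0) / 9 = 2.0
```

At the corner pixel (0, 0), the wrap-around indexing pulls values from the opposite edges, just as described above.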
That took care of the red channel; now use the same expression for the green, blue, and alpha
channels, substituting getg1w, getb1w, and geta1w for getr1w, respectively.
It’s time to view the results. Add a Background node set to a solid color and change the color
to a pure red. Add a hard-edged Rectangular effects mask and connect it to the expression
just created.
For comparison, add a Custom Filter node set up as the same 3 x 3 averaging filter. Connect the
output of the Background node to this node and view the results. Alternate between viewing the
Custom Tool node and the Custom Filter node while zoomed in close to the top corners of the
effects mask.
Of course, the Custom Filter node renders a lot faster than the Custom Tool node we created,
but the flexibility of the Custom Tool node is its primary advantage. For example, you could
use an image connected to input 2 to control the filtering applied to input 1 by changing all
instances of getr1w, getg1w, and getb1w in the expression to getr2w, getg2w, and getb2w, while
leaving the r1, g1, and b1 references as they are.
This is just one example; the possibilities of the Custom Tool node are limitless.
Fields [Flds]
Inputs
The two inputs on the Fields node are used to connect the 2D image streams that are processed
or merged.
– Stream1 Input: The orange background input is used for the primary 2D image that is
interpolated or converted.
– Stream2 Input: The optional green foreground input is only used when merging two interlaced
images together.
Controls Tab
The Controls tab includes two menus. The Operation menu is used to select the type of field
conversion performed. The Process Mode menu is used to select the field’s format for the
output image.
Operation Menu
– Do Nothing: This causes the images to be affected by the Process Mode selection exclusively.
– Strip Field 2: This removes field 2 from the input image stream, which shortens the image to half
of the original height.
– Strip Field 1: This removes field 1 from the input image stream, which shortens the image to half of
the original height.
– Strip Field 2 and Interpolate: This removes field 2 from the input image stream and inserts a
field interpolated from field 1 so that image height is maintained. Should be supplied with frames,
not fields.
– Strip Field 1 and Interpolate: This removes field 1 from the input image stream and inserts a
field interpolated from field 2 so that image height is maintained. Should be supplied with frames,
not fields.
– Interlace: This combines fields from the input image stream(s). If supplied with one image
stream, each pair of frames is combined to form half the number of double-height frames.
If supplied with two image streams, single frames from each stream are combined to form
double-height images.
– De-Interlace: This separates fields from one input image stream, producing twice the number
of half-height frames.
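The scanline bookkeeping behind Interlace and De-Interlace can be sketched as follows (a Python illustration only; which field is field 1 depends on the footage's field dominance):

```python
def deinterlace(frame):
    """Split one frame into two half-height fields of alternating lines."""
    return frame[0::2], frame[1::2]

def interlace(field1, field2):
    """Weave two half-height fields back into one double-height frame."""
    frame = []
    for line1, line2 in zip(field1, field2):
        frame.extend([line1, line2])
    return frame

frame = ["line0", "line1", "line2", "line3"]
f1, f2 = deinterlace(frame)   # ["line0", "line2"], ["line1", "line3"]
restored = interlace(f1, f2)  # weaving the two fields restores the frame
```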
Process Mode
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Controls Tab
The Controls tab contains the parameters for setting the duration and guidance of the
averaged frames.
Sample Direction
The Sample Direction menu determines if the averaged frames are taken before the current frame,
after, or a mix of the two.
– Forward: Averages the number of frames set by the Frames slider after the current frame.
– Both: Averages the number of frames set by the Frames slider, taking frames before and after the
current frame.
– Backward: Averages the number of frames set by the Frames slider before the current frame.
Missing Frames
This control determines the behavior if a frame is missing from the clip.
– Duplicate Original: Uses the last original frame until a new frame is available.
– Blank Frame: Leaves missing frames blank.
Frames
This slider sets the number of frames that are averaged.
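The interaction of Sample Direction and Frames can be sketched like this (a Python illustration with scalars standing in for whole images; the exact boundary frames the node includes are an assumption):

```python
def average_frames(clip, current, count, direction):
    """Average `count` frames of `clip` around frame `current`.
    direction is 'forward', 'backward', or 'both'. Scalars stand in
    for images; edge clamping is a simplifying assumption."""
    if direction == "forward":
        start = current
    elif direction == "backward":
        start = current - count + 1
    else:  # 'both': take frames on either side of the current frame
        start = current - count // 2
    frames = [clip[max(0, min(i, len(clip) - 1))]
              for i in range(start, start + count)]
    return sum(frames) / count

clip = [0.0, 3.0, 6.0, 9.0, 12.0]
print(average_frames(clip, 2, 3, "both"))  # (3 + 6 + 9) / 3 = 6.0
```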
TIP: The Keyframe Stretcher can be used on a single parameter by applying the
Keystretcher modifier.
Inputs
The single input on the Keyframe Stretcher node is used to connect a 2D image that contains
keyframe animation.
– Input: The orange input is used for any node with keyframed animation. The input can be
a Merge node that is not animated but contains foreground and background nodes that are
animated.
The diagram below shows the original 50-frame animation added to a parameter. The Keyframe
Stretcher Start and End would be set to 0 and 50. The second keyframe is set at frame 10, and the
third keyframe is set at frame 40. Setting the Stretch Start to frame 11 and the Stretch End to frame 39
will keep the existing keyframes at the same speed (number of frames). The middle will be stretched.
In the below example, the duration of the clip is extended to 75 frames. The first 10 frames and the last
10 frames of the animation run at the same speed as the original animation, while any animation in the
middle is stretched to fill the difference.
NOTE: The actual Spline Editor will show only the original keyframe positions. The splines
are not changed by the Keyframe Stretcher; only the animation is changed.
Animation modified to 75 frames but stretching only the middle of the animation
Inspector
Keyframes Tab
The Keyframes tab includes Source controls for setting the source duration and Stretch controls for
setting the area of the animation that gets modified.
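The stretch described above can be sketched as a mapping from output frames back to source-animation frames (a Python illustration, not the node's exact math):

```python
def stretch_time(out_frame, src_end, stretch_start, stretch_end, new_end):
    """Map an output frame to a source-animation frame.

    The head (up to stretch_start) and the tail (after stretch_end) keep
    their original speed; only the middle region stretches to fill the
    new duration.
    """
    shift = new_end - src_end                 # frames added to the middle
    if out_frame <= stretch_start:
        return float(out_frame)               # head: original speed
    if out_frame >= stretch_end + shift:
        return float(out_frame - shift)       # tail: original speed
    t = (out_frame - stretch_start) / (stretch_end + shift - stretch_start)
    return stretch_start + t * (stretch_end - stretch_start)

# The example above: a 50-frame animation stretched to 75 frames,
# with the stretch region set to frames 11-39.
print(stretch_time(70, 50, 11, 39, 75))  # tail frame: 70 - 25 = 45.0
```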
Inputs
The single input on the Run Command node is used to pass through a 2D image.
– Input: The optional orange image input is not required for this node to operate. However, if it is
connected to a node’s output, the Run Command will only launch after the connected node has
finished rendering. This is often useful when connected to a Saver, to ensure that the output
frame has been fully saved to disk first. If the application launched returns a non-zero result, the
node will also fail.
Frame Tab
The Frame tab is where the command to execute is selected and modified.
Hide
Enable the Hide checkbox to prevent the application or script from displaying a window when it
is executed.
Wait
Enable this checkbox to cause the node to wait for a remote application or tool to exit before
continuing. If this checkbox is disabled, Fusion continues rendering without waiting for the external
application.
Frame Command
This field is used to specify the path for the command to be run after each frame is rendered. The
Browse button can be used to identify the path.
Interactive
This checkbox determines whether the launched application should run interactively, allowing
user input.
You may also pad a value with spaces by calling the wildcard as %x, where x is the number of spaces
with which you would like to pad the value.
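For illustration, the padding behavior can be sketched in Python (interpreting the number in the wildcard as a field width is an assumption; this is not Fusion's parser):

```python
def expand_wildcard(token, frame):
    """Expand a frame-number wildcard: '%4f' pads the value with spaces
    to 4 characters, '%04f' pads it with zeros. Illustrative only."""
    spec = token[1:-1]             # strip the leading % and trailing f
    if spec.startswith("0"):
        return str(frame).zfill(int(spec))
    return str(frame).rjust(int(spec))

print(expand_wildcard("%04f", 7))  # 0007
print(expand_wildcard("%4f", 7))   #    7
```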
The Run Command Start and End tabs
Example
To copy the saved files from a render to another directory as each frame is rendered, save the
following text in a file called copyfile.bat to your C:\ directory (the root folder):
@echo off
set parm=%1 %2
copy %1 %2
set parm=
Create or load any node tree that contains a Saver. The following example assumes a Saver is
set to output D:\test0000.exr, test0001.exr, etc. You may have to modify the example to match.
Add a Run Command node after the Saver to ensure the Saver has finished saving first. Now
enter the following text into the Run Command node’s Frame Command text box:
C:\copyfile.bat D:\test%04f.exr C:\
Select the Hide Frame command checkbox to prevent the Command Prompt window from
appearing briefly after every frame.
When this node tree is rendered, each file will be immediately copied to the C:\ directory as it
is rendered.
The Run Command node could be used to transfer the files via FTP to a remote drive on the
network, to print out each frame as it is rendered, or to execute a custom image-
processing tool.
The Run Command node is not restricted to executing simple batch files. FusionScript,
VBScript, Jscript, CGI, and Perl files could also be used, as just a few examples.
Inputs
The two inputs on the Set Domain node are used to connect 2D images.
– Input: The orange background input must be connected. It accepts a 2D image with the DoD
you want to replace or adjust.
– Foreground: The green image input is optional but also accepts a 2D image as its input. When
the foreground input is connected, the Set Domain node will replace the Background input’s
domain of definition with the foreground’s DoD.
A Set Domain node manually sets the area to limit image processing.
Controls Tab
Mode
The Mode menu has two choices depending on whether you want to adjust or offset the existing
domain or set precise values for it.
The same operations can be performed in Set or in Adjust mode. In Adjust mode, the sliders default
to 0, which marks the current extent of the image on each edge. Positive values shrink the DoD,
while negative values expand it to include more data.
Set mode defaults to the full extent of the visible image. Sliders default to a scale of 0-1 from left to
right and bottom to top.
Left
Defines the left border of the DoD. Higher values on this slider move the left border toward the right,
excluding more data from the left margin.
1 represents the right border of the image; 0 represents the left border. The slider defaults to 0
(left border).
Bottom
Defines the bottom border of the DoD. Higher values on this slider move the bottom border toward the
top, excluding more data from the bottom margin.
1 represents the top border of the image; 0 represents the bottom border. The slider defaults to 0
(bottom border).
Right
Defines the right border of the DoD. Higher values on this slider move the right border toward the left,
excluding more data from the right margin.
1 represents the right border of the image; 0 represents the left border. In Set mode, the slider defaults
to 1 (right border).
Top
Defines the top border of the DoD. Higher values on this slider move the top border toward the
bottom, excluding more data from the top margin.
1 represents the top border of the image; 0 represents the bottom border. In Set mode, the slider
defaults to 1 (top border).
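As a sketch of how the sliders translate into a pixel rectangle (hypothetical helper names; the node's internals may differ):

```python
def dod_set(width, height, left, bottom, right, top):
    """Set mode: sliders are normalized 0-1 positions across the frame,
    left to right and bottom to top."""
    return (round(left * width), round(bottom * height),
            round(right * width), round(top * height))

def dod_adjust(dod, width, height, left, bottom, right, top):
    """Adjust mode: sliders default to 0; positive values shrink the
    existing DoD, negative values expand it."""
    l, b, r, t = dod
    return (l + round(left * width), b + round(bottom * height),
            r - round(right * width), t - round(top * height))

full = dod_set(1920, 1080, 0.0, 0.0, 1.0, 1.0)   # (0, 0, 1920, 1080)
trimmed = dod_adjust(full, 1920, 1080, 0.1, 0.0, 0.1, 0.0)
print(trimmed)  # (192, 0, 1728, 1080)
```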
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The single input on the Time Speed node is used to connect a 2D image that will be retimed.
– Input: The orange input is used for the primary 2D image that will be retimed.
A MediaIn node having its speed changed in the Time Speed node.
Inspector
Speed
This control is used to adjust the Speed, in percentage values, of the outgoing image sequence.
Negative values reverse the image sequence. 200% Speed is represented by a value of 2.0, 100% by
1.0, 50% by 0.5, and 10% by 0.1.
The Speed control cannot be animated.
Delay
Use this control to Delay the outgoing image sequence by the specified number of frames. Negative
numbers offset time back, and positive numbers advance time.
Interpolate Mode
This menu determines how the time speed is processed in order to improve its visual playback
quality, especially in the case of clips that are slowed down. There are three choices in the menu.
– Nearest: The most processor efficient and least sophisticated method of processing; frames are
either dropped for fast motion or duplicated for slow motion.
– Blend: Also processor efficient, but can produce smoother results; adjacent duplicated frames are
dissolved together to smooth out slow or fast motion effects.
– Flow: The most processor intensive but highest quality method of speed effect processing.
Using vector channels pre-generated from an Optical Flow node, new frames are generated to
create slow or fast motion effects. The result can be exceptionally smooth when motion in a clip
is linear. However, two moving elements crossing in different directions or unpredictable camera
movement can cause unwanted artifacts.
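The difference between Nearest and Blend can be sketched in Python (scalars stand in for frames; positive speeds only, and Flow is omitted since it needs motion vectors):

```python
def retime(frames, speed):
    """Retime a clip by `speed` (2.0 = 200%, 0.5 = 50%), returning
    (nearest, blend) output sequences. A sketch of the behavior
    described above, not the node's internals."""
    n_out = int(len(frames) / abs(speed))
    nearest, blend = [], []
    for i in range(n_out):
        src = i * speed
        lo = int(src)
        hi = min(lo + 1, len(frames) - 1)
        frac = src - lo
        # Nearest: drop or duplicate whole frames.
        nearest.append(frames[min(round(src), len(frames) - 1)])
        # Blend: dissolve adjacent frames together.
        blend.append(frames[lo] * (1 - frac) + frames[hi] * frac)
    return nearest, blend

nearest, blend = retime([0.0, 1.0, 2.0, 3.0], speed=0.5)
# At 50% speed there are 8 output frames; Blend fills the in-betweens:
print(blend)  # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.0]
```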
Depth Ordering
This menu is displayed only when Interpolation is set to Flow. The Depth Ordering is used to
determine which parts of the image should be rendered on top. This is best explained by example.
In a locked-off camera shot where a car is moving through the frame, the background does not move,
so it produces small, or slow, vectors. The car produces larger, or faster, vectors.
The Depth Ordering, in this case, is Fastest on Top, since the car draws over the background.
In a shot where the camera pans to follow the car, the background has faster vectors, and the car has
slower vectors, so the Depth ordering method would be Slowest on Top.
Clamp Edges
This checkbox is displayed only when Interpolation is set to Flow. Under certain circumstances, this
option can remove the transparent gaps that may appear on the edges of interpolated frames. Clamp
Edges can cause a stretching artifact near the edges of the frame that is especially visible with objects
moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use clamp edges only to correct small gaps around the
edges of an interpolated frame.
Edge Softness
This slider is only displayed when Interpolation is set to Flow and Clamp Edges is enabled. It helps to
reduce the stretchy artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this can
lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The single input on the Time Stretcher node is used to connect a 2D image that will be time stretched.
– Input: The orange input is used for the primary 2D image that will be time stretched.
A MediaIn node having its time ramped to various speeds in the Time Stretcher node.
Inspector
NOTE: The Source Time spline may not be immediately visible until Edit is selected from the
Source Time’s contextual menu, or Display all Splines is selected from the Spline Window’s
contextual menu.
Interpolate Mode
This menu determines how the time speed is processed in order to improve its visual playback
quality, especially in the case of clips that are slowed down. There are three choices in the menu.
– Nearest: The most processor efficient and least sophisticated method of processing; frames are
either dropped for fast motion or duplicated for slow motion.
– Blend: Also processor efficient but can produce smoother results; adjacent duplicated frames are
dissolved together to smooth out slow or fast motion effects.
– Flow: The most processor intensive but highest quality method of speed effect processing.
Using vector channels pre-generated from an Optical Flow node, new frames are generated to
create slow or fast motion effects. The result can be exceptionally smooth when motion in a clip
is linear. However, two moving elements crossing in different directions or unpredictable camera
movement can cause unwanted artifacts.
Sample Spread
This slider is displayed only when Interpolation is set to Blend. The slider controls the strength of the
interpolated frames on the current frame. A value of 0.5 blends 50% of the frame before and 50% of
the frame ahead and 0% of the current frame.
Depth Ordering
This menu is displayed only when Interpolation is set to Flow. The Depth Ordering is used to
determine which parts of the image should be rendered on top. This is best explained by example.
In a locked-off camera shot where a car is moving through the frame, the background does not move,
so it produces small, or slow, vectors. The car produces larger, or faster, vectors.
The Depth Ordering in this case is Fastest on Top, since the car draws over the background.
In a shot where the camera pans to follow the car, the background has faster vectors, and the car has
slower vectors, so the Depth ordering method would be Slowest on Top.
Clamp Edges
This checkbox is displayed only when Interpolation is set to Flow. Under certain circumstances, this
option can remove the transparent gaps that may appear on the edges of interpolated frames. Clamp
Edges can cause a stretching artifact near the edges of the frame that is especially visible with objects
moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use clamp edges only to correct small gaps around the
edges of an interpolated frame.
Edge Softness
This slider is displayed only when Interpolation is set to Flow and Clamp Edges is enabled. It helps to
reduce the stretchy artifacts that might be introduced by Clamp Edges.
Example
Make sure that the current time is either the first or last frame of the clip to be affected in the
project. Add the Time Stretcher node to the node tree. This will create a single point on the
Source Time spline at the current frame. The value of the Source Time will be set to zero for
the entire Global Range.
Set the value of the Source Time to the frame number to be displayed from the original source,
at the frame in time it will be displayed in during the project.
To shrink a 100-frame sequence to 25 frames, follow these steps:
1 Change the Current Time to frame 0.
2 Change the Source Time control to 0.0.
3 Advance to frame 24.
4 Change the Source Time to 99.
5 Check that the spline result is linear.
6 Fusion will render 25 frames by interpolating down the 100 frames to a length of 25.
7 Next, to hold the last frame for 30 frames and then play the clip backward at regular speed,
continue the example from above and follow the steps below.
8 Advance to frame 129.
9 Right-click on the Source Time control and select Set Key from the menu.
10 Advance to frame 229 (129 + 100).
11 Set the Source time to 0.0.
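The Source Time behavior built in the steps above can be sketched as linear interpolation between keyframes (a Python illustration; the real spline can use other interpolation types):

```python
def source_time(frame, keys):
    """Linearly interpolate the Source Time spline.
    `keys` is a sorted list of (output_frame, source_frame) pairs."""
    if frame <= keys[0][0]:
        return float(keys[0][1])
    for (f0, s0), (f1, s1) in zip(keys, keys[1:]):
        if frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return s0 + t * (s1 - s0)
    return float(keys[-1][1])

# Keys following the steps above: shrink, hold, then reverse playback.
keys = [(0, 0.0), (24, 99.0), (129, 99.0), (229, 0.0)]
print(source_time(12, keys))   # halfway through the shrink: 49.5
print(source_time(60, keys))   # inside the hold: 99.0
print(source_time(179, keys))  # halfway through the reverse: 49.5
```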
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
There are no inputs on this node.
Inspector
Controls Tab
The Controls tab in the Wireless Link node contains a single Input field for the linked node.
Input
To use the Wireless Link node, in the Node Editor, drag the 2D node into the Input field of the Wireless
Link node. Any change you make to the original node is wirelessly replicated in the Wireless Link
node. You can use the output from the Wireless Link node to connect to a nearby node.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the miscellaneous nodes. The controls
are consistent and work the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels” in the Fusion Reference Manual or Chapter 79 in the DaVinci Resolve
Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-
accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Optical Flow
This chapter details the Optical Flow nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Optical Flow [OF] 1198
Repair Frame [Rep] 1201
Smooth Motion [SM] 1203
Tween [Tw] 1205
The Common Controls 1208
TIP: If the footage input flickers on a frame-by-frame basis, it is a good idea to deflicker the
footage beforehand.
Inputs
The Optical Flow node includes a single orange image input.
– Input: The orange background input accepts a 2D image. This is the sequence of frames for
which you want to compute optical flow. The output of the Optical Flow node includes the
image and vector channels. The vector channels can be displayed by right-clicking in the
viewer and choosing Channel > Vectors and then Options > Normalize Color Range.
TIP: When analyzing Optical Flow vectors, consider adding a Smooth Motion node afterward
with smoothing for forward/backward vectors enabled.
Inspector
Warp Count
Decreasing this slider makes the optical flow computations faster. To understand what this option
does, you must understand that the optical flow algorithm progressively warps one image until it
matches with the other image. After some point, convergence is reached, and additional warps
become a waste of computational time. You can tweak this value to speed up the computations, but it
is good to watch what the optical flow is doing at the same time.
Iteration Count
Decreasing this slider makes the computations faster. In particular, just like adjusting the Warp Count,
adjusting this option higher will eventually yield diminishing returns and not produce significantly
better results. By default, this value is set to something that should converge for all possible shots and
can be tweaked lower fairly often without reducing the disparity’s quality.
Smoothness
This controls the smoothness of the optical flow. Higher smoothness helps deal with noise, while lower
smoothness brings out more detail.
Half Resolution
The Half Resolution checkbox is used purely to speed up the calculation of the optical flow. The input
images are resized down and tracked to produce the optical flow.
Smoothness
This controls the smoothness of the optical flow. Higher smoothness helps deal with noise, while lower
smoothness brings out more detail.
Edges
This slider is another control for smoothness but applies it based on the color channel. It tends to have
the effect of determining how edges in the flow follow edges in the color images. When it is set to a
low value, the optical flow becomes smoother and tends to overshoot edges. When it is set to a high
value, details from the color images start to slip into the optical flow, which is not desirable. Edges in
the flow end up more tightly aligning with the edges in the color images. This can result in streaked-
out edges when the optical flow is used for interpolation. As a rough guideline, if you are using the
disparity to produce a Z-channel for post effects like Depth of Field, then set it lower in value. If you are
using the disparity to perform interpolation, you might want it to be higher in value.
Match Weight
This control sets a threshold for how neighboring groups of foreground/background pixels are
matched over several frames. When set to a low value, large structural color features are matched.
When set to higher values, small sharp variations in the color are matched. Typically, a good value for
this slider is in the [0.7, 0.9] range. When dealing with stereo 3D, setting this option higher tends to
improve the matching results in the presence of differences due to smoothly varying shadows or local
lighting variations between the left and right images. The user should still perform a color match or
deflickering on the initial images, if necessary, so they are as similar as possible. This option also helps
with local variations like lighting differences due to light passing through a mirror rig.
Mismatch Penalty
This option controls how the penalty for mismatched regions grows as they become more dissimilar.
The slider provides a choice between a balance of Quadratic and Linear penalties. Quadratic strongly
penalizes large dissimilarities, while Linear is more robust to dissimilar matches. Moving this slider
toward Quadratic tends to give a disparity with more small random variations in it, while Linear
produces smoother, more visually pleasing results.
Warp Count
Decreasing this slider makes the optical flow computations faster. In particular, the computational time
depends linearly upon this option. To understand what this option does, you must understand that the
optical flow algorithm progressively warps one image until it matches with the other image. After some
point, convergence is reached, and additional warps become a waste of computational time. The
default value in Fusion is set high enough that convergence should always be reached. You can tweak
this value to speed up the computations, but it is good to watch what the optical flow is doing at the
same time.
Chapter 50 Optical Flow 1200
Iteration Count
Decreasing this slider makes the computations faster. In particular, the computational time depends
linearly upon this option. Just like adjusting the Warp Count, adjusting this option higher will eventually
yield diminishing returns and not produce significantly better results. By default, this value is set to
something that should converge for all possible shots and can be tweaked lower fairly often without
reducing the disparity’s quality.
Filtering
This option controls filtering operations used during flow generation. Catmull-Rom filtering will
produce better results, but at the same time, turning on Catmull-Rom will increase the computation
time steeply.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Optical Flow nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
TIP: If your footage varies in color from frame to frame, sometimes the repair can be
noticeable because, to fill in the hole, Repair Frame must pull color values from adjacent
frames. Consider using deflickering, color correction, or using a soft-edged mask to help
reduce these kinds of artifacts.
Inputs
There are two inputs on the Repair Frame node. One is used to connect a 2D image that will be
repaired and the other is for an effect mask.
– Input: The orange input is used for the primary 2D image that will be repaired.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the repairs to
certain areas.
A Repair Frame node set up to analyze a MediaIn node using internal optical flow analysis.
Inspector
Controls Tab
The Controls tab includes options for how to repair the frames. It also includes controls for adjusting
the optical flow analysis, identical to those controls in the Optical Flow node.
Depth Ordering
The Depth Ordering determines which parts of the image should be rendered on top by selecting
either Fastest On Top or Slowest On Top. The examples below best explain these options.
In a locked-off camera shot where a car is moving through the frame, the background does not move,
so it produces small, or slow, vectors, while the car produces larger, or faster, vectors.
The depth ordering in this case is Fastest On Top since the car draws over the background.
In a shot where the camera pans to follow the car, the background has faster vectors, and the car has
slower vectors, so the Depth Ordering method is Slowest On Top.
Clamp Edges
Under certain circumstances, this option can remove the transparent gaps that may appear on the
edges of interpolated frames. Clamp Edges causes a stretching artifact near the edges of the frame
that is especially visible with objects moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use clamp edges only to correct small gaps around the
edges of an interpolated frame.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Optical Flow nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
TIP: You can use two or more Smooth Motion nodes in sequence to get additional smoothing.
With one Smooth Motion node, the previous, current, and next frames are examined for a
total of 3; with two Smooth Motion nodes, 5 frames are examined; and with three Smooth
Motion nodes, 7 frames are examined.
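The frame-count relationship in the tip above can be sketched with a simple formula (an illustrative model only, not part of Fusion's scripting API):

```python
# Each Smooth Motion node added in sequence widens the temporal window
# by one frame on each side (previous and next), so n nodes examine
# 2n + 1 frames in total.
def frames_examined(num_smooth_motion_nodes: int) -> int:
    return 2 * num_smooth_motion_nodes + 1

print(frames_examined(1))  # 3
print(frames_examined(2))  # 5
print(frames_examined(3))  # 7
```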
Another technique using two Smooth Motion nodes is to use the first Smooth Motion node to smooth
the Vector and Back Vector channels. Use the second Smooth Motion to smooth the channels you
want to smooth (e.g., Disparity). This way, you use the smoothed vector channels to smooth Disparity.
You can also try using the smoothed motion channels to smooth the motion channels.
Inputs
The Smooth Motion node includes a single orange image input.
– Input: The orange image input accepts a 2D image. This is the sequence of images for which
you want to compute smooth motion. This image must have precomputed Vector and Back
Vector channels either generated from an Optical Flow node or saved in EXR format with
vector channels.
A Smooth Motion node using Vector and Back Vector channels from the Optical Flow node.
Inspector
Channel
Smooth Motion can be applied to more than just the RGBA channels. It can also be applied to the
other AOV channels.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Optical Flow nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Tween [Tw]
Inputs
There are two image inputs on the Tween node and an effects mask input.
– Input 0: The orange input, labeled input 0, is the previous frame to the one you are generating.
– Input 1: The green input, labeled input 1, is the next frame after the one you are generating.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the Tween to
certain areas.
The Tween node receives two neighboring frames and generates the middle one.
Inspector
Controls Tab
The Controls tab includes options for how to tween frames. It also includes controls for adjusting the
optical flow analysis, identical to those controls in the Optical Flow node.
Interpolation Parameter
This option determines where the frame you are interpolating is, relative to the two source frames A
and B. An Interpolation Parameter of 0.0 will result in frame A, a parameter of 1.0 will result in frame B,
and a parameter of 0.5 will yield a result halfway between A and B.
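The parameter's behavior can be modeled as a simple linear blend (a sketch of the math only; the Tween node itself interpolates along optical flow vectors rather than crossfading pixel values):

```python
# Linear interpolation between source frames A and B.
# t = 0.0 yields A, t = 1.0 yields B, and t = 0.5 is halfway between.
def interpolate(a: float, b: float, t: float) -> float:
    return (1.0 - t) * a + t * b

print(interpolate(0.0, 1.0, 0.5))  # 0.5
```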
Depth Ordering
The Depth Ordering determines which parts of the image should be rendered on top by selecting
either Fastest On Top or Slowest On Top. The examples below best explain these options.
In a locked-off camera shot where a car is moving through the frame, the background does not move,
so it produces small, or slow, vectors, while the car produces larger, or faster, vectors.
Clamp Edges
Under certain circumstances, this option can remove the transparent gaps that may appear on the
edges of interpolated frames. However, Clamp Edges can cause a stretching artifact near the edges of
the frame that is especially visible with objects moving through the frame or when the camera is moving.
Because of these artifacts, it is a good idea to use Clamp Edges only to correct small gaps around the
edges of an interpolated frame.
Edge Softness
This slider is displayed only when Clamp Edges is enabled. The slider helps to reduce the stretching
artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this can
lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Optical Flow nodes. These common
controls are described in detail in the following “The Common Controls” section.
Inspector
The Common Optical Flow Settings tab
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Optical Flow category. The controls
are consistent and work the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
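The Blend control can be modeled as a per-pixel linear mix (an illustrative sketch, not Fusion's internal implementation):

```python
# Mix the tool's unprocessed input with its processed output.
# A blend of 0.0 returns the input unchanged, which is why Fusion
# can skip processing entirely; 1.0 returns the full effect.
def blend_output(original: float, processed: float, blend: float) -> float:
    return (1.0 - blend) * original + blend * processed

print(blend_output(0.2, 0.8, 0.0))  # 0.2 (identical to the input)
print(blend_output(0.2, 0.8, 1.0))  # 0.8 (fully processed)
```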
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels” in the Fusion Studio Reference Manual or Chapter 79 in the
DaVinci Resolve Reference Manual.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, see the Fusion scripting documentation.
Paint Node
This chapter details the Paint node available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Paint 1211
Paint Node Modifiers 1217
Keyboard Shortcuts 1218
Inputs
The two inputs on the Paint node are used to connect a 2D image and an effect mask which can be
used to limit the painted area.
– Input: The orange input must be connected to a 2D image. This image determines the size
of the “canvas” on which you paint.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the Paint to
only those pixels within the mask.
A more flexible setup is to use a Background node to set the size that matches the image you are
painting on. In the Inspector, the background would be set to be fully transparent. Then, the Paint tool
can be merged as the foreground over the actual image you want to paint on.
To begin working with the Paint tool, first select the paint stroke type from the Paint toolbar above the
viewer. There are ten stroke types to choose from as well as two additional tools for selecting and
grouping paint strokes. The stroke types and tools are described below in the order they appear in
the toolbar.
– Multistroke: Although this is the default selection and the first stroke type in the toolbar,
Multistroke is typically not the stroke type you will use most often. However, it’s perfect for those
100-strokes-per-frame retouching paint jobs like removing tracking markers. Multistroke is much
faster than the Stroke type but is not editable after it is created. By default, Multistroke lasts for
one frame and cannot be modified after it has been painted. Use the Duration setting in the Stroke
controls to set the number of frames before painting. A shaded area of the Multistroke duration
is visible but not editable in the Keyframes Editor. While Multistrokes aren’t directly editable, they
can be grouped with the PaintGroup modifier, then tracked, moved, and rotated by animating the
PaintGroup instead.
– Clone Multistroke: Similar to Multistroke but specifically meant to clone elements from one
area or image to another. Perfect for those 100-strokes-per-frame retouching paint jobs like
removing tracking markers. Clone Multistroke is faster than the Stroke type but is not editable
after it is created. By default, Clone Multistroke lasts for one frame and cannot be modified after
it has been painted. Use the Duration setting in the Stroke controls to set the number of frames
before painting. A shaded area of the Clone Multistroke duration is visible but not editable in the
Keyframes Editor.
– Stroke: In most cases, the Stroke tool is what people think of when they think of paint and is the
tool of choice for most operations. It is a fully animatable and editable vector-based paint stroke. It
can become slow if hundreds of strokes are used in an image; when creating a lot of paint strokes,
it is better to use Multistroke. The Stroke type has a duration of the entire global range. However,
you can edit its duration at any time in the Keyframes Editor. When the painting is complete,
choose the Select button in the Paint toolbar to avoid accidentally adding new strokes.
– Polyline Stroke: This provides the ability to create and manipulate a stroke in the same way that
a Bézier path or polygon mask might be created. To add a Polyline Stroke, select the Polyline
button and click in the viewer to add the first point. Continue clicking to add additional points to
the polyline. This click-append style is the default, but polyline strokes can also be created in a
draw-append mode. Polylines can be tracked or connected to existing polylines like masks or animation
paths. The Polyline Stroke has a default duration of the entire global range. However, you can edit
its duration at any time in the Keyframes Editor.
Paint edit options are displayed in the viewer after a Polyline stroke is created.
Polyline-based paint strokes include a second toolbar in the viewer to select different editing options.
The paint strokes that include this second toolbar are Polyline Stroke and Copy Polyline. The Stroke
style also displays this toolbar after the stroke is selected and the Make Editable button is clicked in
the Inspector.
– Click Append: This is the default option when creating a polyline stroke. It works more like a
Bézier pen drawing tool than a paintbrush tool. Clicking sets a control point and appends the next
control point when you click again in a different location.
– Draw Append: This is a freehand drawing tool. It paints a stroke similar to drawing with a pencil on
paper. You can create a new Polyline Stroke or Copy Polyline Stroke using the Draw tool, or you
can extend a Stroke style after clicking the Make Editable button in the Inspector.
– Insert: Insert adds a new control point along the paint stroke spline.
– Modify: Modify allows you to safely move or smooth any existing point along a spline without
worrying about accidentally adding a new point.
– Done: Prevents any point along the spline from being moved or modified. Also, new points cannot
be added. You can, however, move and rotate the entire spline.
– Closed: Closes an open polyline.
– Smooth: Changes the selected stroke or control point from a linear to a smooth curve.
– Linear: Changes the selected stroke or control point from a smooth curve to linear.
– Select All: Selects all the control points on the polyline.
Inspector
Brush Controls
Brush Shape
The brush shape buttons select the brush tip shape. Except for the single pixel shape, you can modify
the size of the brush shape in the viewer by holding down the Command or Ctrl key while dragging
the mouse.
– Soft Brush: The Soft Brush type is a circular brush tip with soft edges.
– Circular Brush: A Circular Brush is a brush tip shape with hard edges.
– Image Brush: The Image Brush allows images from any node in the node tree,
or from a file system, to be used as a brush tip.
– Single Pixel Brush: The Single Pixel Brush is perfect for fine detail work, creating a brush tip
precisely one pixel in size. No anti-aliasing is applied to the single pixel brush.
– Square Brush: A Square Brush is a brush tip with hard edges.
Vary Size
Vary size settings change the stroke size based on speed or a pressure-sensitive pen and tablet.
– Constant: The brush tip remains a constant size over the stroke.
– With Pressure: The stroke size varies with the actual applied pressure.
– With Velocity: The stroke size varies with the speed of painting. The faster the stroke,
the thinner it is.
Vary Opacity
Vary opacity settings change the stroke opacity based on speed or a pressure-sensitive pen
and tablet.
– Constant: The brush tip remains at a constant transparency setting over the entire stroke.
– With Pressure: The stroke transparency varies with the applied pressure.
– With Velocity: The stroke transparency varies with the speed of painting. The faster the stroke,
the more transparent it is.
Softness
Use this control to increase or decrease the Softness of a soft brush.
Image Source
When using the Image Source brush type, select between three possible source brush images.
– Node: The image source is derived from the output of a node in the node tree. Drag the node into
the Inspector’s Source node input field to set the source.
– Clip: The image source is derived from an image or sequence on disk. Any file supported by
Fusion’s Loader or MediaIn node can be used.
– Brush: Select an image to use as a brush from the menu. Images located in the Fusion > Brushes
directory are used to populate the menu.
Channel
When the Fill tool is selected, a Channel menu selects which color channel is used in the fill paint. For
example, with alpha selected, the fill occurs on contiguous pixels of the alpha channel.
Apply Controls
Apply Mode
The Apply Modes are buttons that change a brush’s painting functionality.
– Color: The Color Apply Mode paints simple colored strokes. When used in conjunction with an
image brush, it can also be used to tint the image.
– Clone: The Clone Apply Mode copies an area from the same image using adjustable positions and
time offsets. This mode can also copy portions of one image into another image. Any image from
the node tree can be used as the source image.
– Emboss: The Emboss Apply Mode embosses the portions of the image covered by the
brush stroke.
– Erase: Erase reveals the underlying image through all other strokes, effectively erasing portions of
the strokes beneath it without actually destroying the strokes.
– Merge: This Apply Mode effectively merges the brush onto the image. This mode behaves in
much the same way as the Color Apply Mode but has no color controls. It is best suited for use
with the image brush type.
– Smear: Smear the image using the direction and strength of the brushstroke as a guide.
– Stamp: Stamps the brush onto the image, completely ignoring any alpha channel or transparency
information. This mode is best suited for applying decals to the target image.
– Wire: This Wire Removal Mode is used to remove wires, rigging, and other small elements in the
frame by sampling adjacent pixels and drawing them in toward the stroke.
Stroke Controls
The stroke controls contain parameters that adjust the entire stroke of paint as well as control it
over time.
– Size: This control adjusts the size of the brush when the brush type is set to either Soft Brush or
Circle. The diameter of the brush is drawn in the viewer as a small circle surrounding the mouse
pointer. The size can also be adjusted interactively in the viewer by holding the Command or Ctrl
key while dragging the mouse pointer.
– Spacing: The Spacing slider determines the distance between dabs (samples used to draw a
continuous stroke along the underlying vector shape). Increasing this value increases the density
of the stroke, whereas decreasing this value causes the stroke to assume the appearance of a
dotted line.
– Stroke Animation: The Stroke Animation menu provides several pre-built animation effects
that can be applied to a paint stroke. This menu appears only for vector strokes like Stroke and
Polyline Stroke.
– All Frames: This default displays the stroke for all frames of the image connected to the orange
background input of the Paint node.
– Limited Duration: This displays the stroke only for the number of frames specified by the Duration slider.
– Duration: Duration sets the duration of each stroke in frames. This control is present only for
Multistroke and Clone Multistroke, or when the stroke animation mode is set to Limited Duration. It
is most commonly employed for frame-by-frame rotoscoping through a scene.
Each Vector stroke applied to a scene has a duration in the Keyframes Editor that can be trimmed
independently from one stroke to the next. The duration can be set to 0.5, which allows each
stroke to last for a single field only when the node tree is processing in Fields mode.
– Write On and Write Off: This range slider appears when the Stroke Animation is set to one of
the Write On and Write Off methods. The range represents the beginning and end points of the
stroke. Increase the Start value from 0.0 to 1.0 to erase the stroke, or increase the End value from
0.0 to 1.0 to draw the stroke on the screen. This control can be animated to good effect. It works
most effectively when automatically animated through the use of the Write On and Write Off
modes of the Stroke Animation menu.
– Make Editable: This button appears only for Vector strokes. Clicking on Make Editable turns the
current stroke into a polyline spline so that the shape can be adjusted or animated.
NOTE: The MultiStroke tools are built for speed and can contain many strokes internally
without creating a huge list stack in the modifiers.
Each Paint modifier stroke contains Brush controls, Apply controls, and Stroke controls identical to
those found in the main Controls tab of the Inspector.
While painting:
Hold Command or Ctrl while left-dragging to change brush size.
Hold Option or Alt while clicking to pick a color in the viewer.
While cloning:
Option-click or Alt-click to set the clone source position. Strokes start cloning
from the selected location.
Hold O to temporarily enable a 50% transparent overlay of the clone source
(% can be changed with pref Tweaks.CloneOverlayBlend).
Press P to toggle an opaque overlay of the clone source.
Copy Rect/Ellipse:
Shift + drag out the source to constrain the shape.
Paint Groups:
Command + drag or Ctrl + drag to change the position of a group’s crosshair, without
changing the position of the group.
Particle Nodes
This chapter details the Particle nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Particle Nodes
The remaining particle nodes modify the pEmitter results to simulate natural phenomena like gravity,
flocking, and bounce. The names of particle nodes all begin with a lowercase p to differentiate them
from non-particle nodes. They can be found in the particles category in the Effects Library.
pAvoid [pAv]
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
Distance
Determines the distance from the region a particle should be before it begins to move away from
the region.
Strength
Determines how strongly the particle moves away from the region. Negative values make the particles
move toward the region instead.
pBounce [pBn]
Inputs
The pBounce node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green or magenta bitmap or mesh input appears on the node
when you set the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area particles bounce off.
A pBounce node using a Shape 3D node as the region that particles bounce off
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result.
Two nodes with the same seed values will produce the same random results. Click the Randomize
button to randomly select a new seed value, or adjust the slider to manually select a new seed value.
Elasticity
Elasticity affects the strength of a bounce, or how much velocity the particle will have remaining after
impacting upon the Bounce region. A value of 1.0 will cause the particle to possess the same velocity
after the bounce as it had entering the bounce. A value of 0.1 will cause the particle to lose 90% of its
velocity upon bouncing off of the region.
The range of this control is 0.0 to 1.0 by default, but greater values can be entered manually. This will
cause the particles to gain momentum after an impact, rather than lose it. Negative values will be
accepted but do not produce a useful result.
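A minimal sketch of how Elasticity scales the velocity remaining after impact (illustrative only; the node also applies Variance, Spin, and Roughness at the bounce):

```python
# A particle keeps elasticity * speed after bouncing: 1.0 preserves
# the incoming speed, 0.1 leaves only 10% of it, and values above 1.0
# (entered manually) make the particle gain momentum on impact.
def speed_after_bounce(speed_in: float, elasticity: float) -> float:
    return speed_in * elasticity

print(speed_after_bounce(10.0, 0.1))  # 1.0 -> 90% of the velocity is lost
```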
Variance
By default, particles that strike the Bounce region will reflect evenly off the edge of the Bounce region,
according to the vector or angle of the region. Increasing the Variance above 0.0 will introduce a
degree of variation to that angle of reflection. This can be used to simulate the effect of a
rougher surface.
Spin
By default, particles that strike the region will not have their angle or orientation affected in any way.
Increasing or decreasing the Spin value will cause the Bounce region to impart a spin to the particle
based on the angle of collision, or to modify any existing spin on the particle. Positive values will
impart a forward spin, and negative values impart a backward spin. The larger the value, the faster the
spin applied to the particle will be.
Roughness
This slider varies the bounce off the surface to slightly randomize particle direction.
Surface Motion
This slider makes the bounce surface behave as if it had motion, thus affecting the particles.
pChangeStyle [pCS]
Inputs
The pChange Style node has a single orange input by default. Like most particle nodes, this orange
input accepts only other particle nodes. A green or magenta bitmap or mesh input appears on the
node when you set the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the custom particle node takes effect.
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
Change Sets
This option allows the user to change the particle’s Set so it becomes influenced by forces other than
those affecting its original Set. See “The Common Controls” in this chapter to learn more about Sets.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pCustom [pCu]
Inputs
The pCustom node has three inputs. Like most particle nodes, the orange input accepts only other
particle nodes. The green and magenta inputs are 2D image inputs for custom image calculations.
Optionally, there are teal or white bitmap or mesh inputs, which appear on the node when you set the
Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Image 1 and 2: The green and magenta image inputs accept 2D images that are used for per
pixel calculations and compositing functions.
– Region: The teal or white region input takes a 2D image or a 3D mesh depending on whether
you set the Region menu to Bitmap or Mesh. The color of the input is determined by whichever
is selected first in the menu. The 3D mesh or a selectable channel from the bitmap defines the
area where the custom particle node takes effect.
A pCustom node using a Shape 3D node as the region where the custom event occurs
Number 1-8
Numbers are variables with a dial control that can be animated or connected to modifiers exactly as
any other control might. The numbers can be used in equations on particles at the current time: n1, n2, n3,
n4, … or at any time: n1_at(float t), n2_at(float t), n3_at(float t), n4_at(float t), where t is the time you
want. The values of these controls are available to expressions in the Setup and Intermediate tabs.
Position 1-8
These eight point controls include 3D X,Y,Z position controls. They are normal positional controls and
can be animated or connected to modifiers as any other node might. They are available to expressions
entered in the Setup, Intermediate, and Channels tabs.
Setup 1-8
Up to eight separate expressions can be calculated in the Setup tab of the pCustom node. The Setup
expressions are evaluated once per frame, before any other calculations are performed. The results
are then made available to the other expressions in the node as variables s1 through s8.
Think of them as global setup scripts that can be referenced by the intermediate and channel scripts.
Particle
Particle position, velocity, rotation, and other controls are available in the Particle tab.
The following particle properties are exposed to the pCustom control:
– pxi1, pyi1: The 2D position of a particle, corrected for image 1’s aspect.
– pxi2, pyi2: The 2D position of a particle, corrected for image 2’s aspect.
– rgnhit: This value is 1 if the particle hit the pCustom node’s defined region.
– rgndist: This variable contains the particle’s distance from the region.
– rgnix, rgniy, rgniz: Values representing where on the region the particle hit.
– rgnnx, rgnny, rgnnz: The region surface normal at the point where the particle hit the region.
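As a purely hypothetical illustration, an expression entered in one of the pCustom node’s channel fields might use these variables to stop a particle once it strikes the defined region. The if(c, x, y) form follows Fusion’s Custom Tool expression syntax, and the vy velocity channel is assumed to be exposed in the Channels tab; verify both against your version before relying on them:

```
if(rgnhit == 1, 0, vy)
```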
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
The pCustom Force node has three inputs. Like most particle nodes, the orange input accepts only
other particle nodes. The green and magenta inputs are 2D image inputs for custom image calculations.
Optionally, there are teal or white bitmap or mesh inputs, which appear on the node when you set the
Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Image 1 and 2: The green and magenta image inputs accept 2D images that are used for per-
pixel calculations and compositing functions.
– Region: The teal or white region input takes a 2D image or a 3D mesh depending on whether
you set the Region menu to Bitmap or Mesh. The color of the input is determined by whichever
is selected first in the menu. The 3D mesh or a selectable channel from the bitmap defines the
area where the pCustom Force takes effect.
The tabs and controls located in the Inspector are similar to the controls found in the pCustom node.
Refer to the pCustom node in this chapter for more information.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pDirectionalForce [pDF]
Inputs
The pDirectional Force node has a single orange input by default. Like most particle nodes, this
orange input accepts only other particle nodes. A green or magenta bitmap or mesh input appears on
the node when you set the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the directional force takes effect.
A pDirectional Force node placed between the pEmitter and pRender nodes
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to select a new seed value randomly, or adjust the slider to select a new seed
value manually.
Strength
Determines the power of the force. Positive values will move the particles in the direction set by the
controls; negative values will move the particles in the opposite direction.
Direction
Determines the direction in X/Y space.
Direction Z
Determines the direction in Z space.
pEmitter [pEm]
Inputs
By default, the pEmitter node has no inputs at all. You can enable an image input by selecting Bitmap
from the Style menu in the Style tab. Also, two region inputs, one for bitmap and one for mesh, appear
on the node when you set the Region menu in the Region tab to either Bitmap or Mesh. The colors of
these inputs change depending on the order in which they are enabled.
– Style Bitmap Input: This image input accepts a 2D image to use as the particles’ image. Since
this image duplicates into potentially thousands of particles, it is best to keep these images
small and square—for instance, 256 x 256 pixels.
– Region: The region inputs take a 2D image or a 3D mesh depending on whether you set the
Region menu to Bitmap or Mesh. The color of the input is determined by whichever is selected
first in the menu. The 3D mesh or a selectable channel from the bitmap defines the area where
the particles are emitted.
A pEmitter node connected to a pRender node is a typical setup for most particle systems.
Number Variance
This modifies the number of particles generated for each frame, as specified by the Number control.
For example, if Number is set to 10.0 and Number Variance is set to 2.0, the emitter will produce
anywhere from 9 to 11 particles per frame. If the value of Number Variance is more than twice as large as
the value of Number, it is possible that no particles will be generated for a given frame.
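The example above can be modeled as follows (an assumed model of the variance math, not Fusion's internal code):

```python
import random

# Per-frame particle count: Number plus a random offset spanning
# +/- half the Number Variance. With Number = 10 and Variance = 2,
# the result ranges from 9 to 11; if Variance exceeds twice Number,
# the count can reach zero for a given frame.
def particles_this_frame(number: float, variance: float) -> int:
    count = number + random.uniform(-variance / 2.0, variance / 2.0)
    return max(0, round(count))

print(particles_this_frame(10.0, 2.0))  # 9, 10, or 11
```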
Lifespan
This control determines how long a particle will exist before it disappears or ‘dies.’ The default value of
this control is 100 frames, although this can be set to any value. The timing of many other particle
controls is relative to the Lifespan of the particle. For example, the size of a particle can be set to
increase over the last 80% of its life, using the Size Over Life graph in the Style tab of the pEmitter.
Lifespan Variance
Like Number Variance, the Lifespan Variance control allows the Lifespan of particles produced to be
modified. If Particle Lifespan was set to 100 frames and the Lifespan Variance to 20 frames, particles
generated by the emitter would have a lifespan of 90–110 frames.
Color
This provides the ability to specify from where the color of each particle is derived. The default setting
is Use Style Color, which will provide the color from each particle according to the settings in the Style
tab of the pEmitter node.
The alternate setting is Use Color From Region, which overrides the color settings from the Style tab
and uses the color of the underlying bitmap region.
The Use Color From Region option only makes sense when the pEmitter region is set to use a bitmap
produced by another node in the composition. Particles generated in a region other than a bitmap
region will be rendered as white when the Use Color From Region option is selected.
Position Variance
This control determines whether or not particles can be ‘born’ outside the boundaries of the pEmitter
region. By default, the value is set to zero, which will restrict the creation area for new particles to the
exact boundaries of the defined region. Increasing this control’s value above 0.0 will allow the particle
to be born slightly outside the boundaries of that region. The higher the value, the ‘softer’ the region’s
edge will become.
Temporal Distribution
In general, an effect is processed per frame, based on the comp frame rate. However, processing
some particles only at the exact frame boundaries can cause pulsing. To make the behavior subtly
more realistic, the particles can be birthed in subframe increments.
The default At The Same Time setting renders on frame boundaries, whereas the other two settings
take advantage of subframe rendering. Randomly Distributed randomizes the birth times around the
frame number; for example, 10 particles might be born one at a time at random subframe times such
as 24.1, 24.21, 24.37, and 24.85. Evenly Distributed births particles at regular subframe intervals; for
example, 10 particles are born one at a time at 24.0, 24.1, 24.2, and so on up to 24.9.
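The three modes can be sketched as follows. For Randomly Distributed, the manual's example times all fall within the frame interval, so offsets in [0, 1) are assumed here:

```python
import random

def birth_times(frame, count, mode):
    # Subframe birth times for `count` particles born at `frame`.
    if mode == "At The Same Time":
        return [float(frame)] * count                     # frame boundary only
    if mode == "Evenly Distributed":
        return [frame + i / count for i in range(count)]  # 24.0, 24.1, ...
    if mode == "Randomly Distributed":
        # offsets within the frame interval are an assumption
        return sorted(frame + random.random() for _ in range(count))
    raise ValueError(mode)

evenly = birth_times(24, 10, "Evenly Distributed")
```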
Velocity
The controls in the Velocity section determine the speed and direction of the particle cells as they are
generated from the emitter region.
Inherit
Inherit Velocity passes the emitter region’s velocity on to the particles. This slider has a wide range
that includes negative and positive values. A negative value causes the particles to move in the
opposite direction, a value of 1 will cause the particles to move with a velocity that matches the emitter
region’s velocity, and a value of 2 causes the particles to move ahead of the emitter region.
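In other words, a particle's initial velocity is the emitter region's velocity scaled by the slider value. A trivial sketch:

```python
def inherited_velocity(emitter_velocity, inherit):
    # Scale each component of the emitter region's velocity by the
    # Inherit Velocity slider (1 = match, 2 = ahead, negative = opposite).
    return tuple(inherit * v for v in emitter_velocity)
```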
Rotation
Rotation controls are used to set the orientation of particle cells and to animate that orientation
over time.
Rotation Mode
This menu control provides two options to help determine the orientation of the particles emitted.
When the particles are spherical, the effect of this control will be unnoticeable.
– Absolute Rotation: The particles will be oriented as specified by the Rotation controls, regardless
of velocity and heading.
– Rotation Relative To Motion: The particles will be oriented in the same direction as the
particle is moving. The Rotation controls can now be used to rotate the particle's orientation away
from its heading.
Spin
Spin controls are auto animated controls that change the orientation of particle cells over time.
Sets Tab
This tab contains settings that affect the physics of the particles emitted by the node. These settings
do not directly affect the appearance of the particles. Instead, they modify behavior like velocity, spin,
quantity, and lifespan.
Set 1-32
To assign the particles created by a pEmitter to a given set, simply select the checkbox of the set
number you want to assign. A single pEmitter node can be assigned to one or multiple sets. Once they
are assigned in the pEmitter, you can enable sets in other particle nodes so they only affect particles
from specific pEmitters.
Style Tab
The Style tab provides controls that affect the appearance of the particles. For detailed information
about the Style tab, see the “The Common Controls” section at the end of this chapter.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pFlock [pFl]
Inputs
The pFlock node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green or magenta bitmap or mesh input appears on the node
when you set the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange background input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the flocking takes effect.
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
Flock Number
The value of this control represents the number of other particles that the affected particle will attempt
to follow. The higher the value, the more visible “clumping” will appear in the particle system and the
larger the groups of particles will appear.
Follow Strength
This value represents the strength of each particle’s desire to follow other particles. Higher values will
cause the particle to appear to expend more energy and effort to follow other particles. Lower values
increase the likelihood that a given particle will break away from the pack.
Repel Strength
This value represents the force applied to particles that get closer together than the distance defined
by the Minimum Space control of the pFlock node. Higher values will cause particles to move away
from neighboring particles more rapidly, shooting away from the pack.
Minimum/Maximum Space
This range control represents the distance each particle attempts to maintain between it and other
particles. Particles will attempt to get no closer or farther than the space defined by the Minimum/
Maximum values of this range control. Smaller ranges will give the appearance of more organized
motion. Larger ranges will be perceived as disorganized and chaotic.
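The interplay of these controls resembles a classic boids-style steering rule. A toy 2D sketch (not Fusion's actual solver): neighbors inside Minimum Space repel, and neighbors beyond Maximum Space attract:

```python
import math

def flock_force(p, neighbors, follow_strength, repel_strength,
                min_space, max_space):
    # Sum a steering force over a particle's followed neighbors:
    # repel when closer than min_space, attract when farther than
    # max_space, do nothing inside the comfortable range.
    fx = fy = 0.0
    for q in neighbors:
        dx, dy = q[0] - p[0], q[1] - p[1]
        d = math.hypot(dx, dy) or 1e-9   # avoid division by zero
        if d < min_space:
            fx -= repel_strength * dx / d    # push apart
            fy -= repel_strength * dy / d
        elif d > max_space:
            fx += follow_strength * dx / d   # pull together
            fy += follow_strength * dy / d
    return fx, fy
```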
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pFollow [pFo]
Inputs
The pFollow node has a single orange input by default. Like most particle nodes, this orange
background input accepts only other particle nodes. A green bitmap or mesh input appears on the
node when you set the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where particles will follow the position point.
A pFollow node introduces a follow object that influences the particles’ motion.
Inspector
Random Seed
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
Position XYZ
The position controls are used to create the new path by positioning the follow object. Moving the
XYZ parameters displays the onscreen position of the follow object. Animating these parameters
creates the new path the particles will be influenced by.
Spring
The Spring setting causes the particles to move back and forth along the path. The spread of the
spring motion increases over the life of the particles depending on the distance between the
particles and the follow object. Higher spring settings increase the elasticity, while lower settings
decrease elasticity.
Dampen
This value attenuates the spring action. A lower setting offers less resistance to the back and forth
spring action. A higher setting applies more resistance.
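The Spring and Dampen pair behaves like a damped spring pulling each particle toward the follow object. A 1D sketch, with an explicit-Euler integrator chosen purely for illustration (the actual solver is not documented):

```python
def follow_step(pos, vel, target, spring, dampen, dt=1.0):
    # Accelerate toward the follow object; Dampen resists the
    # back-and-forth spring action.
    accel = spring * (target - pos) - dampen * vel
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

pos, vel = 0.0, 0.0
for _ in range(200):                 # settle onto a follow object at 1.0
    pos, vel = follow_step(pos, vel, 1.0, spring=0.2, dampen=0.3)
```

With higher spring values the particle overshoots and oscillates longer; higher dampen values kill the oscillation sooner, matching the elasticity behavior described above.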
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pFriction [pFr]
Inputs
The pFriction node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green or magenta bitmap or mesh input appears on the node
when you set the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the friction occurs.
A pFriction node using a Shape 3D node as the region where friction is introduced to the particles.
Inspector
Velocity Friction
This value represents the Friction force applied to the particle’s Velocity. The larger the value, the
greater the friction, thus slowing down the particle.
Spin Friction
This value represents the Friction force applied to the particle’s Rotation or Spin. The larger the value,
the greater the friction, thus slowing down the rotation of the particle.
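Both sliders can be pictured as proportional damping applied at each step; the exact damping law is an assumption here:

```python
def apply_friction(velocity, spin, velocity_friction, spin_friction):
    # Per-step proportional damping of translational velocity and spin;
    # a friction value of 0 leaves motion unchanged, 1 stops it.
    velocity = tuple(v * (1.0 - velocity_friction) for v in velocity)
    spin = spin * (1.0 - spin_friction)
    return velocity, spin
```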
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pGradientForce [pGF]
Inputs
The pGradient Force node accepts two inputs: the default orange input from a particle node and one
from a bitmap image with an alpha channel gradient. A magenta or teal bitmap or mesh input appears
on the node when you set the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Input: The green input takes the 2D image that contains the alpha channel gradient.
– Region: The magenta or teal region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the gradient force occurs.
A pGradient Force node using a Fast Noise node as the gradient to modify the particles’ motion
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result.
Two nodes with the same seed values will produce the same random results. Click the Randomize
button to randomly select a new seed value, or adjust the slider to manually select a new seed value.
Strength
Gradient Force has only one specific control, which affects the strength of the force and acceleration
applied to the particles. Negative values on this control will cause the Gradient Force to be applied
from black to white (low values to high values).
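Conceptually, the force follows the slope of the alpha gradient: positive Strength pushes particles from white toward black, and negative values reverse this (black to white). A finite-difference sketch over a small alpha array; the sign convention is inferred from the description above:

```python
def gradient_force(alpha, x, y, strength):
    # Central-difference gradient of a row-major 2D alpha array.
    gx = (alpha[y][x + 1] - alpha[y][x - 1]) / 2.0
    gy = (alpha[y + 1][x] - alpha[y - 1][x]) / 2.0
    # Positive strength pushes toward lower (darker) values;
    # negative strength reverses this, black to white.
    return -strength * gx, -strength * gy

ramp = [[0.0, 0.5, 1.0]] * 3        # alpha increasing to the right
```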
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pImageEmitter [pIE]
Inputs
The pImage Emitter node has three inputs. Like most particle nodes, the orange input accepts only
other particle nodes. Green and magenta inputs are 2D image inputs for custom image calculations.
Optionally, there are teal or white bitmap or mesh inputs, which appear on the node when you set the
Region menu in the Region tab to either Bitmap or Mesh.
– Input: Unlike most other particle nodes, the orange input on the pImage Emitter accepts a 2D
image used as the emitter of the particles. If a region is defined for the emitter, this input is used
to define the color of the particles.
– Style Bitmap Input: This image input accepts a 2D image to use as the particles’ image. Since
this image duplicates into potentially thousands of particles, it is best to keep these images
small and square—for instance, 256 x 256 pixels.
– Region: The teal or white region input takes a 2D image or a 3D mesh depending on whether
you set the Region menu to Bitmap or Mesh. The color of the input is determined by whichever
is selected first in the menu. The 3D mesh or a selectable channel from the bitmap defines the
area where the particles are emitted.
A pImage Emitter node emits particles based on an image connected to the orange input.
The great majority of controls in this node are identical to those found in the pEmitter, and those
controls are documented in that previous section. Below are the descriptions of the controls unique to
the pImage Emitter node.
X and Y Density
The X and Y Density sliders are used to set the mapping of particles to pixels for each axis. They
control the density of the sampling grid. A value of 1.0 for either slider indicates 1 sample per pixel.
Smaller values will produce a looser, more pointillistic distribution of particles, while values
above 1.0 will create multiple particles per pixel in the image.
Alpha Threshold
The Alpha Threshold is used for limiting particle generation so that pixels with semitransparent alpha
values will not produce particles. This can be used to harden the edges of an otherwise soft alpha
channel. The higher the threshold value, the more opaque a pixel must be before it will generate a
particle. Note that the default threshold of 0.0 will create particles for every pixel, regardless of alpha,
although many may be transparent and invisible.
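The two controls can be sketched together as a sampling loop: density sets the grid step across the image, and the threshold gates which samples become particles (with `>=` chosen so a threshold of 0.0 emits for every pixel, as noted above):

```python
def emit_from_image(alpha, density_x, density_y, threshold):
    # One sample per pixel at density 1.0; higher densities sample the
    # grid more finely. `>=` means threshold 0.0 emits for every pixel.
    h, w = len(alpha), len(alpha[0])
    births = []
    y = 0.0
    while y < h:
        x = 0.0
        while x < w:
            if alpha[int(y)][int(x)] >= threshold:
                births.append((x, y))
            x += 1.0 / density_x
        y += 1.0 / density_y
    return births
```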
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Sets Tab
NOTE: Pixels with a black (transparent) alpha channel will still generate invisible particles,
unless you raise the Alpha Threshold above 0.0. This can slow down rendering significantly.
An Alpha Threshold value of 1/255 = 0.004 is good for eliminating all fully transparent pixels.
The pixels are emitted in a fixed-size 2D grid on the XY plane, centered on the Pivot position.
Changing the Region from the default of All allows you to restrict particle creation to more
limited areas. If you need to change the size of this grid, use a Transform 3D node after
the pRender.
Remember that the various emitter controls apply only to particles when they are emitted.
That is, they set the initial state of the particle and do not affect it for the rest of its lifespan.
Since pImageEmitter (by default) emits particles only on the first frame, animating these
controls will have no effect. However, if the Create Particles Every Frame checkbox is turned
on, new particles will be emitted each frame and will use the specified initial settings for
that frame.
pKill [pKl]
Inputs
The pKill node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green bitmap or mesh input appears on the node when you set
the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where particles are killed.
A pKill node using a Shape 3D node as the region where particles die
Inspector
This node only contains common controls in the Conditions and Regions tabs. The Conditions and
Regions controls are used to define the location, age, and set of particles that are killed.
pMerge [pMg]
Inputs
The pMerge node has two identical inputs, one orange and one green. These two inputs accept only
other particle nodes.
– Particle 1 and 2 Input: The two inputs accept two streams of particles and merge them.
pPointForce [pPF]
Inputs
The pPoint Force node has a single orange input by default. Like most particle nodes, this orange
input accepts only other particle nodes. A green bitmap or mesh input appears on the node when you
set the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the point force affects the particles.
The pPoint Force node positions a point force that particles are attracted to or repelled from.
Inspector
Strength
This parameter sets the Strength of the force emitted by the node. Positive values represent attractive
forces; negative values represent repellent forces.
Power
This determines the degree to which the Strength of the force falls off over distance. A value of zero
causes no falloff of strength. Higher values will impose an ever-sharper falloff in strength of the force
with distance.
Limit Force
The Limit Force control is used to counterbalance potential problems with temporal sub-sampling.
Because the position of a particle is sampled only once a frame (unless sub-sampling is increased in
the pRender node), it is possible that a particle can overshoot the Point Force’s position and end up
getting thrown off in the opposite direction. Increasing the value of this control reduces the likelihood
that this will happen.
X, Y, Z Center Position
These controls are used to represent the X, Y, and Z coordinates of the point force in 3D space.
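Putting these controls together: the force points along the line from the particle to the center, with a magnitude scaled by Strength and attenuated by distance raised to Power. A sketch (Limit Force is omitted):

```python
import math

def point_force(particle, center, strength, power):
    # Force along the line to the center; magnitude = Strength / d**Power.
    # Power 0 means no falloff. Positive strength attracts, negative repels.
    dx = center[0] - particle[0]
    dy = center[1] - particle[1]
    dz = center[2] - particle[2]
    d = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-9
    mag = strength / d ** power
    return (mag * dx / d, mag * dy / d, mag * dz / d)
```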
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pRender [pRn]
Inputs
The pRender node has one orange input, a green camera input, and a blue effects mask input. Like
most particle nodes, this orange input accepts only other particle nodes. A green bitmap or mesh
input appears on the node when you set the Region menu in the Region tab to either Bitmap or Mesh.
Inspector
Pre-Roll Options
Particle nodes generally need to know the position of each particle on the last frame before they can
calculate the effect of the forces applied to them on the current frame. As a result, manually changing
the current time by anything other than single-frame intervals is likely to produce an inaccurate image.
The controls here are used to help accommodate this by providing methods of calculating the
intervening frames.
Restart
This control also works in 3D. Clicking on the Restart button will restart the particle system at the
current frame, removing any particles created up to that point and starting the particle system from
scratch at the current frame.
Pre-Roll
This control also works in 3D. Clicking on this button causes the particle system to recalculate, starting
from the beginning of the render range up to the current frame. It does not render the image
produced. It only calculates the position of each particle. This provides a relatively quick mechanism to
ensure that the particles displayed in the views are correctly positioned.
If the pRender node is displayed when the Pre-Roll button is selected, the progress of the pre-roll is
shown in the viewer, with each particle shown as point style only.
Automatic Pre-Roll
Selecting the Automatic Pre-Roll checkbox causes the particle system to automatically pre-roll the
particles to the current frame whenever the current frame changes. This prevents the need to manually
select the Pre-Roll button whenever advancing through time in jumps larger than a single frame. The
progress of the particle system during an Automatic Pre-Roll is not displayed to the viewers to prevent
distracting visual disruptions.
About Pre-Roll
Pre-Roll is necessary because the state of a particle system is entirely dependent on the last known
position of the particles. If the current time were changed to a frame where the last frame particle state
is unknown, the display of the particles is calculated from the last known position, producing
inaccurate results.
To demonstrate:
1 Add a pEmitter and a pRender node to the composition.
2 View the pRender in one of the viewers.
3 Set the Velocity of the particles to 0.1.
4 Place the pEmitter on the left edge of the screen.
5 Set the Current Frame to 0.
6 Set a Render Range from 0–100 and press the Play button.
Notice how the particle system only adds to the particles it has already created and does not try to
create the particles that would have been emitted in the intervening frames. Try selecting the Pre-Roll
button in the pRender node. Now the particle system state is represented correctly.
For simple, fast-rendering particle systems, it is recommended to leave the Automatic Pre-Roll option
on. For slower particle systems with long time ranges, it may be desirable to only Pre-Roll manually,
as required.
– Only Render in Hi-Q
Selecting this checkbox causes the style of the particles to be overridden when the Hi-Q
checkbox is deselected, producing only fast rendering Point-style particles. This is useful when
working with a large quantity of slow Image-based or Blob-style particles. To see the particles as
they would appear in a final render, simply enable the Hi-Q checkbox.
– View
This drop-down list provides options to determine the position of the camera view in a 3D particle
system. The default option of Scene (Perspective) will render the particle system from the
perspective of a virtual camera, the position of which can be modified using the controls in the
Scene tab. The other options provide orthographic views of the front, top, and side of the
particle system.
It is important to realize that the position of the onscreen controls for Particle nodes is unaffected
by this control. In 2D mode, the onscreen controls are always drawn as if the viewer were showing
the front orthographic view. (3D mode gets the position of controls right at all times.)
The View setting is ignored if a Camera 3D node is connected to the pRender node’s Camera
input on the node tree, or if the pRender is in 3D mode.
Conditions
Blur, Glow, and Blur Blend
When generating 2D particles, these sliders apply a Gaussian blur, glow, and blur blending to the
image as it is rendered, which can be used to soften the particles and blend them together. The result
is no different from adding a Blur node after the pRender node in the node tree.
Pre-Generate Frames
This control is used to cause the particle system to pre-generate a set number of frames before its first
valid frame. This is used to give a particle system an initial state from which to start.
A good example of when this might be useful is in a shot where particles are used to create the smoke
rising from a chimney. Set Pre-Generate Frames to a number high enough to ensure that the smoke is
already present in the scene before the render begins, rather than having it just starting to emerge
from the emitter for the first few frames.
Generate Z Buffer
Selecting this checkbox causes the pRender node to produce a Z Buffer channel in the image. The
depth of each particle is represented in the Z Buffer. This channel can then be used for additional
depth operations like Depth Blur, Depth Fog, and Downstream Z Merging.
Enabling this option is likely to increase the render times for the particle system dramatically.
Scene Tab
Z Clip
The Z Clip control is used to set a clipping plane in front of the camera. Particles that cross this plane
are clipped, preventing them from impacting on the virtual lens of the camera and dominating
the scene.
Grid Tab
These controls do not apply to 3D particles.
Image Tab
The controls in this tab are used to set the resolution, color depth, and pixel aspect of the rendered
image produced by the node.
Width/Height
This pair of controls is used to set the Width and Height dimensions of the image to be rendered
by the node.
Pixel Aspect
This control is used to specify the Pixel Aspect ratio of the rendered particles. An aspect ratio of 1:1
would generate a square pixel with the same dimensions on either side (like a computer display
monitor), and an aspect of 0.9:1 would create a slightly rectangular pixel (like an NTSC monitor).
NOTE: Right-click on the Width, Height, or Pixel Aspect controls to display a menu listing the
file formats defined in the preferences Frame Format tab. Selecting any of the listed options
will set the width, height, and pixel aspect to the values for that format, accordingly.
Depth
The Depth menu is used to set the pixel color depth of the particles. 32-bit pixels require 4X the
memory of 8-bit pixels but have far greater color accuracy. Float pixels allow high dynamic range
values outside the normal 0…1 range, for representing colors that are brighter than white or darker
than black.
Motion Blur
As with other 2D nodes in Fusion, Motion Blur is enabled from within the Settings tab. You may set
Quality, Shutter Angle, Sample Center, and Bias; the blur is then applied to all moving particles.
NOTE: Motion Blur on 3D mode particles (rendered with a Renderer 3D) also requires that
identical motion blur settings are applied to the Renderer 3D node.
pSpawn [pSp]
Inputs
By default, the pSpawn node has a single orange input. Like most particle nodes, this orange input
accepts only other particle nodes. You can enable an image input by selecting Bitmap from the Style
menu in the Style tab. Also, two region inputs, one for bitmap and one for mesh, appear on the node
when you set the Region menu in the Region tab to either Bitmap or Mesh. The colors of these inputs
change depending on the order they are enabled.
– Input: The orange input accepts the output of other particle nodes.
– Style Bitmap Input: This image input accepts a 2D image to use as the particles’ image. Since
this image duplicates into potentially thousands of particles, it is best to keep these images
small and square—for instance, 256 x 256 pixels.
– Region: The region inputs take a 2D image or a 3D mesh depending on whether you set the
Region menu to Bitmap or Mesh. The color of the input is determined by whichever is selected
first in the menu. The 3D mesh or a selectable channel from the bitmap defines the area where
the particles are emitted.
A pSpawn node used to generate new particles at specific points in the old particles’ life
Inspector
The pSpawn node has a large number of controls, most of which exactly duplicate those found within
the pEmitter node. There are a few controls that are unique to the pSpawn node, and their effects are
described below.
Velocity Transfer
This control determines how much velocity of the source particle is transferred to the particles it
spawns. The default value of 1.0 causes each new particle to adopt 100 percent of the velocity and
direction from its source particle. Lower values will transfer less of the original motion to the
new particle.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pTangentForce [pTF]
Inputs
The pTangent Force node has a single orange input by default. Like most particle nodes, this orange
input accepts only other particle nodes. A green bitmap or mesh input appears on the node when you
set the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the tangent force affects the particles.
The pTangent Force node positions a tangent force that particles maneuver around.
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result.
Two nodes with the same seed values will produce the same random results. Click the Randomize
button to randomly select a new seed value, or adjust the slider to manually select a new seed value.
X, Y, Z Center Position
These controls are used to represent the X, Y, and Z coordinates of the Tangent force in 3D space.
X, Y, Z Center Strength
These controls are used to determine the Strength of the Tangent force in 3D space.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pTurbulence [pTr]
Inputs
The pTurbulence node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green bitmap or mesh input appears on the node when you set
the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area of turbulence.
The pTurbulence node disturbs the rigid flow of particles for a more natural motion.
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
X, Y, and Z Strength
The Strength control affects the amount of chaotic motion imparted to particles.
Density
Use this control to adjust the density of the turbulence field. Lower values cause more particle cells to
be affected similarly, almost as if “waves” of the turbulence field run through the particles, affecting
groups of cells at the same time. Higher values add finer variations to individual particle cells, causing
more of a spread in the turbulence field.
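One way to picture Density is as the spatial frequency of the underlying noise field. In this toy 1D sketch a sine wave stands in for real noise (purely illustrative, not Fusion's noise function):

```python
import math

def turbulence_offset(position, density, strength, seed=0.0):
    # Low density: nearby particles sample nearly the same noise value
    # and move together in "waves". High density: values vary per cell.
    return strength * math.sin(density * position + seed)
```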
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pVortex [pVt]
Inputs
The pVortex node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green bitmap or mesh input appears on the node when you set
the Region menu in the Region tab to either Bitmap or Mesh.
– Input: The orange input takes the output of other particle nodes.
– Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area of the vortex.
A pVortex node creates a spiraling motion for particles that fall within its pull.
Inspector
Strength
This control determines the Strength of the Vortex Force applied to each particle.
Power
This control determines the degree to which the Strength of the Vortex Force falls off with distance.
X, Y, and Z Offset
Use these sliders to set the amount by which the vortex Offsets the affected particles.
Size
This is used to set the Size of the Vortex Force.
Angle X and Y
These sliders control the amount of rotational force applied by the Vortex along the X and Y axes.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their descriptions
can be found in the following “The Common Controls” section.
Inspector
Style Tab
The Style Tab is common to the pEmitter, pSpawn, pChangeStyle, and pImage Emitter. It controls the
appearance of the particles using general controls like type, size, and color.
– Bitmap: This style produces particle cells based on an image file or another node in the Node
editor. When this option is selected an orange image input appears on the node in the node
editor. There are several controls for affecting the appearance and animation. In addition to the
controls in the Style section, a Merge section is displayed at the bottom of the inspector when
Bitmap is selected as the Style. The Merge section includes controls for additive or subtractive
merges when the particle cells overlap.
– Animate Over Time: This menu includes three options for determining how movie files play
when they are used as particle cell bitmaps. The Over Time setting plays the movie file
sequentially: when the comp is on frame 2, frame 2 of the movie file is displayed; when the
comp is on frame 3, frame 3 of the movie file is displayed, and so on. If a particle cell is not
generated until frame 50, it begins with frame 50 of the movie file. This causes all particle
cells to use the same image on any given frame of the comp. The Particle Age setting
causes each particle cell to begin with the first frame of the movie file, regardless of when the
particle cell is generated. The Particle Birth Time setting causes each particle to begin with the
frame that coincides with the particle cell’s birth time. For instance, if the particle is generated
on frame 25, then it uses frame 25 of the movie file for the entire comp. Unlike the other two
options, the Particle Birth Time setting holds the same frame for the duration of the comp.
– Time Offset: This dial is used to slip or offset the starting frame used from the movie file.
For instance, setting it to 10 will cause the movie file to start at frame 10 instead of frame 1.
– Time Scale: This slider is a multiplier on the frame. Instead of using an offset, it changes the
starting frame by multiplying the frame by the value selected with the slider. For instance, if a
value of 2 is selected, then when the playhead reaches frame 2, the movie file displays frame 4
(2 x 2 = 4), and when the playhead reaches frame 8, the movie file displays frame 16 (8 x 2 = 16).
– Gain: The gain slider is a multiplier of the pixel value. It is used to apply a correction to the
overall Gain of the Bitmap. Let’s say you have a bitmap particle cell that contains a pixel value
of R0.5 G0.5 B0.4; applying a Gain of 1.2 produces a pixel value of R0.6 G0.6 B0.48
(i.e., 0.4 x 1.2 = 0.48) while leaving black pixels unaffected. Higher values produce a brighter
image, whereas lower values reduce both the brightness and the transparency of the image.
– Style Bitmap: This control appears when the Bitmap style is selected, along with an orange
Style Bitmap input on the node’s icon in the Node view. Connect a 2D node to this input to
provide images to be used for the particles. You can do this on the Node view, or you may
drag and drop the image source node onto the Style Bitmap control from the Node Editor or
Timeline, or right-click on the control and select the desired source from the Connect To menu.
– Brush: This style produces particle cells based on any image file located in the Brushes directory.
There are numerous controls for affecting the appearance and animation.
– Gain: The gain slider is a multiplier of the pixel value. It is used to apply a correction to the
overall Gain of the image that is used as the Brush. Let’s say you have a brush particle cell that
contains a pixel value of R0.5 G0.5 B0.4; applying a Gain of 1.2 produces a pixel value of
R0.6 G0.6 B0.48 (i.e., 0.4 x 1.2 = 0.48) while leaving black pixels unaffected. Higher values
produce a brighter image, whereas lower values reduce both the brightness and the
transparency of the image.
– Brush: This menu shows the names of any image files stored in the Brushes directory. The
location of the Brushes directory is defined in the Preferences dialog, under Path Maps. The
default is the Brushes subdirectory within Fusion’s install folder.
– Use Aspect From: The Use Aspect From menu includes three settings for the aspect ratio of
the brush image. Choose Image Format to use the brush image’s native aspect ratio.
Choose Frame Format to use the aspect ratio set in the Frame Format settings in the Fusion
Preferences, or choose Custom to enter your own Pixel X and Y dimensions.
– Line: This style produces straight line-type particles with optional “falloff.” The Size to Velocity
control described below (under Size Controls) is often useful with this Line type. The Fade control
adjusts the amount of falloff over the length of the line.
– Point Cluster: This style produces small clusters of single-pixel particles. Point Clusters are similar
to the Point style; however, they are more efficient when a large quantity of particles is required.
This style shares parameters with the Point style. Additional controls specific to Point Cluster style
are Number of Points and Number Variance.
– Sub Pixel Rendered: This checkbox determines whether the point particles are rendered with
Sub Pixel precision, which provides smoother-looking motion but blurrier particles that take
slightly longer to render.
– Number of Points and Variance: The value of this control determines how many points are in
each Point Cluster.
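The Time Scale and Gain arithmetic described in the list above can be sketched as plain Python (an illustrative model, not Fusion's implementation):

```python
def scaled_frame(playhead_frame, time_scale):
    # Time Scale multiplies the playhead frame to choose the movie-file
    # frame: at Time Scale 2, comp frame 2 shows movie frame 4.
    return int(playhead_frame * time_scale)

def apply_gain(rgb, gain):
    # Gain multiplies every pixel value; black (0.0) stays black.
    return tuple(round(c * gain, 4) for c in rgb)

assert scaled_frame(2, 2) == 4 and scaled_frame(8, 2) == 16
# The worked Gain example: R0.5 G0.5 B0.4 with a Gain of 1.2.
assert apply_gain((0.5, 0.5, 0.4), 1.2) == (0.6, 0.6, 0.48)
assert apply_gain((0.0, 0.0, 0.0), 1.2) == (0.0, 0.0, 0.0)
```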
Color Controls
The Color Controls select the color and Alpha values of the particles generated by the emitter.
Color Variance
These range controls provide a means of expanding the colors produced by the pEmitter. Setting the
Red variance range at -0.2 to +0.2 will produce colors that vary 20% on either side of the red channel,
for a total variance of 40%. If the pEmitter is set to produce R0.5, G0.5, B0.5 (pure gray), the variance
shown above will produce points with a color range between R0.3, G0.5, B0.5, and R0.7, G0.5, B0.5.
To visualize color values in the range 0-255 or 0-65535, change the values used by Fusion
using the Show Color As option provided in the General tab within the Preferences dialog.
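The variance arithmetic in the example above works out as follows (illustrative Python, not Fusion's implementation):

```python
def variance_range(base, var_min, var_max):
    # Per-channel color range produced by a +/- variance setting.
    return round(base + var_min, 4), round(base + var_max, 4)

# A Red variance of -0.2 to +0.2 on pure gray (R0.5) spans R0.3 to R0.7,
# while the untouched green and blue channels stay at 0.5.
assert variance_range(0.5, -0.2, 0.2) == (0.3, 0.7)
assert variance_range(0.5, 0.0, 0.0) == (0.5, 0.5)
```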
Size to Velocity
This increases the size of each particle relative to the velocity or speed of the particle. The velocity of
the particle is added to the size, scaled by the value of this control.
A value of 1.0 on this control adds the full velocity to the size: for a particle traveling at 0.1, another 0.1
is added to the size (velocity x Size to Velocity + size = new size). This is most useful for Line styles, but
the control can be used to adjust the size of any style.
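The formula quoted above can be written out directly (illustrative Python):

```python
def new_size(size, velocity, size_to_velocity):
    # velocity * Size to Velocity + size = new size
    return velocity * size_to_velocity + size

# With the control at 1.0, a particle traveling at 0.1 gains 0.1 in size.
assert new_size(0.05, 0.1, 1.0) == 0.05 + 0.1
# At 0.0 the velocity contributes nothing and the size is unchanged.
assert new_size(0.05, 0.1, 0.0) == 0.05
```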
Size Z Scale
This control determines the degree to which the size of each particle changes according to its Z
position. The effect is to exaggerate or reduce the impact of perspective. The default value is 1.0,
which provides a relatively realistic perspective effect.
Objects on the focal plane (Z = 0.0) will be actual-sized. Objects farther along Z will become smaller.
Objects closer along Z will get larger.
A value of 2.0 will exaggerate the effect dramatically, whereas a value of 0.0 will cancel the effects of
perspective entirely.
Fade Controls
This simple range slider provides a mechanism for fading a particle at the start and end of its lifetime.
Increasing the Fade In value will cause the particle to fade in at the start of its life. Decreasing the Fade
Out value will cause the particle to fade out at the end of its life.
This control’s values represent a percentage of the particle’s overall life; therefore, setting the Fade In
to 0.1 would cause the particle to fade in over the first 10% of its total lifespan. For example, a particle
with a life of 100 frames would fade in from frame 0 to frame 10.
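One plausible way to model this ramp, assuming a simple linear fade at each end of the particle's life (a sketch, not Fusion's implementation):

```python
def fade_factor(age, life, fade_in=0.1, fade_out=0.9):
    # Opacity over a particle's life: ramp up over the first `fade_in`
    # fraction and down over the final (1 - fade_out) fraction.
    t = age / life
    if t < fade_in:
        return t / fade_in
    if t > fade_out:
        return (1.0 - t) / (1.0 - fade_out)
    return 1.0

# With Fade In at 0.1, a 100-frame particle is half faded in at frame 5
# and fully visible by frame 10, matching the example above.
assert fade_factor(5, 100) == 0.5
assert fade_factor(10, 100) == 1.0
```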
Blur Controls
This set of particle controls can be used to apply a Blur to the individual particles. Blurring can be
applied globally, by age, or by Z depth position.
None of the Blur controls will have any effect on a 3D particle system.
Conditions Tab
The Conditions tab limits the particles that are affected by the node’s behavior. You can limit the
affected particles using probability or, more specifically, using sets.
Probability
The Probability slider determines the percentage of chance that the node affects any given particle.
The default value of 1.0 affects all particles. A setting of 0.6 would mean that each particle has a 60
percent chance of being affected by the control.
Probability is calculated for each particle on each frame. For example, a particle that is not affected by
a force on one frame has the same chance of being affected on the next frame.
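Because the test is independent per particle per frame, it behaves like repeated coin flips (illustrative Python, not Fusion's implementation):

```python
import random

def affected_this_frame(rng, probability):
    # Each frame, each particle independently has `probability` chance
    # of being affected; 1.0 affects every particle on every frame.
    return rng.random() < probability

rng = random.Random(1)
hits = [affected_this_frame(rng, 0.6) for _ in range(10000)]
# Over many per-frame tests, roughly 60 percent succeed.
assert 0.55 < sum(hits) / len(hits) < 0.65
```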
Start/End Age
This range control can be used to restrict the effect of the node to a specified percentage of the
particle lifespan.
For example, to restrict the effect of a node to the last 20 percent of a particle’s life, set the Start value
to 0.8 and leave the End value at 1.0. For a particle with a lifespan of 100 frames, the node then takes
effect only on frames 80 through 100.
Region Tab
The Region tab is used to restrict the node’s effect to a geometric region or plane, and to determine
the area where particles are created if it’s a pEmitter node or where the behavior of a node has
influence.
The Region tab is common to almost all particle nodes. In the pEmitter node Emitter Regions are used
to determine the area where particles are created. In most other tools it is used to restrict the tool’s
effect to a geometric region or plane. There are seven types of regions, each with its own controls.
Only one emitter region can be set for a single pEmitter node. If the pRender is set to 2D, then the
emitter region will produce particles along a flat plane in Z Space. 3D emitter regions possess depth
and can produce particles inside a user-defined, three-dimensional region.
Mesh Regions
Region Type
The Region Type drop-down menu allows you to choose whether the region will include the inner
volume or just the surface. For example, with a pEmitter mesh region, this determines if the particles
emit from the surface or the full volume.
Limit By ObjectID
Selecting this checkbox allows the Object ID slider to select the ObjectID used as part of the region.
Style Tab
The Style tab exists in the pEmitter, pSpawn, pChangeStyle, and pImage Emitter. It controls the
appearance of the particles, allowing the look of the particles to be designed and animated over time.
Style
The Style menu provides access to the various types of particles supported by the Particle Suite. Each
style has its own specific controls, as well as controls it shares with other styles.
– Point Style: This option produces particles precisely one pixel in size. Controls that are specific to
Point style are Apply Mode and Sub Pixel Rendered.
– Bitmap Style and Brush Style: Both the Bitmap and Brush styles produce particles based on an
image file. The Bitmap style relies on the image from another node in the node tree, and the Brush
style uses image files in the Brushes directory. They both have numerous controls for affecting
their appearance and animation, described below.
– Blob Style: This option produces large, soft spherical particles, with controls for Color, Size, Fade
timing, Merge method, and Noise.
– Line Style: This style produces straight line-type particles with optional “falloff.” The Size to
Velocity control described below (under Size Controls) is often useful with this Line type. The Fade
control adjusts the amount of falloff over the length of the line.
– Point Cluster Style: This style produces small clusters of single-pixel particles. Point Clusters are
similar to the Point style; however, they are more efficient when a large quantity of particles is
required. This style shares parameters with the Point style. Additional controls specific to Point
Cluster style are Number of Points and Number Variance.
Style Options
The following options appear only on some of the styles, as indicated below.
Apply Mode (Point and Point Cluster)
This control applies only to 2D particles; 3D particle systems are not affected.
– Add: Overlapping particles are combined by adding together the color values of each particle.
– Merge: Overlapping particles are merged.
Color Variance
These range controls provide a means of expanding the colors produced by the pEmitter. Setting the
Red variance range at -0.2 to +0.2 will produce colors that vary 20% on either side of the red channel,
for a total variance of 40%. If the pEmitter is set to produce R0.5, G0.5, B0.5 (pure gray), the variance
shown above will produce points with a color range between R0.3, G0.5, B0.5, and R0.7, G0.5, B0.5.
To visualize color values in the range 0-255 or 0-65535, change the values used by Fusion
using the Show Color As option provided in the General tab within the Preferences dialog.
Position Nodes
This chapter details the Position nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Volume Fog [VLF]
Volume Mask [VLM]
Z to World Pos [Z2W]
WPP Concept
The Common Controls
Inputs
The following inputs appear on the Volume Fog node in the Node Editor.
– Image: The orange input accepts the primary image where the fog will be applied. This image
contains a World Position Pass in the XYZ Position channels.
– Fog Image: The green Fog image input is for creating volumetric fog with varying depth and
extent; a 2D image can be connected here. A good starting point is to use a Fast Noise at a
small resolution of 256 x 256 pixels.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the fog to
certain areas.
– Scene Input: The magenta scene input accepts a 3D scene containing a 3D Camera.
Shape Tab
The Shape tab defines the size and location of the fog volume. You can either use the Pick buttons to
select the location and orientation in the viewer or use the Translation, Rotation, and Scale controls.
Shape
This menu switches between a basic spherical or rectangular volume to be placed in your image.
These volumes can then be further refined using the Fog image and effect mask.
Pick
Drag the Pick button into the viewer to select the XYZ coordinates from any 3D scene or 2D image
containing XYZ values, such as a rendered World Pass, to position the center of the Volume object.
When picking from a 2D image, make sure it’s rendered in 32-bit float to get full precision.
X, Y, Z Offset
These controls can be used to position the center of the fog volume manually or can be animated or
connected to other controls in Fusion.
Rotation Pick
Drag the Pick button into the viewer to select the rotational values from any 3D Scene or 2D image
containing those values, like an XYZ-Normal-Pass, to reorient the fog volume.
When picking from a 2D image, like an XYZ Normal pass, make sure it’s rendered in 32-bit float to get
full precision and accurate rotational values.
X, Y, Z Rotation
Use these controls to rotate the fog volume around its center.
X, Y, Z Scale
Scale the fog volume in any direction from its center to further refine the overall Size value
specified below.
Soft Edge
Controls how much the fog volume is faded toward the center from its perimeter to achieve a
softer look.
Color Tab
The Color tab controls the detail and color of the fog.
Adaptive Samples
Volume images consist of multiple layers; a volume may contain 64 layers, for example. This checkbox
adjusts the rendering algorithm for how best to blend those layers.
Dither: Applies a form of noise to improve the blending and hide visible layer differences.
Samples
Determines how many times a “ray” shot into the volume will be evaluated before the final image is
created. Not unlike raytracing, higher values lead to more detail inside the volume but also increase
render times.
Z Slices
The higher the Z Slices value, the more images from the connected Fog image sequence will be used
to form the depth of the volume.
You can, for example, use a Fast Noise with a high Seethe Rate to create such a sequence of images.
Be careful with the resolution of the images. Higher resolutions can require a large amount of memory.
As a rule of thumb, a resolution of 256 x 256 pixels with 256 Z Slices (i.e., forming a 256 x 256 x 256
cubic volume, which will use up to 256 MB for full color 32-bit float data) should give you a good
starting point.
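The memory figure in that rule of thumb can be checked with a quick calculation (assuming four 32-bit float channels per voxel):

```python
def volume_bytes(width, height, z_slices, channels=4, bytes_per_channel=4):
    # One 32-bit float per channel per voxel of the fog volume.
    return width * height * z_slices * channels * bytes_per_channel

# The suggested 256 x 256 image with 256 Z Slices, full color float:
mib = volume_bytes(256, 256, 256) / (1024 * 1024)
assert mib == 256.0
```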
Color
Allows you to modify the color of the fog generated. This will multiply over any color provided by the
connected Fog image.
Gain
Increases or decreases the intensity of the fog. More Gain will lead to a stronger glow and less
transparency in the fog. Lower values let the fog appear less dense.
Subtractive/Additive Slider
Similar to the Merge node, this value controls whether the fog is composed onto the image in Additive
or Subtractive mode, leading to a brighter or dimmer appearance of the fog.
Fog Only
This option outputs the generated fog on a black background, which then can be composited
manually or used as a mask on a Color Corrector for further refinement.
Noise Tab
The Noise tab controls the shape and pattern of the noise added to the fog.
Detail
Increase the value of this slider to produce a greater level of detail in the noise result. Larger values
add more layers of increasingly detailed noise without affecting the overall pattern. High values take
longer to render but can produce a more natural result.
Gain
This control increases or decreases the brightest parts of the noise map.
Brightness
This control adjusts the overall brightness of the noise map, before any gradient color mapping is
applied. In Gradient mode, this produces a similar effect to the Offset control.
Noise Rotation
Use the Rotation controls to orient the noise pattern in 3D.
Seethe
Adjust this thumbwheel control to interpolate the noise map against a different noise map. This causes
a crawling shift in the noise, as if it were drifting or flowing. This control must be animated to affect the
noise over time.
Discontinuous
Normally, the Noise function interpolates between values to create a smooth, continuous gradient of
results. Enable this checkbox to create hard discontinuity lines along some of the noise contours. The
result will be a dramatically different effect.
Inverted
Select this checkbox to invert the noise, creating a negative image of the original pattern. This is most
effective when Discontinuous is also enabled.
Camera Tab
For a perfect evaluation of a fog volume, a camera or 3D scene can be connected to the Scene input
of the node.
Camera
If multiple cameras are available in the connected Scene input, this menu allows the selection of the
correct camera needed to evaluate the fog volume. Instead of connecting a camera, position values
can be provided manually or by connecting the XYZ values to other controls.
Translation Pick
Drag the Pick button into the viewer to select XYZ coordinates from any 3D scene or 2D image
containing XYZ values, like a rendered World Pass, to define the center of the camera. When picking
from a 2D image, make sure it’s rendered in 32-bit float to get full precision.
X, Y, Z Offset
These controls can be used to define the center of the camera manually or can be animated or
connected to other controls in Fusion.
Light Tab
To utilize the controls in the Light tab, you must have actual lights in your 3D scene. Connect that
scene, including Camera and Lights, to the 3D input of the node.
Do Lighting
Enables or disables lighting calculations. Keep in mind that when not using OpenCL (i.e., rendering on
the CPU), these calculations may become a bit slow.
Do In-Scattering
Enables or disables light-scattering calculations. The volume will still be lit according to the state of the
Do Lighting checkbox, but scattering will not be performed.
Light Samples
Determines how accurately the lighting is calculated. Higher values mean more accurate calculations
at the expense of longer render times.
Density
This is similar to scattering in that it makes the fog appear thicker. With a high amount of scattering,
though, the light will be scattered out of the volume before it has had much chance to travel through
the fog, meaning it won’t pick up a lot of the transmission color. With a high density instead, the fog still
appears thicker, but the light gets a chance to be transmitted, thus picking up the transmission color
before it gets scattered out. Scattering is affected by the light direction when Asymmetry is not 0.0.
Density is not affected by light direction at all.
Asymmetry
Determines in what direction the light is scattered. A value of 0 produces uniform, or isotropic,
scattering, meaning all directions have equal probability. A value greater than 0 causes “forward
scattering,” meaning the light is scattered more into the direction of the light rays. This is similar to
what happens with water droplets in clouds. A value smaller than 0 produces “back scattering,” where
the light is more scattered back toward the original light source.
Transmission
Defines the color that is transmitted through the fog. The light that doesn’t get scattered out will tend
toward this color. It is a multiplier, though, so if you have a red light, but blue transmission, you won’t
see any blue.
Reflection
Changes the intensity of the light that is scattered out. Reflection can be used to modify the overall
color before Emission is added. This will be combined with the color channels of the volume texture
and then used to scale the values. The color options and the color channels of the volume texture are
multiplied together, so if the volume texture were red, setting the Reflection color options to blue
would not make the result blue. In such a case, they will multiply together to produce black.
Emission
This adds a bit of “glowing” to the fog, adding energy/light back into the calculation. If there are no
lights in the scene, and the fog emission is set to be 1.0, the results are similar to no lighting, like
turning off the Do Lighting option. Glowing can also be done while producing a different kind of look,
by having a Transmission greater than 1. This, however, would never happen in the real world.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Position nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Examples
In these examples, we are looking at a volume from the outside. On the left, you see how the
Volume Fog looks with straight accumulation. That means the Do Lighting option is turned off.
On the right, you see the same volume with lighting/scattering turned on, and a single
point light.
On the left with straight accumulation; in the middle with lighting, scattering, and a single point
light; and on the right, the light in the scene has been moved, which also influences the look of
the volume.
Inputs
The following three inputs appear on the Volume Mask node in the Node Editor:
– Image: The orange image input accepts a 2D image containing a World Position Pass in the
XYZ Position channels.
– Mask Image: An image can be connected to the green mask image input for refining the mask.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the volume
mask to certain areas.
A Volume Mask tool takes advantage of World Position Pass for color correction in a 3D scene
Inspector
Shape Tab
The Shape tab defines the size and location of the Volume Mask. You can either use the Pick buttons
to select the location and orientation in the viewer or use the Translation, Rotation, and Scale controls.
Shape
This menu switches between a spherical or rectangular mask to be placed in your image. The mask
can be further refined using the mask image input.
Translation Pick
Drag the Pick button into the viewer to select XYZ coordinates from any 3D scene or 2D image
containing XYZ values, like a rendered World Pass, to position the center of the Volume Mask. When
picking from a 2D image, make sure it’s rendered in 32-bit float to get full precision.
Rotation Pick
Drag the Pick button into the viewer to select rotational values from any 3D scene or 2D image
containing those values, like an XYZ Normal pass, to reorient the mask.
When picking from a 2D image, like an XYZ Normal pass, make sure it’s rendered in 32-bit float, and
use World Space coordinates to get full precision and the correct rotational values.
X, Y, Z Rotation
Use these controls to rotate the mask around its center.
X, Y, Z Scale
Scale the mask in any direction from its center to further refine the overall Size value specified below.
Size
The overall size, in X, Y, and Z, of the mask created.
Soft Edge
Controls how much the Volume is faded toward the center from its perimeter to achieve a softer look.
The Volume Mask Color tab
Color Tab
The Color tab controls the color and blending of the mask image.
Color
Allows you to modify the color of the generated Volume Mask. This will add to any color provided by
the connected mask image.
Subtractive/Additive Slider
Similar to the Merge node, this value controls whether the mask is composed onto the image in
Additive or Subtractive mode, leading to a brighter or dimmer appearance of the mask.
Camera Tab
For a perfect evaluation of a Volume, a camera or 3D scene can be connected to the Scene input
of the node.
Camera
If multiple cameras are available in the connected Scene input, this drop-down menu allows you to
choose the correct camera needed to evaluate the Volume.
Instead of connecting a camera, position values can also be provided manually or by connecting the
XYZ values to other controls.
Translation Pick
Drag the Pick button into the viewer to select XYZ coordinates from any 3D scene or 2D image
containing XYZ values, like a rendered World Pass, to define the center of the camera.
When picking from a 2D image, make sure it’s rendered in 32-bit float to get full precision.
X, Y, Z Offset
These controls can be used to define the center of the camera manually or can be animated or
connected to other controls in Fusion.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Position nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The following inputs appear on the node tile in the Node Editor:
– Image: The orange image input accepts an image containing a World Position Pass or a
Z-depth pass, depending on the desired operation.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the World
Position Pass to certain areas.
– Scene Input: The magenta scene input accepts a 3D scene input containing a 3D Camera.
A Z to World Position node creates a World Position Pass from a Z-depth pass
Controls Tab
The Controls tab determines whether you are creating a World Position Pass or a Z channel. If there is
more than one camera in the connected scene, this tab also selects the camera to use for the
calculation.
Mode
This menu switches between creating a Z channel from a World Position Pass or vice versa.
Camera
If multiple cameras are available in the connected Scene input, this drop-down menu allows you to
choose the correct camera needed to evaluate the image.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Position nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
WPP Concept
The Position nodes in Fusion offer an entirely new way of working with masks and Volumetrics for
footage containing XYZ Position channels. Z to World offers the option to create those channels out of
a Z channel and 3D Camera information. For this overview, we refer to the World Position Pass as WPP.
What Is a WPP?
The WPP interprets each pixel’s XYZ position in 3D space as an RGB color value.
For instance, if a pixel sits at 0/0/0, the resulting pixel has an RGB value of 0/0/0 and thus will be black.
If the pixel sits at 1/0/0 in the 3D scene, the resulting pixel is entirely red. Of course, if the coordinates
of the pixel are something like -60/75/123, WPP interprets those values as RGB color values as well.
Due to the potentially enormous size of a 3D scene, the WPP channel should always be rendered in
32-bit floating point to provide the accuracy needed. The image below shows a 3D rendering of a
scene with its center sitting at 0/0/0 in 3D Space and the related WPP. For better visibility, the WPP is
normalized in this example.
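The mapping is a direct copy of coordinates into channels, which a short sketch makes concrete (illustrative Python):

```python
def wpp_pixel(x, y, z):
    # A World Position Pass stores each pixel's XYZ scene coordinate
    # directly as its RGB value, hence the need for 32-bit float.
    return (float(x), float(y), float(z))

# A pixel at the scene origin encodes as black; one at X = 1 as pure red.
assert wpp_pixel(0, 0, 0) == (0.0, 0.0, 0.0)
assert wpp_pixel(1, 0, 0) == (1.0, 0.0, 0.0)
# Coordinates such as -60/75/123 are stored verbatim, far outside 0..1,
# which is why the example image is normalized for display.
assert wpp_pixel(-60, 75, 123) == (-60.0, 75.0, 123.0)
```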
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Position category. The controls are
consistent and work the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels and Node Processing” in the Fusion Reference Manual or Chapter 79
in the DaVinci Resolve Reference Manual.
Motion Blur
– Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
– Quality: Quality determines the number of samples used to create the blur. A quality setting of 2
will cause Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
– Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
– Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
– Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
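The Shutter Angle's relationship to exposure time can be expressed directly (illustrative Python):

```python
def exposure_fraction(shutter_angle):
    # Fraction of the frame interval the virtual shutter stays open;
    # 360 degrees equals one full frame of exposure.
    return shutter_angle / 360.0

assert exposure_fraction(360) == 1.0
assert exposure_fraction(180) == 0.5
# Values above 360 expose longer than a full frame, for creative effects.
assert exposure_fraction(720) == 2.0
```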
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Resolve Connect
This chapter details the single node found in the Resolve Connect category, available
only in standalone Fusion Studio.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
External Matte Saver [EMS]
NOTE: The Resolve Connect category and External Matte Saver node are available only in
Fusion Studio.
Inputs
By default, the node provides a single input for a 2D image you want to save as a matte.
– Input: Although initially there is only a single orange input for a matte to connect, the Inspector
provides an Add button for adding additional inputs. Each input uses a new color, but all accept
2D RGBA images.
An External Matte Saver node added as a separate branch in a node tree to render the mattes
Controls Tab
The Controls tab is used to name the saved file and determine where on your hard drive the file
is stored.
Filename
Enter the name you want to use for the EXR file in the Filename field. At the end of the name, append
the .exr extension to ensure that the file is saved as an EXR file.
Browse
Clicking the Browse button opens a standard file browser window where you can select the location
to save the file.
Mattes Tab
The Mattes tab is where you set up the number of mattes saved in the file, the name for each channel,
and the RGBA channels saved from each input.
Channels menu
The Channels menu allows you to select which channels are saved in the matte. You can choose the
alpha channel, the RGB channels, or the RGBA channels.
Channels Name
The Channels Name field allows you to customize the name of the matte channel you are saving. This
name is displayed in DaVinci Resolve’s Color page.
Node Name
The Node Name field displays the source of the matte. This is automatically populated when you
connect a node to the input.
Add
Clicking the Add button adds an input on the node and another set of fields for you to configure and
name the new matte channel.
Settings Tab
The Settings Tab in the Inspector is similar to settings found in the Saver tool. The controls are
consistent and work the same way as the Settings in other tools.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
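Conceptually, Blend is a per-pixel linear mix between the incoming and processed images. The sketch below illustrates that behavior with a hypothetical function name; it is not Fusion's actual code:

```python
def blend_output(original, processed, blend):
    """Mix a tool's input with its processed output per pixel.
    blend = 0.0 returns the untouched input; 1.0 the full effect."""
    return [(1.0 - blend) * o + blend * p
            for o, p in zip(original, processed)]
```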
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
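The multiplication is straightforward: each pixel's RGB values are scaled by the mask value at the same position. A minimal sketch (hypothetical function name, for illustration only):

```python
def multiply_by_mask(rgb_pixels, mask_values):
    """Multiply each pixel's RGB values by the mask value at the
    same position; pixels with a mask value of 0 become black."""
    return [tuple(c * m for c in pixel)
            for pixel, m in zip(rgb_pixels, mask_values)]
```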
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels and Node Processing” in the Fusion Reference Manual or Chapter 79
in the DaVinci Resolve Reference Manual.
Motion Blur
– Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
– Quality: Quality determines the number of samples used to create the blur. A quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
– Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
– Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
– Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Shape Nodes
This chapter details the Shape nodes available in Fusion.
Contents
sBoolean
sDuplicate
sEllipse
sExpand
sGrid
sJitter
sMerge
sNGon
sOutline
sRectangle
sRender
sStar
sTransform
Common Controls
sBoolean
The sBoolean node combines or excludes overlapping areas of two shapes based on a menu of
boolean operations.
Like almost all shape nodes, you can only view the sBoolean node’s results through a sRender node.
External Inputs
The following inputs appear on the node’s tile in the Node Editor. Except when using the Subtract
boolean operation, it does not matter which shape you connect to which input.
– Input1: [orange, required] This input accepts the output of another shape node. This input is
used as the base shape when the subtract boolean operation is chosen.
– Input2: [green, optional] This input accepts the output of another shape node. This input is
used to cut a hole in the base shape when the Subtract boolean operation is chosen.
Inspector
Operation
The operation menu includes four boolean operations:
– Intersection: Sometimes called an AND operation, this setting will only show areas where the two
shapes overlap. The result is only where input 1 AND input 2 overlap.
– Union: Sometimes called an OR operation, this setting will only show areas where either of the
two shapes exists. The result is where either input 1 OR input 2 exists. The Union setting is similar
to the result of the sMerge node.
– Subtract: Sometimes called a NOT operation, this setting outputs the shape of input 1 but
eliminates the areas where input 2 overlaps. The result is input 1 minus input 2.
– Xor: Sometimes called an AND NOT operation, this setting outputs the shape of input 1 or input 2
but eliminates the areas where they overlap. The result is (input 1 minus input 2) + (input 2 minus
input 1).
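If each shape is modeled as the set of pixels it covers, the four operations reduce to familiar set operations. This is an illustrative sketch of the logic described above, not Fusion's vector-based implementation:

```python
def boolean_combine(input1, input2, operation):
    """Combine two shapes, each represented as a set of covered
    pixel coordinates, using the sBoolean operations."""
    ops = {
        "Intersection": input1 & input2,  # input 1 AND input 2
        "Union":        input1 | input2,  # input 1 OR input 2
        "Subtract":     input1 - input2,  # input 1 minus input 2
        "Xor":          input1 ^ input2,  # in either, but not both
    }
    return ops[operation]
```

Note that Subtract is the only operation where swapping the inputs changes the result, which is why input order matters only for that mode.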
Style Mode
The Style mode menu only includes one option. The Replace setting replaces the color and alpha
level of the incoming shapes with the color set in the Style tab.
Style Tab
Style
Any color assigned to the individual shape nodes is replaced by the color set using the Style
tab controls.
Color
The color controls determine the color of the output shape from the sBoolean node. To choose a
shape color, you can click the color disclosure arrow, use the color swatch, or drag the eyedropper
into the viewer to select a color from an image. The RGBA sliders or number fields can be used to
enter each color channel’s value or the strength of the alpha channel.
Allow Combining
When this checkbox is enabled, the alpha channel value is maintained even when passing through
other nodes downstream that may cause the shape to overlap with copies of itself. When disabled, the
alpha channel value may increase when the shape overlaps itself.
For instance, if an ellipse’s alpha channel is set to .5, enabling the Allow Combining checkbox
maintains that value even if the shape passes through a duplicate or grid node that causes the shape
to overlap. Disabling the checkbox causes the alpha channel values to be compounded at each
overlapping area.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sDuplicate
The sDuplicate node creates copies of the input shape, offsetting each copy’s position, size, and
rotation. Like almost all shape nodes, you can only view the sDuplicate node’s results through a
sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor:
– Input1: [orange, required] This input accepts the output of another Shape node. The shape
connected to this input is copied and offset based on the controls in the Inspector.
Controls
The Controls tab is used to determine the number of copies and set their position, size, and
rotation offset.
Copies
This slider determines the number of copies created by the node. The number does not include the
original shape, so entering a value of five will produce five copies plus the original.
X and Y Offset
These sliders set the X and Y distance between each of the copies. Each copy is offset from the
previous copy by the value entered in the X and Y number fields. The copies all start at 0, the center of
the original shape, and are offset from there. Using Fusion’s normalized coordinate system, entering X
Offset at 0.5 would move each copy half the frame’s width to the right. Entering -1.0 would move each
copy to the left by the width of the frame.
X and Y Size
Sets the X and Y size offset based on the previous shape size. For instance, an X and Y value of 1.0
creates copies identical in size to the original. Entering an X and Y value of 0.5 causes each copy to be
half the size of the copy before it.
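The offset and size behavior described above can be sketched numerically. This is an illustrative model with a hypothetical function name, not Fusion's internal code: each copy is offset one step further from the original, and each copy's size is the previous copy's size times the size factor.

```python
def duplicate_copies(copies, x_offset, y_offset, size_scale):
    """Position and relative size of the original plus each copy.
    Offsets are in Fusion's normalized coordinates (1.0 spans the
    frame width), and sizes compound multiplicatively per copy."""
    return [((n * x_offset, n * y_offset), size_scale ** n)
            for n in range(copies + 1)]
```

So two copies with an X Offset of 0.5 and a size factor of 0.5 land at the center, half a frame right, and a full frame right, at full, half, and quarter size.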
Axis Mode
The Axis mode menu provides four options for determining how each copy determines its rotational
pivot point.
– Absolute: Allows you to set an X and Y position for the axis of rotation based on the original
shape’s location. The axis of rotation is then copied and offset with each duplicated shape.
– Origin Relative: Each copy uses its center point as its axis of rotation.
– Origin Absolute: Each copy uses the center of the original shape as its axis of rotation.
– Progressive: Compounds each shape copy by progressively transforming each copy based on the
previous shape’s position, rotation, and scale.
X and Y Pivot
The X and Y pivot controls are displayed when the Axis mode is set to Absolute. You can use these
position controls to place the axis of rotation.
Rotation
Determines an offset rotation applied to each copy. The rotation is calculated from the offset rotation
of the previous copy. To rotate all copies identically, use the Angle parameter on the original shape or
use a sTransform node.
sEllipse
The sEllipse node is used to create circular shapes. Like almost all shape nodes, you can only view the
sEllipse node’s results through a sRender node.
External Inputs
This node generates shapes and does not have any inputs.
An sEllipse node connecting to an sGrid node, and then viewed using an sRender node
Inspector
Solid
When enabled, the Solid checkbox fills the elliptical shape with the color defined in the Style tab.
When disabled, an outline created by the Border Width control is displayed, and the center is made
transparent.
Border Width
This parameter expands or contracts the border around the shape. Although it can be used when the
Solid checkbox is enabled, it is primarily used to determine the outline thickness when the checkbox
is disabled.
Cap style
When the Solid checkbox is disabled, three cap style options are displayed. The cap styles can create
lines with flat, rounded, or squared ends. Flat caps have flat, squared ends, while rounded caps have
semi-circular ends. Squared caps have projecting ends that extend half the line width beyond the end
of the line.
The caps are not visible unless the length is below 1.0.
Position
The position parameter is only displayed when the Solid checkbox is disabled. It allows you to position
the starting point of the shape. When used in conjunction with the length parameter, it positions the
gap in the ellipse outline.
Length
The length parameter is only displayed when the Solid checkbox is disabled. A length of 1.0 is a
closed shape. Setting the length below 1.0 creates an opening or gap in the outline. Keyframing the
length parameters allows you to create write-on style animations.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. An X offset of 0.0 is centered, and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
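The normalized X coordinate maps to pixels as sketched below (hypothetical function name, for illustration): the center of the frame is offset 0.0, and an offset of 0.5 shifts the shape's center by half the frame width, onto the right edge.

```python
def x_offset_to_pixels(x_offset, frame_width):
    """Horizontal pixel position of the shape's center for a given
    normalized X offset: 0.0 is the frame center and 0.5 lands the
    center on the right edge, because offsets are normalized to
    the frame width."""
    return frame_width / 2 + x_offset * frame_width
```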
Width/Height
The width and height determine the horizontal and vertical size of the ellipse. If the values are
identical, then you have a perfect circle.
Angle
The angle rotates the shape, which on a perfect circle doesn’t change the image all that much, but if
you create an oval or an outline with a short length, you can rotate the shape based on the center axis.
Style
The Style tab is used to assign a color to the shape and control its transparency.
Color
The color controls determine the color of the fill and border. To choose a shape color, you can click the
color disclosure arrow, use the color swatch, or drag the eyedropper into the viewer to select a color
from an image. The RGBA sliders or number fields can be used to enter each color channel’s value or
the strength of the alpha channel.
Allow Combining
When this checkbox is enabled, the alpha channel value is maintained even when passing through
other nodes downstream that may cause the shape to overlap with copies of itself. When disabled, the
alpha channel value may increase when the shape overlaps itself.
For instance, if an ellipse’s alpha channel is set to .5, enabling the Allow Combining checkbox
maintains that value even if the shape passes through a Duplicate or Grid node that causes the shape
to overlap. Disabling the checkbox causes the alpha channel values to be compounded at each
overlapping area.
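One plausible model of this compounding is standard "over" accumulation, where n overlapping layers of alpha a build up to 1 - (1 - a)^n. This is an assumption for illustration, with a hypothetical function name, not a statement of Fusion's exact math:

```python
def overlapped_alpha(alpha, layers, allow_combining):
    """Alpha where a shape overlaps copies of itself across
    `layers` layers. With Allow Combining enabled, the set alpha
    is maintained; disabled, coverage accumulates the way stacked
    semi-transparent layers do."""
    if allow_combining:
        return alpha
    return 1.0 - (1.0 - alpha) ** layers
```

Under this model, a 0.5 alpha shape overlapping itself once reads as 0.75 with combining disabled, but stays at 0.5 with combining enabled.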
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sExpand
The sExpand node is used to dilate or erode shapes. Like almost all Shape nodes, you can only view
the sExpand node’s results through a sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor.
– Input1: [orange, required] This input accepts the output of another shape node. The shape or
compound shape connected to this input is either eroded or dilated.
Star and ellipse shapes combined in an sBoolean node, then output to an sExpand for dilating or eroding
Inspector
Controls
The Controls tab includes all of the parameters for the sExpand node.
Amount
A positive value dilates the shape while a negative value erodes it.
Miter Limit
The Miter parameter is only displayed when the Miter or Miter Clip border style is selected. The miter
limit determines when the pointed edges become beveled based on the shape’s thickness.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sGrid
The sGrid node replicates the shape on an X and Y grid and adds the ability to offset the rows and
columns. Like almost all Shape nodes, you can only view the sGrid node’s results through a
sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor.
– Input1: [orange, required] This input accepts the output of another Shape node. The shape
connected to this input is replicated on a custom grid.
An sEllipse shape connected to an sGrid and then output to an sRender for viewing and combining with other elements
Controls
The Controls tab is used to determine the number of grid cells and their offset position.
X and Y Offset
Sets the X and Y distance between the rows and columns. An offset value of 0.0 places all the rows
and columns on top of each other. Entering an X Offset of 1.0 spreads the columns across the full
width of the frame.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sJitter
The sJitter node is most often used to randomly position an array of shapes generated from a sGrid or
sDuplicate node. However, it includes an auto-animating random mode that can be used to distort and
randomly jitter single shapes.
Like almost all Shape nodes, you can only view the sJitter node’s results through a sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor.
– Input1: [orange, required] This input accepts the output of another Shape node. The shape
connected to this input is offset, distorted, and animated based on the sJitter node settings.
An array of shapes created by the sGrid node input into an sJitter node to randomly offset or scale the shapes
Inspector
Controls
The Controls tab offers range sliders that determine the variation amount for offset, size, and rotation.
The Point Jitter parameters are used to offset the invisible points that create the vector shapes.
Jitter Mode
The Jitter Mode menu allows you to choose between static position and size offsets or enabling an
auto-animation mode. Leaving the default Fixed selection allows you to offset a grid of shapes,
animating with keyframes or modifiers if needed. The Random menu selection auto-animates the
parameters based on the range you define using the range sliders. If all the range sliders are left in the
default position, no random animation is created. Increasing the range on any given parameter will
randomly animate that parameter between the range slider values.
Shape Rotate
This parameter rotates each shape in an array.
Point Jitter
The X and Y Point Jitter parameters use the vector control points to distort the shape. This can be
used to give a distressed appearance to ellipses or wobbly animation to other shapes.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sMerge
The sMerge node combines shapes similar to a standard Merge node, except the sMerge node can
accept more than two shape inputs.
Like almost all Shape nodes, you can only view the sMerge node’s results through a sRender node.
External Inputs
The node displays only two inputs at first, but as each shape node is connected, a new input appears
on the node, ensuring there is always one free to add a new shape into the composite.
– Input[#]: These multi-colored inputs are used to connect multiple Shape nodes. There is no
limit to the number of inputs this node can accept. The node dynamically adds more inputs as
needed, ensuring that there is always at least one input available.
Inspector
Controls
The only control for the sMerge node is the Override Axis checkbox, which overrides the shape’s axis.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sNGon
The sNGon node is used to create multi-sided shapes like triangles, pentagons, and octagons. Like
almost all Shape nodes, you can only view the sNGon node’s results through a sRender node.
External Inputs
This node generates shapes and does not have any inputs.
Inspector
Controls
The Controls tab is used to define multi-sided shape characteristics, including fill, border, size,
and position.
Solid
When enabled, the Solid checkbox fills the NGon shape with the color defined in the Style tab. When
disabled, an outline created by the Border Width control is displayed, and the center is made
transparent.
Border Width
This parameter expands or contracts the border around the shape. Although it can be used when the
Solid checkbox is enabled, it is primarily used to determine the outline thickness when the checkbox
is disabled.
Border Style
The Border Style parameter controls how the sides of the NGon join at the corners. There are three
styles provided as options. Bevel squares off the corners. Round creates rounded corners. Miter
maintains pointed corners.
Cap style
When the Solid checkbox is disabled, three cap style options are displayed. The cap styles can create
lines with flat, rounded, or squared ends. Flat caps have flat, squared ends, while rounded caps have
semi-circular ends. Squared caps have projecting ends that extend half the line width beyond the end
of the line.
The caps are not visible unless the length is below 1.0.
Length
The Length parameter is only displayed when the Solid checkbox is disabled. A length of 1.0 is a
closed shape. Setting the length below 1.0 creates an opening or gap in the outline. Keyframing the
Length parameters allows you to create write-on style animations.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. So, an X offset of 0.0 is centered and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
Width/Height
The Width and Height parameters determine the horizontal and vertical size of the NGon. If the values
are identical, then all sides are of equal length.
Angle
The Angle parameter rotates the shape based on the center axis.
Style Tab
Style
The Style tab is used to assign a color to the shape and control its transparency.
Color
The Color controls determine the color of the fill and border. To choose a shape color, you can click
the color disclosure arrow, use the color swatch, or drag the eyedropper into the viewer to select a
color from an image. The RGBA sliders or number fields can be used to enter each color channel’s
value or the strength of the alpha channel.
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sOutline
The sOutline node is used to create outlines from merged or boolean compound shapes. The
individual shapes retain their own style, color, size, position, and other characteristics. The only
difference is the border thickness, border style, position, and length are applied to all incoming shapes
uniformly in the sOutline node.
Like almost all shape nodes, you can only view the sOutline node’s results through a sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor:
– Input1: [orange, required] This input accepts another shape node’s output, but more likely a
compound shape from an sMerge or sBoolean node. An outline is created from the compound
shape connected to this input.
Inspector
Controls
The Controls tab is used to define the outline thickness, border and cap style, position, and length that
is applied to the compound shape connected to the input.
Thickness
This parameter controls the width of the outline.
Border Style
The Border Style parameter controls how the outline joins at the corners. There are three styles
provided as options. Bevel squares off the corners. Round creates rounded corners. Miter maintains
pointed corners.
Cap style
Three Cap Style options are used to create lines with flat, rounded, or squared ends. Flat caps have
flat, squared ends, while rounded caps have semi-circular ends. Squared caps have projecting ends
that extend half the line width beyond the end of the line.
The caps are not visible unless the length is below 1.0.
Position
The Position parameter allows you to position the starting point of the shape. When used in
conjunction with the Length parameter, it positions the gap in the outline.
Length
The Length parameter controls the end position of the outline. A length of 1.0 is a closed shape.
Setting the length below 1.0 creates an opening or gap in the outline. Keyframing the Length
parameters allows you to create write-on style animations.
sRectangle
The sRectangle node is used to create rectangular shapes. Like almost all shape nodes, you can only
view the sRectangle node’s results through a sRender node.
External Inputs
This node generates shapes and does not have any inputs.
An sRectangle node connecting to an sDuplicate node, and then viewed using an sRender node
Inspector
Solid
When enabled, the Solid checkbox fills the rectangle shape with the color defined in the Style tab.
When disabled, an outline created by the Border Width control is displayed, and the center is made
transparent.
Border Width
This parameter expands or contracts the border around the shape. Although it can be used when the
Solid checkbox is enabled, it is primarily used to determine the outline thickness when the checkbox
is disabled.
Border Style
The Border Style parameter controls how the sides of the rectangle join at the corners. There are
three styles provided as options. Bevel squares off the corners. Round creates rounded corners. Miter
maintains pointed corners.
Cap style
When the Solid checkbox is disabled, three Cap Style options are displayed. The cap styles can create
lines with flat, rounded or squared ends. Flat caps have flat, squared ends, while rounded caps have
semi-circular ends. Squared caps have projecting ends that extend half the line width beyond the end
of the line.
The caps are not visible unless the length is below 1.0.
Position
The Position parameter is only displayed when the Solid checkbox is disabled. It allows you to position
the starting point of the shape. When used in conjunction with the Length parameter, it positions the
gap in the outline.
Length
The Length parameter is only displayed when the Solid checkbox is disabled. A length of 1.0 is a
closed shape. Setting the length below 1.0 creates an opening or gap in the outline. Keyframing the
Length parameters allows you to create write-on style animations.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. So an X offset of 0.0 is centered and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
Width/Height
The Width and Height parameters determine the horizontal and vertical size of the rectangle. If the
values are identical, then you have a square.
Corner Radius
This parameter determines whether the corners of the rectangle are sharp or rounded. A value of 0.0
produces sharp corners, while a value of 1.0 creates a circle from a starting square shape or a pill
shape from a rectangle.
Angle
The Angle parameter rotates the shape based on the center axis.
Style
The Style tab is used to assign color to the shape and control its transparency.
Color
The Color controls determine the color of the fill and border from the sRectangle node.
To choose a shape color, you can click the color disclosure arrow, use the color swatch, or drag
the eyedropper into the viewer to select a color from an image. The RGBA sliders or number fields
can be used to enter the value of each color channel or the strength of the alpha channel.
Allow Combining
When this checkbox is enabled, the alpha channel value is maintained even when passing through
other nodes downstream that may cause the shape to overlap with copies of itself. When disabled, the
alpha channel value may increase when the shape overlaps itself. For instance, if a rectangle alpha
channel is set to .5, enabling the Allow Combining checkbox maintains that value even if the shape
passes through a duplicate or grid node that causes the shape and alpha channel to overlap. Disabling
the checkbox causes the alpha channel values to be compounded at each overlapping area.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sRender
The sRender node converts the vector shapes to an image. The output of the sRender allows the
vector shapes to be integrated with other elements in a composite.
Inputs
There are two inputs on the sRender node: one for the final shape and an optional Effect Mask input.
– Input1: [orange, required] This input accepts the output of your final shape node. A rendered
bitmap image is created from the sRender node for compositing into the rest of your comp.
– Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the displayed area to only those pixels within the mask.
Multiple Shape nodes connected to the sRender node and then processed and composited with a title
Image Tab
The controls in this tab are used to set the resolution, color depth, and pixel aspect of the image
produced by the sRender node.
Process Mode
Use this menu control to select the Fields Processing mode used by Fusion to render the resulting
image. The default Full Frames option is appropriate for progressive formats.
Width/Height
This pair of controls is used to set the Width and Height dimensions of the image to be created by
the sRender node.
Pixel Aspect
This control is used to specify the Pixel Aspect ratio of the created images. An aspect ratio of 1:1 would
generate a square pixel with the same dimensions on either side (like a computer display monitor), and
an aspect of 0.9:1 would create a slightly rectangular pixel (like an NTSC monitor).
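The effect of pixel aspect on the displayed picture can be sketched as follows (hypothetical function name, illustrative only): the storage dimensions are scaled by the pixel aspect ratio before computing the picture's shape.

```python
def display_aspect(width, height, pixel_aspect):
    """Aspect ratio of the displayed picture once the pixel aspect
    ratio is applied: square pixels (1.0) leave the storage aspect
    unchanged, while NTSC-style 0.9 pixels narrow the picture."""
    return (width * pixel_aspect) / height
```

For example, a 720 x 480 image with 0.9 pixels displays at 1.35:1 rather than its 1.5:1 storage aspect.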
NOTE: Right-click on the Width, Height, or Pixel Aspect controls to display a menu listing the
file formats defined in the preferences Frame Format tab. Selecting any of the listed options
will set the width, height, and pixel aspect to the values for that format, accordingly.
Auto Resolution
When this checkbox is selected, the width, height, and pixel aspect of the image created by the node
will be locked to values defined in the composition’s Frame Format preferences. If the Frame Format
preferences change, the resolution of the image produced by the node will change to match. Disabling
this option can be useful to build a composition at a different resolution than the eventual target
resolution for the final render.
Depth
The Depth button array is used to set the pixel color depth of the image created by the sRender node.
32-bit pixels require 4X the memory of 8-bit pixels but have far greater color accuracy. Float pixels
allow high dynamic range values outside the normal 0..1 range, for representing colors that are brighter
than white or darker than black.
Remove Curve
Depending on the selected Gamma Space or on the Gamma Space found in Auto mode, the Gamma
Curve is removed from, or a log-lin conversion is performed on, the material, effectively converting it
to a linear output space.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sStar
The sStar node is used to create multi-point star shapes. Like almost all Shape nodes, you can view
the sStar node’s results only through an sRender node.
External Inputs
This node generates shapes and does not have any inputs.
An sStar node connecting to an sDuplicate node, and then viewed using an sRender node
Inspector
Controls
The Controls tab is used to define the star shape’s characteristics, including number of points, depth,
fill, border, size, and position.
Points
This slider determines the number of points or arms on the star.
Depth
The depth slider controls the inner radius or width of the arms. A depth of 0.001 makes hair-thin arms,
while a depth of 1.0 makes a faceted circle.
Solid
When enabled, the Solid checkbox fills the star shape with the color defined in the Style tab. When
disabled, an outline created by the Border Width control is displayed, and the center is made
transparent.
Border Width
This parameter expands or contracts the border around the shape. Although it can be used when the
Solid checkbox is enabled, it is primarily used to determine the outline thickness when the checkbox
is disabled.
Cap style
When the Solid checkbox is disabled, three cap style options are displayed. The cap styles can create
lines with flat, rounded or squared ends. Flat caps have flat, squared ends while rounded caps have
semi-circular ends. Squared caps have projecting ends that extend half the line width beyond the end
of the line.
The caps are not visible unless the Length parameter is set below 1.0.
Position
The Position parameter is only displayed when the Solid checkbox is disabled. It allows you to position
the starting point of the shape. When used in conjunction with the length parameter, it positions the
gap in the outline.
Length
The Length parameter is only displayed when the Solid checkbox is disabled. A length of 1.0 is a
closed shape. Setting the length below 1.0 creates an opening or gap in the outline. Keyframing the
Length parameter allows you to create write-on style animations.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. So an X offset of 0.0 is centered and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
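As an illustrative sketch (not Fusion code), the normalized offsets described above can be mapped to pixel positions like this; `offset_to_pixels` is a hypothetical helper, and both axes are normalized against the frame width as the text states:

```python
def offset_to_pixels(x_offset, y_offset, width, height):
    """Map normalized shape offsets (0.0 = centered) to pixel coordinates.

    Per the description above, offsets are normalized against the frame
    width, so both X and Y use the width as their scale. An X offset of
    0.5 lands the shape's center on the right edge of the frame.
    Illustrative sketch only, not Fusion's internal code.
    """
    px = width / 2 + x_offset * width
    py = height / 2 + y_offset * width  # note: normalized on width, per the manual
    return px, py
```

For example, in a 1920 × 1080 frame, an X offset of 0.5 places the shape's center at pixel column 1920, the right edge.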
Width/Height
The Width and Height parameters determine the horizontal and vertical size of the star. If the values
are identical, all arms of the star are of equal length.
Angle
The Angle parameter rotates the shape based on the center axis.
Style Tab
Color
The Color controls determine the color of the fill and border from the sStar node. To choose a shape
color, you can click the color disclosure arrow and use the color swatch, or drag the eyedropper into
the viewer to select a color from an image. The RGBA sliders or number fields can be used to enter
the value of each color channel or the strength of the alpha channel.
Allow Combining
When this checkbox is enabled, the alpha channel value is maintained even when passing through
other nodes downstream that may cause the shape to overlap with copies of itself. When disabled, the
alpha channel value may increase when the shape overlaps itself. For instance, if a star’s alpha channel
is set to 0.5, enabling the Allow Combining checkbox maintains that value even if the shape passes
through a duplicate or grid node that causes the shape and alpha channel to overlap. Disabling the
checkbox causes the alpha channel values to be compounded at each overlapping area.
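The compounding behavior with Allow Combining disabled follows standard Over compositing: each overlapping copy adds its alpha on top of what is already there. A minimal sketch (illustrative only, not Fusion code; `compounded_alpha` is a hypothetical helper):

```python
def compounded_alpha(alpha, copies):
    """Resulting alpha after `copies` identical shapes overlap via the
    standard Over operator -- the behavior you get with Allow Combining
    disabled. With it enabled, the alpha would stay at `alpha`.
    Illustrative sketch, not Fusion's internal code.
    """
    out = 0.0
    for _ in range(copies):
        out = alpha + out * (1.0 - alpha)  # Over: a + b * (1 - a)
    return out
```

So a star with 0.5 alpha overlapping one duplicate of itself yields 0.75 alpha in the overlap when combining is allowed to compound.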
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sTransform
The sTransform node is used to add an additional set of transform controls to the existing controls that
are contained in Shape nodes. These additional transforms can be used to create hierarchical
animations. For instance, you can use an sStar node’s built-in Angle control to spin the star around.
The star can then be output to an sTransform node, whose Rotation control can be used to orbit
the star around the frame.
Like almost all Shape nodes, you can view the sTransform node’s results only through an sRender node.
An sStar node connecting to an sTransform node, and then viewed using an sRender node
Inspector
Controls
The Controls tab is used to add a set of transform controls to the incoming shape.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. So an X offset of 0.0 is centered and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
X and Y Size
The X and Y Size parameters determine the horizontal and vertical scaling of the incoming shape. If the
values are different, the shape is stretched disproportionately from its original design.
Rotation
The dial rotates the shape based on the pivot controls.
Transform Axis
Check this box to apply the transform to the shape’s axis.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Common Controls
Nodes that handle Shape operations share a number of identical controls in the Inspector. This section
describes controls that are common amongst Shape nodes.
Settings Tab
The Settings tab in the Inspector can be found on every Shape node. Most of the controls listed here
are found only in the sRender node, but a few are common to all Shape nodes.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower left corner of the node when the full
tile is displayed or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more information on
scripting nodes, see the Fusion scripting documentation.
Stereo Nodes
This chapter details the Stereo nodes available in Fusion. Stereoscopic nodes are
available only in Fusion Studio and DaVinci Resolve Studio.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Anaglyph [Ana] 1332
Combiner [Com] 1335
Disparity [Dis] 1337
Disparity To Z [D2Z] 1340
Global Align [GA] 1344
New Eye [NE] 1346
Splitter [Spl] 1349
Stereo Align [SA] 1350
Z To Disparity [Z2D] 1356
The Common Controls 1359
Anaglyph [Ana]
NOTE: The Anaglyph node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The three inputs on the Anaglyph node are the left eye input, right eye input, and effect mask.
– Left Eye Input: The orange input is used to connect the 2D image representing the left eye in
the stereo comp.
– Right Eye Input: The green input is used to connect the 2D image representing the right eye in
the stereo comp.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the
stereoscopic creation to only those pixels within the mask.
Controls Tab
Using the parameters in the Controls tab, the separate images are combined to create a
stereoscopic output.
Method
In addition to the color used for encoding the image, you can also choose five different methods from
the Method menu: Monochrome, Half-color, Color, Optimized, and Dubois. These methods are
described below.
– Monochrome: Assuming a Red/Cyan Color Type, the luminance of the left image is placed in the
output red channel, and the luminance of the right image is placed in the output green and
blue channels.
– Half-color: The luminance of the left image is placed in the output red channel, while the right
eye contains the color channels from the right image that match the glasses’ color for that eye.
Examples of the Monochrome and Half-color methods
– Color: The left eye contains the color channels from the left image that match the glasses’ color
for that eye. The right eye contains the color channels from the right image that match the glasses’
color for that eye.
– Optimized: Used with red/cyan glasses, for example, the resulting brightness of what shows
through the left eye is substantially less than the brightness of the right eye. Using typical ITU-R
601 ratios for luminance as a guide, the red eye would give 0.299 brightness, while the cyan
eye would give 0.587+0.114=0.701 brightness—over twice as bright. The difference in brightness
between the eyes can produce what are referred to as retinal rivalry or binocular rivalry, which
can destroy the stereo effect. The Optimized method generates the right eye in the same fashion
as the Color method. The left eye also uses the green and blue channels but in combination with
increased brightness that reduces retinal rivalry. Since it uses the same two channels from each
of the source images, it doesn’t reproduce the remaining one. For example, 1.05× the green and
0.45× the blue channels of the left image are placed in the red output channel, and the green and
blue channels of the right image are placed in the output green and blue channels. Red from both
the left and right images is not used.
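The brightness arithmetic above, and the Optimized method’s left-channel weighting, can be sketched as follows (an illustrative Python sketch using the Rec. 601 weights quoted above; `optimized_red_output` is a hypothetical helper, not a Fusion function):

```python
# Rec. 601 luma weights, as quoted above (illustrative sketch, not Fusion code)
R_W, G_W, B_W = 0.299, 0.587, 0.114

# Brightness of a white pixel seen through each lens of red/cyan glasses
left_brightness = R_W            # red lens passes only the red channel
right_brightness = G_W + B_W     # cyan lens passes green + blue: 0.701

# Optimized method: the left (red) output channel is built from the left
# image's green and blue channels with a brightness boost
def optimized_red_output(g_left, b_left):
    return 1.05 * g_left + 0.45 * b_left
```

The roughly 2.3:1 brightness imbalance between the eyes is what the boosted green/blue mix in the left channel is compensating for.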
Examples of the Color and Optimized methods
– Dubois: Images with fairly saturated colors can produce retinal rivalry with the Half-color, Color,
and Optimized methods because the color is visible in only one eye. For example, with red/
cyan glasses, a saturated green object looks black in the red eye, and green in the cyan eye.
The Dubois method uses the spectral characteristics of (specifically) red/cyan glasses and CRT
(Trinitron) phosphors to produce a better anaglyph and in the end, tends to reduce retinal rivalry
caused by such color differences in each eye. It also tends to reduce ghosting produced when
one eye ‘leaks’ into the other eye. The particular matrix calculated in Fusion is designed for
red/cyan glasses.
Example of the Dubois method
Swap Eyes
Allows you to swap the left and right eye inputs easily.
Horiz Stack
Takes an image that contains both left and right eye information stacked horizontally. These images
are often referred to as “crosseyed” or “straight stereo” images. You only need to connect that one
image to the orange input of the node. It then creates an image half the width of the original input,
using the left half of the original image for the left eye and the right half of the original image for the
right eye. Color encoding takes place using the specified color type and method.
Vert Stack
Takes an image that contains both left and right eye information stacked vertically. You only need to
connect that one image to the orange input of the node. It then creates an image half the height of the
original input, using the bottom half of the original image for the left eye and the top half of the original
image for the right eye. Color encoding takes place using the specified color type and method.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Combiner [Com]
NOTE: The Combiner node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The two inputs on the Combiner node are used to connect the two images that get combined in a
stacked stereo image.
– Image 1 Input: The orange input is used to connect the 2D image representing the left eye in
the stereo comp.
– Image 2 Input: The green input is used to connect the 2D image representing the right eye in
the stereo comp.
Left and right eye images are connected into a Combiner node to generate a stacked stereo image.
Inspector
Controls Tab
To stack the images, the left eye image is connected to the orange input, and the right eye image is
connected to the green input of the node.
Combine
The Combine menu provides three options for how the two images are made into a stacked
stereo image.
– None: No operation will take place. The output image is identical to the left eye input.
– Horiz: Both images will be stacked horizontally, or side-by-side, with the image connected to the
left eye input on the left. This will result in an output image double the width of the input image.
– Vert: Both images will be stacked vertically, or on top of each other, with the image connected
to the left eye input on the bottom. This will result in an output image double the height of the
input image.
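The stacking layouts described above can be sketched with plain Python lists standing in for image rows (listed top to bottom); this is an illustration of the layout only, not Fusion code:

```python
def stack_horizontal(left, right):
    """Combiner Horiz mode: place the two equal-sized eyes side by side,
    left eye on the left. Doubles the width. Illustrative sketch."""
    return [l_row + r_row for l_row, r_row in zip(left, right)]

def stack_vertical(left, right):
    """Combiner Vert mode: stack the eyes with the left eye on the bottom,
    per the description above. Since rows are listed top-to-bottom, the
    right eye's rows come first. Doubles the height. Illustrative sketch."""
    return right + left
```

For instance, two single-row "images" `[[1, 2]]` and `[[3, 4]]` stack horizontally into `[[1, 2, 3, 4]]`.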
Add Metadata
Metadata is carried along with the images and can be added to the existing metadata using this
checkbox. To view Metadata, use the viewer’s SubView menu set to Metadata.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Disparity [Dis]
NOTE: The Disparity node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The two inputs on the Disparity node are used to connect the left and right images.
– Left Input: The orange input is used to connect either the left eye image or the stacked image.
– Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
Outputs
Unlike most nodes in Fusion, Disparity has two outputs for the left and right eye.
– Left Output: This holds the left eye image with a new disparity channel, or a Stacked Mode image
with a new disparity channel.
– Right Output: This holds the right eye image with a new disparity channel. This output is visible
only if Stack Mode is set to Separate.
Left and right eye images are connected into a Disparity node to generate and render out a stereo image
Inspector
Advanced
The Advanced settings section has parameter controls to tune the Disparity map calculations. The
default settings have been chosen to be the best default values from experimentation with many
different shots and should serve as a good standard. In most cases, tweaking of the Advanced
settings is not needed.
Smoothness
This controls the smoothness of the disparity. Higher values help deal with noise, while lower values
bring out more detail.
Edges
This slider is another control for smoothness but applies it based on the color channel. It tends to have
the effect of determining how edges in the disparity follow edges in the color images. When it is set to
a lower value, the disparity becomes smoother and tends to overshoot edges. When it is set to a
higher value, edges in the disparity align more tightly with the edges in the color images, and details
from the color channels start to slip into the disparity, which is not usually desirable.
As a rough guideline, if you are using the disparity to produce a Z channel for post effects like depth of
field, experiment with higher values, but if you are using the disparity to do interpolation, you might
want to keep the values lower.
In general, if the Edges slider is set too high, there can be problems with streaked edges when
the disparity is used for interpolation.
Match Weight
This controls how matching is done between neighboring pixels in the left image and neighboring
pixels in the right image. When a lower value is used, large structural color features are matched.
When higher values are used, small sharp variations in the color are matched. Typically, a good value
for this slider is in the [0.7, 0.9] range. Setting this option higher tends to improve the matching results
in the presence of differences due to smoothly varying shadows or local lighting variations between
the left and right images. You should still color match the initial images so they are as similar as
possible; this option tends to help with local variations (e.g., lighting differences due to light passing
through a mirror rig).
Mismatch Penalty
This controls how the penalty for mismatched regions grows as they become more dissimilar. The
slider provides a choice between a balance of Quadratic (lower values) and Linear (higher values)
penalties. Lower value Quadratic settings strongly penalize large dissimilarities, while higher value
Linear settings are more robust to dissimilar matches. Moving this slider toward lower values tends to give a
disparity with more small random variations in it, while higher values produce smoother, more visually
pleasing results.
Warp Count
Turning down the Warp Count makes the disparity computations faster. In particular, the computational
time depends linearly upon this option. To understand its effect, note that the Disparity algorithm
progressively warps the left image until it matches the right image.
Iteration Count
Turning down the Iteration Count makes the disparity computations faster. In particular, the
computational time depends linearly upon this option. Just like adjusting Warp Count, at some point
adjusting this option higher will yield diminishing returns and will not produce significantly better
results. By default, this value is set to something that should converge for all possible shots and can be
tweaked lower fairly often without reducing the disparity’s quality.
Filtering
This menu determines the filtering operations used during flow generation. Catmull-Rom filtering will
produce better results, but at the same time, it increases the computation time steeply.
Stack Mode
This menu determines how the input images are stacked.
When set to Separate, the Right Input and Output will appear, and separate left and right images must
be connected.
Swap Eyes
Enabling this checkbox causes the left and right images to swap.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Disparity To Z [D2Z]
NOTE: The Disparity to Z node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The three inputs on the Disparity To Z node are used to connect the left and right images and a
camera node.
– Left Input: The orange input is used to connect either the left eye image or the stack image.
– Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
– Stereo Camera: The magenta input is used to connect a stereo camera node.
Outputs
Unlike most nodes in Fusion, Disparity To Z has two outputs for the left and right eye.
– Left Output: This holds the left eye image with a new Z channel, or a Stacked Mode image with
a new disparity channel.
– Right Output: This holds the right eye image with a new Z channel. This output is visible only if
Stack Mode is set to Separate.
Inspector
NOTE: Z values are negative, becoming more negative the farther an object is from the camera.
The viewers display only the 0.0 to 1.0 color range, so other data must be converted via a
normalization method to fit in the 0–1 display range. To do this, right-click in the viewer and
choose Options > Show Full Color Range.
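A minimal sketch of what such a normalization does, assuming a simple linear remap of the Z range into 0–1 (the viewer’s actual method is not specified here, and `normalize_z` is a hypothetical helper):

```python
def normalize_z(z, z_min, z_max):
    """Linearly remap a (negative) Z value into the 0..1 display range.
    Hypothetical illustration of a normalization method; the viewer's
    exact remapping is not documented here."""
    return (z - z_min) / (z_max - z_min)
```

A Z value of -5.0 in a scene spanning -10.0 to 0.0 would display as mid-gray, 0.5.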
Output Z to RGB
Rather than keeping the Z values within the associated aux channel only, they will be copied into the
RGB channels for further modification with any of Fusion’s nodes.
Refine Z
Enabling this checkbox refines the depth map based upon the RGB channels. The refinement causes
edges in the flow to align more closely with edges in the color channels. The downside is that
unwanted details in the color channels start to show up in the flow. You may want to experiment with
using this option to soften out harsh edges for Z-channel post effects like depth of field or fogging.
HiQ Only
Activating this checkbox causes the Refine Z option to process only when rendering is set to High
Quality. You can ensure High Quality is enabled by right-clicking to the left or right of the transport
controls in the main toolbar.
Strength
Increasing this slider does two things. It smooths out the depth in constant color regions and moves
edges in the Z channel to correlate with edges in the RGB channels.
Increasing the refinement has the undesirable effect of causing texture in the color channel to show up
in the Z channel. You will want to find a balance between the two.
Radius
This is the radius of the smoothing algorithm.
Stack Mode
This menu determines how the input images are stacked.
When set to Separate, the Right Input and Output will appear, and separate left and right images must
be connected.
Swap Eyes
Enabling this checkbox causes left and right images to be swapped.
TIP: If artistic mode is a little too “artistic” for you and you want more physically-based
parameters to adjust (e.g., convergence and eye separation), create a dummy Camera 3D,
connect it into the Disparity To Z > Camera input, and then fiddle with the Camera 3D’s controls.
Foreground Depth
This is the depth to which Foreground Disparity will be mapped. Think of this as the depth of the
nearest object. Note that values here are positive depth.
Background Depth
This is the depth to which Background Disparity will be mapped. Think of this as the depth of the most
distant object.
Falloff
Falloff controls the shape of the depth curve between the requested foreground and background
depths. When set to Hyperbolic, the disparity-depth curve behaves roughly like depth = constant/
disparity. When set to Linear, the curve behaves like depth = constant * disparity. Hyperbolic tends to
emphasize Z features in the foreground, while linear gives foreground/background features in the Z
channel equal weighting.
Unless there’s a specific reason, choose Hyperbolic, as it is more physically accurate, while Linear
does not correspond to nature and is purely for artistic effect.
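The two falloff curves can be sketched as follows, anchoring the constant so a chosen foreground disparity maps to the requested foreground depth (an illustrative sketch; Fusion’s exact constants and anchoring are not published here, and both helpers are hypothetical):

```python
def hyperbolic_depth(disparity, fg_disparity, fg_depth):
    """Falloff = Hyperbolic: depth = constant / disparity, the physically
    motivated curve. The constant is chosen so the foreground disparity
    lands at the requested foreground depth. Illustrative sketch only."""
    k = fg_depth * fg_disparity
    return k / disparity

def linear_depth(disparity, fg_disparity, fg_depth):
    """Falloff = Linear: depth = constant * disparity, a purely artistic
    mapping, anchored the same way. Illustrative sketch only."""
    k = fg_depth / fg_disparity
    return k * disparity
```

With the foreground anchored at disparity 4.0 and depth 1.0, halving the disparity to 2.0 doubles the hyperbolic depth to 2.0 but halves the linear depth to 0.5, which is why the two curves distribute Z detail so differently.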
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Global Align [GA]
NOTE: The Global Align node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The two inputs on the Global Align node are used to connect the left and right images.
– Left Input: The orange input is used to connect either the left eye image or the stack image.
– Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
Outputs
Unlike most nodes in Fusion, Global Align has two outputs for the left and right eye.
– Left Output: This outputs the newly aligned left eye image.
– Right Output: This outputs the newly aligned right eye image.
A Global Align node used to manually correct left and right eye discrepancies
Controls Tab
The Controls tab includes translation and rotation controls to align the stereo images manually.
Translation X and Y
– Balance: Determines how the global offset is applied to the stereo footage.
– None: No translation is applied.
– Left Only: The left eye is shifted, while the right eye remains unaltered.
– Right Only: The right eye is shifted, while the left eye remains unaltered.
– Split Both: Left and right eyes are shifted in opposite directions.
Rotation
– Balance: Determines how the global rotation is applied to the stereo footage.
– None: No rotation is applied.
– Left Only: The left eye is rotated, while the right eye remains unaltered.
– Right Only: The right eye is rotated, while the left eye remains unaltered.
– Split Both: Left and right eyes are rotated in opposite directions.
Angle
This dial adjusts the angle of the rotation. Keep in mind that the result depends on the Balance
settings. If only rotating one eye by, for example, 10 degrees, a full 10-degree rotation will be applied
to that eye.
When applying rotation in Split mode, one eye will receive a -5 degree and the other eye a
+5 degree rotation.
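The way the Angle dial is distributed across the eyes by the Balance setting can be sketched like this (an illustrative helper, not Fusion code; the sign convention in Split mode is an assumption consistent with the ±5 degree example above):

```python
def split_rotation(angle, balance):
    """Distribute the Angle dial between the eyes per the Balance setting.
    Returns (left_eye_rotation, right_eye_rotation) in degrees.
    Illustrative sketch; the Split sign convention is assumed."""
    if balance == "Left Only":
        return (angle, 0.0)
    if balance == "Right Only":
        return (0.0, angle)
    if balance == "Split Both":
        return (-angle / 2.0, angle / 2.0)  # e.g., 10 degrees -> -5 and +5
    return (0.0, 0.0)  # None: no rotation is applied
```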
Visualization
This control allows for different color encodings of the left and right eye to conveniently examine the
results of the above controls without needing to add an extra Anaglyph or Combiner node.
Set this to None for final output.
Stack Mode
Determines how the input images are stacked.
When set to Separate, the right input and output will appear, and separate left and right images must
be connected.
Swap Eyes
In Stacked mode, this checkbox swaps the left and right images of the stereo pair.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
New Eye [NE]
NOTE: The New Eye node is available only in Fusion Studio and DaVinci Resolve Studio.
Outputs
Unlike most nodes in Fusion, New Eye has two outputs for the left and right eye.
– Left Output: This outputs the left eye image with a new disparity channel, or a Stacked Mode
image with a new disparity channel.
– Right Output: This outputs the right eye image with a new disparity channel. This output is
visible only if Stack Mode is set to Separate.
A New Eye node creates a new stereo image using embedded disparity
Inspector
Enable
The Enable checkbox allows you to activate the left or right eye independently. The New Eye node
replaces the enabled eye with an interpolated eye. For example, if the left eye is your “master” eye and you
are recreating the right eye, you would disable the left eye and enable the right eye.
Lock XY
Locks the X and Y interpolation parameters. When they are unlocked, you can provide separate
interpolation factors for using the X and Y disparity. For example, if you are working with the right eye
and you have the X Interpolation slider set to 1.0 and the Y Interpolation slider set to -1.0, you will be
effectively interpolating the left eye onto the right eye but vertically aligned to the left eye.
XY Interpolation Factor
Interpolation determines where the interpolated frame is positioned relative to the two source frames:
a slider position of -1.0 outputs the left frame, a slider position of 1.0 outputs the right frame, and a
slider position of 0.0 outputs a result that is halfway between the two.
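The slider-to-blend mapping can be sketched as a simple linear weight between the two source frames (illustrative only; `interp_weight` is a hypothetical helper, and the actual interpolation warps by disparity rather than crossfading):

```python
def interp_weight(slider):
    """Map the -1.0..1.0 interpolation slider to (left, right) weights:
    -1.0 -> pure left frame, 0.0 -> halfway, 1.0 -> pure right frame.
    Illustrative sketch of the slider's meaning, not the warp itself."""
    t = (slider + 1.0) / 2.0      # remap -1..1 to 0..1
    return (1.0 - t, t)           # (left weight, right weight)
```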
Depth Ordering
The Depth Ordering is used to determine which parts of the image should be rendered on top. When
warping images, there is often overlap. When the image overlaps itself, there are two options for which
values should be drawn on top.
– Largest Disparity On Top: The larger disparity values will be drawn on top in the overlapping
image sections.
– Smallest Disparity On Top: The smaller disparity values will be drawn on top in the overlapping
image sections.
Clamp Edges
Under certain circumstances, this option can remove the transparent gaps that may appear on the
edges of interpolated frames. Clamp Edges will cause a stretching artifact near the edges of the frame
that is especially visible with objects moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use Clamp Edges only to correct small gaps around the
edges of an interpolated frame.
Softness
Helps to reduce the stretchy artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this can
lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Splitter [Spl]
NOTE: The Splitter node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The input on the Splitter node is used to connect a stacked stereo image.
– Left Input: The orange input is used to connect a stacked stereo image.
Outputs
Unlike most nodes in Fusion, the Splitter node has two outputs for the left and right eye.
– Left Output: This outputs the left eye image.
– Right Output: This outputs the right eye image.
Inspector
The Splitter Controls tab
Controls Tab
The Controls tab is used to define the type of stacked image connected to the node’s input.
Split
The Split menu contains three options for determining the orientation of the stacked input image.
– None: No operation takes place. The output image on both outputs is identical to the input image.
– Horiz: The node expects a horizontally stacked image. This will result in two output images, each
being half the width of the input image.
– Vert: The node expects a vertically stacked image. This will result in two output images, each
being half the height of the input image.
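The Horiz split can be sketched with Python lists standing in for image rows (an illustration of the layout only, not Fusion code; `split_horizontal` is a hypothetical helper):

```python
def split_horizontal(stacked):
    """Splitter Horiz mode: divide a horizontally stacked frame into two
    half-width images -- left half -> left eye, right half -> right eye.
    Illustrative sketch; rows are lists of pixel values."""
    w = len(stacked[0]) // 2
    left = [row[:w] for row in stacked]
    right = [row[w:] for row in stacked]
    return left, right
```

This is the inverse of the Combiner’s Horiz mode: a single-row stacked frame `[[1, 2, 3, 4]]` splits back into `[[1, 2]]` and `[[3, 4]]`.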
Swap Eyes
Allows you to easily swap the left and right eye outputs.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Stereo Align [SA]
NOTE: The Stereo Align node is available only in Fusion Studio and DaVinci Resolve Studio.
By combining these operations in one node, you can execute them using only a single image
resampling. In essence, this node can be thought of as applying scales and translation to the
disparities and then using the modified disparities to interpolate between the views.
NOTE: Changing the eye separation can cause holes to appear, and it may not be possible to
fill them since the information needed may not be in either image. Even if the information is
there, the disparity may have mismatched the holes. You may need to fill the holes manually.
This node modifies only the RGBA channels.
TIP: Stereo Align does not interpolate the aux channels but instead destroys them. In
particular, the disparity channels are consumed/destroyed. Add another Disparity node after
the StereoAlign if you want to generate Disparity for the realigned footage.
Inputs
The two inputs on the Stereo Align node are used to connect the left and right images.
– Left Input: The orange input is used to connect either the left eye image or the stack image.
– Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
Outputs
Unlike most nodes in Fusion, Stereo Align has two outputs for the left and right eye.
– Left Output: This outputs the left eye image with a new disparity channel, or a Stacked Mode
image with a new disparity channel.
– Right Output: This outputs the right eye image with a new disparity channel. This output is
visible only if Stack Mode is set to Separate.
Inspector
Controls Tab
Vertical Alignment
This option determines how the vertical alignment is split between two eyes. Usually, the left eye is
declared inviolate, and the right eye is aligned to it to avoid resampling artifacts.
Apply to
– Right: Only the right eye is adjusted.
– Left: Only the left eye is adjusted.
– Both: The vertical alignment is split evenly between the left and right eyes.
Mode
– Global: The eyes are simply translated up or down by the Y-shift to match up.
– Per Pixel: The eyes are warped pixel-by-pixel using the disparity to vertically align.
Keep in mind that this can introduce sampling artifacts and edge artifacts.
Y-shift
Y-shift is available only when the Mode menu is set to Global. You can either adjust the Y-shift
manually to get a match or drag the Sample button into the viewer, which picks from the disparity
channel of the left eye. Also remember, if you use this node to modify disparity, you can’t use the
Sample button while viewing the node’s output.
Convergence Point
The Convergence Point section is used as a global X-translation of L/R images.
Apply to
This menu determines which eyes are affected by convergence. You can choose to apply the
convergence to the left eye, right eye, or split between the two. In most cases, this will be set to Split. If
you set the eyes to Split, then the convergence will be shared 50-50 between both eyes. Sharing the
convergence between both eyes means you get half the shift in each eye, which in turn means smaller
holes and artifacts that need to be fixed later. The tradeoff is that you’ve resampled both eyes rather
than keeping one eye as a pure reference master.
X-shift
You can either use the slider to adjust the X-shift manually to get a match or use the Sample button to
pick from the disparity channels for easy point-to-feature alignment.
Snap
You can snap the global shift to whole pixels using this option. In this mode, there is no resampling of
the image, but rather a simple shift is done so there will be no softening or image degradation.
Separation
This is a scale factor for eye separation.
– When set to 0.0, this leaves the eyes unchanged.
– Setting it to 0.1 increases the shifts of all objects in the scene by a factor of 10% in each eye.
– Setting it to -0.1 scales the shifts of all objects 10% smaller.
Unlike the Split option for vertical alignment, which splits the alignment effect 50-50 between both
eyes, the Both option will apply 100-100 eye separation to both eyes. If you are changing eye
separation, it can be a good idea to enable per-pixel vertical alignment, or the results of interpolating
from both frames can double up.
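The documented behavior of the Separation factor can be sketched as a simple scale applied to each disparity value. This is an illustration only, not Fusion's implementation; the per-pixel warp that re-renders each eye from the scaled disparities is omitted:

```python
# Simplified sketch of how the Separation scale factor adjusts per-pixel
# horizontal disparity, per the documented behavior: 0.0 leaves the eyes
# unchanged, 0.1 increases all shifts by 10%, -0.1 decreases them by 10%.
# Illustrative only -- not Fusion's internal implementation.

def scale_disparity(disparity_x, separation):
    """Scale a horizontal disparity value (in pixels) by the Separation factor."""
    return disparity_x * (1.0 + separation)

shifts = [4.0, -2.5, 0.0]                           # example disparities in pixels
print([scale_disparity(d, 0.0) for d in shifts])    # unchanged
print([scale_disparity(d, 0.1) for d in shifts])    # shifts 10% larger
print([scale_disparity(d, -0.1) for d in shifts])   # shifts 10% smaller
```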
Depth Ordering
The Depth Ordering is used to determine which parts of the image should be rendered on top. When
warping images, there is often overlap. When the image overlaps itself, there are two options for which
values should be drawn on top.
– Largest Disparity On Top: The larger disparity values will be drawn on top in the overlapping
image sections.
– Smallest Disparity On Top: The smaller disparity values will be drawn on top in the overlapping
image sections.
Clamp Edges
Under certain circumstances, this option can remove the transparent gaps that may appear on the
edges of interpolated frames. Clamp Edges will cause a stretching artifact near the edges of the frame
that is especially visible with objects moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use Clamp Edges only to correct small gaps around the
edges of an interpolated frame.
Edge Softness
Helps to reduce the stretchy artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this can
lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
Stack Mode
In Stack Mode, L and R outputs will output the same image.
If High Quality is off, the interpolations are done using nearest-neighbor sampling, leading to a more
“noisy” result. To ensure High Quality is enabled, right-click under the viewers, near the transport
controls, and choose High Quality from the pop-up menu.
Swap Eyes
Allows you to easily swap the left and right eye outputs.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Example
Z To Disparity [ZD]
NOTE: The Z To Disparity node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The three inputs on the Z To Disparity node are used to connect the left and right images and a
camera node.
– Left Input: The orange input is used to connect either the left eye image or the stack image.
– Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
– Stereo Camera: The magenta input is used to connect a stereo perspective camera, which may
be either a Camera 3D with eye separation, or a tracked L/R Camera 3D.
Outputs
Unlike most nodes in Fusion, Z To Disparity has two outputs for the left and right eye.
– Left Output: This outputs the left eye image containing a new disparity channel, or a Stacked
Mode image with a new disparity channel.
– Right Output: This outputs the right eye image with a new disparity channel. This output is
visible only if Stack Mode is set to Separate.
A Z To Disparity node takes an image with a Z channel and creates a disparity channel
Controls Tab
The Controls tab includes settings that refine the conversion algorithm.
Refine Disparity
This refines the Disparity map based on the RGB channels.
Strength
Increasing this slider does two things. It smooths out the depth in constant color regions and moves
edges in the Z channel to correlate with edges in the RGB channels. Increasing the refinement has the
undesirable effect of causing texture in the color channel to show up in the Z channel. You will want to
find a balance between the two.
Radius
This is the pixel-radius of the smoothing algorithm.
Stack Mode
In Stack Mode, L and R outputs will output the same image.
If HiQ is off, the interpolations are done using nearest-neighbor sampling, leading to a more
“noisy” result.
Swap Eyes
This allows you to easily swap the left and right eye outputs.
Camera Mode
If you need correct real-world disparity values because you are trying to match some effect to an
existing scene, you should use the External setting to get precise disparity values back. When External
is selected, a magenta camera input is available on the node to connect an existing stereo Camera 3D
node, and use the Camera settings to determine the Disparity settings.
If you just want any disparity and do not particularly care about the exact details of how it is offset and
scaled, or if there is no camera available, then the Artistic setting might be helpful.
Camera
If you connect a Merge 3D node that contains multiple cameras to the camera input, the Camera menu
allows you to select the camera to use.
If you do not have a camera, you can adjust the artistic controls to produce a custom disparity channel
whose values will not be physically correct but will be good enough for compositing hacks. There are
two controls to adjust:
Convergence Point
This is the Z value of the convergence plane. This corresponds to the negative of the Convergence
Distance control that appears in Camera 3D. At this distance, objects in the left and right eyes are at
exactly the same position (i.e., have zero disparity).
Objects that are closer appear to pop out of the screen, and objects that are further appear behind
the screen.
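For illustration, a generic parallel-camera pinhole model gives a disparity that is zero at the convergence distance and grows as objects move away from it. This is a textbook approximation only, not Fusion's exact conversion, which derives its mapping from the connected Camera 3D's eye separation, focal length, and aperture:

```python
# Sketch of a pinhole-camera approximation relating scene depth to stereo
# disparity: zero at the convergence plane, nonzero elsewhere. A generic
# textbook formula for illustration -- not the exact math used by the
# Z To Disparity node.

def z_to_disparity(z, convergence, eye_separation, focal_px):
    """Approximate disparity in pixels for a point at depth z.

    Positive values here mean the point appears behind the screen,
    negative values in front of it (sign conventions vary)."""
    return focal_px * eye_separation * (1.0 / convergence - 1.0 / z)

print(z_to_disparity(500.0, 500.0, 6.5, 1000.0))   # at convergence: 0.0
print(z_to_disparity(1000.0, 500.0, 6.5, 1000.0))  # behind the screen: positive
print(z_to_disparity(250.0, 500.0, 6.5, 1000.0))   # in front of the screen: negative
```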
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail in the following “The Common Controls” section.
The Common Controls
Settings Tab
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels” in the Fusion Reference Manual or Chapter 79 in the DaVinci Resolve
Reference Manual.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Tracker Nodes
This chapter details the Tracker nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Tracker [Tra]
Planar Tracker Node [PTra]
Planar Transform Node [PXF]
Camera Tracker [CTra]
The Common Controls
Tracker [Tra]
Inputs
The Tracker has three inputs:
– Background: The orange image input accepts the main 2D image to be tracked.
– Foreground: The optional green foreground accepts a 2D image to be merged on top of the
background as a corner pin or match move.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the tracking to
certain areas.
When used as an offshoot from the node tree, an image can be tracked and then that tracking data is
published for use on another node somewhere else in the node tree. The output of the Tracker does
not need to connect to another node. The tracking data is published and can be used via the Connect
To contextual menu.
The Tracker can also work as a replacement for a Merge tool in match-moving setups. Below, the
Tracker tracks the image connected to the orange background input and applies the tracking data to
the image connected to the foreground input. The same foreground-over-background merge
capabilities are available in the Tracker node.
Pattern Rectangle
In the viewer, the tracker displays a solid-line red rectangle called the pattern rectangle. Every pixel
within the rectangle makes up the pattern used for tracking. You can resize the pattern if necessary by
dragging on the rectangle’s borders.
Dragging the handle magnifies the pattern rectangle for precise placement.
TIP: There is no limit to the number of trackers that can be used in one composition, or the
number of objects that use the tracking data. There is also no limit to the number of patterns
that can be tracked by a single Tracker node. This chapter serves as a reference for the
various controls in the Tracker, but we strongly suggest you read the more general
information in Chapter 22, “Using the Tracking Node” in the Fusion Reference Manual or
Chapter 83 in the DaVinci Resolve Reference Manual.
The Tracker can be employed in two forms: as a node in the Node Editor or as a modifier
attached to a parameter. When used as a node in the Node Editor, the image tracked comes
from the input to the Tracker node. When used as a modifier, controls appear in the Modifiers
tab for the node with the connected control. Tracker Modifiers can track only one pattern, but
the image source can come from anywhere in the composition. Use this technique when
tracking a quick position for an element.
Trackers Tab
The Trackers tab contains controls for creating, positioning, and initiating tracking operations. After
tracking, offset controls are used to improve the alignment of the image following the track.
Track Buttons
There are four buttons to initiate tracking, and one button in the middle used to stop a track in
progress. These buttons can track the current pattern forward or backward in time. Holding the pointer
over each button displays a tooltip with the name of each button.
The buttons operate as follows:
– Track Reverse: Clicking this button causes all active trackers to begin tracking, starting at the end
of the render range and moving backward through time until the beginning of the render range.
– Track Reverse From Current Time: Clicking this button causes all active trackers to begin
tracking, starting at the current frame and moving backward through time until the beginning of
the render range.
– Stop Tracking: Clicking this button or pressing ESC stops the tracking process immediately. This
button is active only when tracking is in process.
– Track Forward From Current Time: Clicking this button causes all active trackers to begin
tracking, starting at the current frame and moving forward through time until the end of the render
range.
– Track Forward: Clicking this button causes all active trackers to begin tracking, starting at the first
frame in the render range and moving forward through time until the end of the render range.
TIP: If the project is field rendered, a value of 1 sets a keyframe on every field. Since the
Tracker is extremely accurate, this will result in a slight up-and-down jittering due to the
position of the fields. For better results when tracking interlaced footage in Field mode, set
the Frames Per Path Point slider to a value of 2, which results in one keyframe per frame of
your footage.
Adaptive Mode
Fusion is capable of reacquiring the tracked pattern, as needed, to help with complex tracks. This
menu determines the Adaptive tracking method.
– None: When set to None, the tracker searches for the original pattern in each frame.
– Every Frame: When set to Every Frame, the tracker reacquires the pattern every frame. This helps
the Tracker compensate for gradual changes in profile and lighting over time.
– Best Match: When set to Best Match, the tracker compares the original selected pattern to the
pattern acquired at each frame. If the variation between the two patterns exceeds the threshold
amount defined by the Match Tolerance control, the tracker does not reacquire the pattern on that
frame. This helps to avoid Tracker drift caused by transient artifacts that cross the pattern’s path
(such as a shadow).
Path Center
This menu determines how the Tracker behaves when repositioning a pattern. This menu is particularly
useful when a pattern leaves the frame or changes so significantly that it can no longer be tracked.
– Pattern Center: When Pattern Center is selected in the menu, the tracked path continues from the
center of the new pattern. This is appropriate when replacing an existing path entirely.
– Track Center (append): When Track Center (append) is selected in the menu, the path tracked by
a new pattern will be appended to the existing path. The path created is automatically offset by
the required amount. This setting is used to set a new tracking pattern when the original pattern
moves out of the frame or gets obscured by other objects. This technique works best if the new
pattern is located close to the position of the original pattern to avoid any problems with parallax
or lens distortion.
Tracker List
A Tracker node can track multiple patterns. Each tracker pattern created in the current Tracker node is
managed in the Tracker List.
Tracker List
The Tracker List shows the names of all trackers created.
– Each tracker pattern appears in the list by name, next to a small checkbox. Clicking the name of
the tracker pattern will select that tracker pattern.
Tracker States
– Enabled (black checkbox): An enabled pattern will re-track each time the track is initiated. Its
path data is available for use by other nodes, and the data is available for Stabilization and Corner
Positioning.
– Suspended (white circle): A Suspended pattern does not re-track when the track is initiated.
The data is locked to prevent additional changes. The data from the path is still available for
other nodes, and the data is available for advanced Tracking modes like Stabilization and Corner
Positioning.
– Disabled (clear): A Disabled pattern does not create a path when tracking is initialized, and its
data is not available to other nodes or for advanced Tracking operations like Stabilization and
Corner Positioning.
Add/Delete Tracker
Use these buttons to add or delete trackers from your Tracker List.
Show
This menu selects which controls are displayed in the lower half of the Inspector. It does not affect
the operation of the tracker; it only affects what is shown in the interface.
– Selected Tracker Details: When Selected Tracker Details is chosen, the controls displayed pertain
only to the currently selected tracker. You will have access to the Pattern window and the Offset
sliders.
– All Trackers: When All Trackers is selected, the pattern window for each of the added tracking
patterns is displayed simultaneously below the Tracker List.
Under normal circumstances, the pattern display shows the channel selected for tracking. For
example, if the selected channel is blue, a grayscale representation of the pattern's blue channel
appears. To display the pattern in full color instead, click the Show Full Color button beneath the
pattern display rather than the Show Selected Channel button.
TIP: Because Fusion looks for the channel with the highest contrast automatically, you might
end up tracking a noisy but high-contrast channel. Before tracking, it’s always a good idea to
zoom in to your footage and check the RGB channels individually.
Tracker Sizes
In addition to onscreen controls, each tracker has a set of sizing parameters that let you adjust the
pattern and search box.
– Pattern Width and Height: Use these controls to adjust the width and height of the selected
tracker pattern manually. The size of the tracker pattern can also be adjusted in the viewer, which
is the normal method, but small adjustments are often easier to accomplish with the precision of
manual controls.
– Search Width and Height: The search area defines how far Fusion will look in the image from
frame to frame to reacquire the pattern during tracking. As with the Pattern Width and Height, the
search area can be adjusted in the viewer, but you may want to make small adjustments manually
using these controls.
Tracked Center
This positional control indicates the position of the tracker’s center. To remove a previously tracked
path from a tracker pattern, right-click this parameter and select Remove Path from the
contextual menu.
Operation Tab
While the Trackers tab controls let you customize how the Tracker node analyzes motion to create
motion paths, the Operation tab puts the analyzed motion data to use, performing image transforms of
various kinds.
The Tracker node is capable of performing a wide variety of functions, from match moving an object
into a moving scene, smoothing out a shaky camera movement, or replacing the content of a sign. Use
the options and buttons in the Operation tab to select the function performed by the Tracker node.
Operation Menu
The Operation menu contains four functions performed by the Tracker. The remaining controls in this
tab fine-tune the result of this selection.
– None: The Tracker performs no additional operation on the image beyond merely locating and
tracking the chosen pattern. This is the default mode, used to create a path that will then drive
another parameter on another node.
– Match Move: The Tracker transforms either the background image or a foreground image merged
over the background to follow the tracked motion. With the motion inverted, this mode is also
used to stabilize a shaky background.
– Corner Positioning: Four tracked patterns are mapped to the four corners of the foreground
image, typically to replace signs or other rectangular regions.
– Perspective Positioning: Four tracked patterns are used to remove or apply perspective
distortion based on the tracked region.
Merge
The Merge control determines what is done (if anything) with the image provided to the green
Foreground input of the Tracker. This menu appears when the operation is set to anything other
than None.
– BG Only: The foreground input is ignored; only the background is affected. This is used primarily
when stabilizing the background image.
– FG Only: The foreground input is transformed to match the movement in the background, and this
transformed image is passed through the Tracker’s output. This Merge technique is used when
match moving one layer’s motion to another layer’s motion.
– FG Over BG: The foreground image is merged over the background image, using the Merge
method described by the Apply Mode control that appears.
– BG Over FG: The background is merged over the foreground. This technique is often used when
tracking a layer with an Alpha channel so that a more static background can be applied behind it.
– Operator Modes: This menu is used to select the Operation Mode of the merge. It determines
how the foreground and background are combined to produce a result. This drop-down menu is
visible only when the Merge node’s Apply Mode is set to either Normal or Screen.
NOTE: For an excellent description of the math underlying the Operation modes, read
“Compositing Digital Images,” Porter, T., and T. Duff, SIGGRAPH 84 proceedings, pages
253-259. Essentially, the math is as described below.
The formula used to combine pixels in the merge is always fg * x + bg * y. The different operations
determine exactly what x and y are, as shown in the description for each mode.
The Operator modes are as follows:
– Over: The Over mode adds the foreground layer to the background layer by replacing the
pixels in the background with the pixels from the foreground wherever the foreground’s alpha
channel is greater than 0.
x = 1, y = 1-[foreground Alpha]
– In: The In mode multiplies the alpha channel of the background input against the pixels in
the foreground. The color channels of the foreground input are ignored. Only pixels from the
foreground are seen in the final output. This essentially clips the foreground using the mask
from the background.
x = [background Alpha], y = 0
– Held Out: Held Out is essentially the opposite of the In operation. The pixels in the foreground
image are multiplied against the inverted alpha channel of the background image. You can
accomplish exactly the same result using the In operation and a Matte Control node to invert
the matte channel of the background image.
x = 1-[background Alpha], y = 0
– ATop: ATop places the foreground over the background only where the background has
a matte.
x = [background Alpha], y = 1-[foreground Alpha]
– XOr: XOr combines the foreground with the background wherever either the foreground or the
background have a matte, but never where both have a matte.
x = 1-[background Alpha], y = 1-[foreground Alpha]
Subtractive/Additive
In most software applications, you will find the Additive/Subtractive option displayed as a simple
checkbox. Fusion lets you blend between the Additive and Subtractive versions of the merge
operation, which is occasionally useful for dealing with problem composites with edges that are
calling attention to themselves as too bright or too dark.
For example, using a Subtractive setting on a premultiplied image may result in darker edges.
Using an Additive setting with a non-premultiplied image may result in lightening the edges.
Edges
This menu selects how the revealed edges are handled when the image is moved to match position
and scaling.
– Black Edges: Out-of-frame edges revealed by Stabilization are left black.
– Wrap: Portions of the image moved off frame to one side are used to fill edges that are revealed
on the opposite side.
– Duplicate: The last valid pixel on an edge is repeated to the edge of the frame.
– Mirror: Image pixels are mirrored to fill to the edge of the frame.
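The Edges modes above amount to different ways of addressing pixels that fall outside the frame. A minimal sketch of that addressing logic on a single row of pixels (illustrative only; Fusion's actual resampling and filtering are more involved):

```python
# Sketch of how the Edges modes map an out-of-frame sample coordinate
# back onto a row of pixels. Illustrative addressing logic only -- not
# Fusion's resampler.

def sample_edge(row, x, mode):
    n = len(row)
    if 0 <= x < n:
        return row[x]
    if mode == "Black Edges":
        return 0                            # revealed edges stay black
    if mode == "Wrap":
        return row[x % n]                   # fill from the opposite side
    if mode == "Duplicate":
        return row[min(max(x, 0), n - 1)]   # repeat the last valid pixel
    if mode == "Mirror":
        period = 2 * n - 2                  # reflect about the borders
        x = x % period
        return row[x if x < n else period - x]
    raise ValueError(mode)

row = [10, 20, 30, 40]
print([sample_edge(row, x, "Duplicate") for x in (-2, 5)])  # [10, 40]
print([sample_edge(row, x, "Wrap") for x in (-1, 4)])       # [40, 10]
print([sample_edge(row, x, "Mirror") for x in (-1, 4)])     # [20, 30]
```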
Mapping Type
The Mapping Type control appears only in the Corner Positioning mode. There are two options
in the menu:
– Bi-Linear: The first method is Bi-Linear, where the foreground image is mapped into the
background without any attempt to correct for perspective distortion. This is identical to how
previous versions of Fusion operated.
– Perspective: The second method corrects for the perspective distortion implied by the four
tracked corners when mapping the foreground into the background.
Stabilize Settings
The Tracker node automatically outputs several steady and unsteady position outputs to which other
controls in the Node Editor can be connected. The Stable Position output provides X and Y
coordinates to match or reverse motion in a sequence. These controls are available even when the
operation is not set to Match Move, since the Stable Position output is always available for connection
to other nodes.
Pivot Type
The Pivot type menu determines how the anchor point for rotation is selected.
– Tracker Average: Averages the location based on the tracking points.
– Selected Tracker: Provides a menu where one of the current trackers can be selected
as the pivot point.
– Manual: Displays X and Y position number fields where you can manually position the pivot points.
Reference
The Reference mode determines the “snapshot frame” based on the frame where the pattern is first
selected. All Stabilization is intended to return the image back to that reference.
– Select Time: Lets you select the current frame.
– Start: The Snapshot Frame is determined to be the first frame in the tracked path. All Stabilization
is intended to return the image back to that reference.
– Start and End: The Start and End Reference mode is somewhat different from all other Reference
modes. Where the others are intended to take a snapshot frame to which all stabilization returns,
immobilizing the image, the Start and End mode is intended to smooth existing motion, without
removing it. This mode averages the motion between the Start and End of the path, drawing a
straight line between those points.
When this mode is active, it reveals the Reference Intermediate Points control. Increasing the value
of this control increases the number of points in the path used by the Reference, smoothing the
motion from a straight line between Start and End without making it wholly linear.
– End: The Snapshot Frame is determined to be the last frame in the tracked path. All Stabilization is
intended to return the image back to that reference.
Enlargement Scale
The zoom factor that is used when positioning the pattern rectangle when the above option is
activated.
TIP: The outputs of a tracker (seen in the Connect to… menu) can also be used by
scripts. They are:
– SteadyPosition: Steady Position
– UnsteadyPosition: Unsteady Position
– SteadyAxis: Steady Axis
– SteadySize: Steady Size
– UnsteadySize: Unsteady Size
– SteadyAngle: Steady Angle
– UnsteadyAngle: Unsteady Angle
– Position1: Tracker 1 Offset position
– PerspectivePosition1: Tracker 1 Perspective Offset position
– PositionX1: Tracker 1 Offset X position (3D Space)
– PositionY1: Tracker 1 Offset Y position (3D Space)
– PerspectivePositionX1: Tracker 1 Perspective Offset X position (3D Space)
– PerspectivePositionY1: Tracker 1 Perspective Offset Y position (3D Space)
– SteadyPosition1: Tracker 1 Steady Position
– UnsteadyPosition1: Tracker 1 Unsteady Position (likewise for the 2nd, 3rd, and so on)
TIP: Part of using a Planar Tracker is also knowing when to give up and fall back to using
Fusion’s Tracker node or to manual keyframing. Some shots are simply not trackable, or the
resulting track suffers from too much jitter or drift. The Planar Tracker is a useful time-saving
node in the artist’s toolbox, but while it may track most shots, it is not a 100% solution.
Inspector
Controls Tab
The Controls tab contains controls for determining how the Planar Tracker will be used, setting the
reference frame and initiating the track.
Operation Mode
The Operation Mode menu selects the purpose of the Planar Tracker node. The Planar Tracker has
four modes of operation:
– Track: Used to isolate a planar surface and track its movement over time. Then, you can create a
Planar Transform node that uses this data to match move another clip in various ways.
– Steady: After analyzing a planar surface, this mode removes all motion and distortions from the
planar surface, usually in preparation for some kind of paint or roto task, prior to “unsteadying” the
clip to restore the motion.
– Corner Pin: After analyzing a planar surface, this mode computes and applies a matching
perspective distortion to a foreground image you connect to the foreground input of the Planar
Tracker node, and merges it on top of the tracked footage.
– Stabilize: After analyzing a planar surface, this mode allows smoothing of a clip’s translation,
rotation, and scale over time. This is good for getting unwanted vibrations out of a clip while
retaining the overall camera motion that was intended.
NOTE: None of the operations can be combined together. For example, both Corner Pin and
Stabilize cannot be done at the same time, nor can a track be done while in corner
pinning mode.
Reference Time
The Reference Time determines the frame where the pattern is outlined. It is also the time from which
tracking begins. The reference frame cannot be changed once it has been set without destroying all
pre-existing tracking information, so scrub through the footage to be tracked and choose the
reference frame carefully to get the best possible quality track.
You choose a reference frame by moving the playhead to an appropriate frame and then clicking the
Set button to choose that frame.
Pattern Polygon
You specify which region of the image you want to track by drawing a polygon on the reference frame.
Typically, when you first add a Planar Tracker node, you are immediately ready to start drawing a
polygon in the viewer, so it’s best to do this right away. When choosing where to draw a polygon, make
sure the region selected belongs to a physically planar surface in the shot. In a pinch, a region that is
only approximately planar can be used, but the less planar the surface, the poorer the quality of the
resulting track.
As a rule of thumb, the more pixels in the pattern, the better the quality of the track. In particular, this
means the reference frame pattern should be:
– As large as possible.
– As much in frame as possible.
– As unoccluded as possible by any moving foreground objects.
– At its maximal size (e.g., when tracking an approaching road sign, it is good to pick a later frame
where it is 400 x 200 pixels big rather than 80 x 40 pixels).
– Relatively undistorted (e.g., when the camera orbits around a flat stop sign, it is better to pick a
frame where the sign is face-on parallel to the camera rather than a frame where it is at a highly
oblique angle).
If the pattern contains too few pixels or not enough trackable features, this can cause problems with
the resulting track, such as jitter, wobble, and slippage. Sometimes dropping down to a simpler motion
type can help in this situation.
After you’ve drawn a pattern, a set of Pattern parameters lets you transform and invert the resulting
polygon, if necessary.
Track Mode
Track mode is unlike the other three options in the Operation menu in that it is the only option that
initiates the planar tracking. The other modes use the tracking data generated by the Track mode.
Tracker
There are two available trackers to pick from:
– Point: Tracks points from frame to frame. Internally, this tracker does not actually track points per
se but rather small patterns, like Fusion’s Tracker node. The point tracker possesses the ability
to automatically create its internal occlusion mask to detect and reject outlier tracks that do not
belong to the dominant motion. Tracks are colored green or red in the viewer, depending on
whether the point tracker thinks they belong to the dominant motion or they have been rejected.
The user can optionally supply an external occlusion mask to further guide the Point tracker.
– Hybrid Point/Area: Uses an area-based tracker in combination with point tracking. It is slower
than the Point tracker but can handle shots with few distinct trackable features.
In general, it is recommended to first quickly track the shot with the Point tracker and examine the
results. If the results are not good enough, then try the Hybrid tracker.
Motion Type
Determines how the Planar Tracker internally models the distortion of the planar surface being tracked.
The five distortion models are:
– Translation.
– Translation, Rotation (rigid motions).
– Translation, Rotation, Scale (takes squares to squares, scale is uniform in x and y).
– Affine includes translation, rotation, scale, skew (maps squares to parallelograms).
– Perspective (maps squares to generic quadrilaterals).
Each successive model is more general and includes all previous models as a special case.
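The model hierarchy can be sketched with 3 x 3 homogeneous matrices. The matrix values below are purely illustrative, not anything Fusion exposes; the sketch only shows why each model generalizes the previous one.

```python
import numpy as np

# Hypothetical 3x3 homogeneous matrices illustrating the model hierarchy.
# Each more general model contains the previous ones as a special case.
translation = np.array([[1.0, 0.0, 0.2],
                        [0.0, 1.0, 0.1],
                        [0.0, 0.0, 1.0]])
affine      = np.array([[1.1, 0.3, 0.2],   # adds rotation/scale/skew
                        [0.0, 0.9, 0.1],
                        [0.0, 0.0, 1.0]])
perspective = affine.copy()
perspective[2, :2] = [0.2, 0.1]            # nonzero bottom row => perspective

def apply(H, pts):
    """Map 2D points through a 3x3 matrix with a homogeneous divide."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
# Translation keeps the square a square; affine maps it to a parallelogram;
# perspective yields a generic quadrilateral.
print(apply(perspective, square))
```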
When in doubt, choose Perspective for the initial track attempt. If the footage being tracked has
perspective distortions in it, and the Planar Tracker is forced to work with a simpler motion type, this
can end up causing the track to slide and wobble.
Sometimes with troublesome shots, it can help to drop down to a simpler motion model—for example,
when many track points are clustered on one side of the tracked region or when tracking a small
region where there are not many trackable pixels.
Output
Controls what is output from the Planar Tracker node while in the Track operation mode.
– Background: Outputs the input image unchanged.
– Background - Preprocessed: The Planar Tracker does various types of preprocessing on the input
image (e.g., converting it to luma) before tracking. It can be useful to see this when deciding which
track channel to choose.
– Mask: Outputs the pattern as a black and white mask.
– Mask Over Background: Outputs the pattern mask merged over the background.
Track Channel
Determines which image channel in the background image is tracked. It is good to pick a channel with
high contrast, lots of trackable features, and low noise. Allowed values are red, green, blue, and
luminance.
Show Splines
This button to the right of the “Trim to end” button opens the Spline Editor and shows the splines
associated with the Planar Tracker node. This can be useful for manually deleting points from the Track
and Stable Track splines.
Steady Mode
In Steady mode, the Planar Tracker transforms the background plate to keep the pattern as motionless
as possible. Any leftover motion is because the Planar Tracker failed to follow the pattern accurately or
because the pattern did not belong to a physically planar surface.
Steady mode is not very useful for actual stabilization, but is useful for checking the quality of a track. If
the track is good, during playback the pattern should not move at all while the rest of the background
plate distorts around it. It can be helpful to zoom in on parts of the pattern and place the mouse cursor
over a feature and see how far that feature drifts away from the mouse cursor over time.
Steady Time
This is the time where the pattern’s position is snapshotted and frozen in place. It is most common to
set this to the reference frame.
Clipping Mode
Determines what happens to the parts of the background image that get moved off frame by the
steady transform:
– Domain: The off frame parts are kept.
– Frame: The off frame parts are thrown away.
Domain mode is useful when Steady mode is being used to “lock” an effect to the pattern.
As an example, consider painting on the license plate of a moving car. One way to do this is to use a
Planar Tracker node to steady the license plate, then a Paint node to paint on the license plate, and
then a second Planar Tracker to undo the steady transform. If the Clipping mode is set to Domain, the
off frame parts generated by the first Planar Tracker are preserved so that the second Planar Tracker
can, in turn, map them back into the frame.
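The steady/paint/unsteady round trip works because the second Planar Tracker applies the inverse of the first one's transform. A minimal sketch, using a hypothetical per-frame track matrix T (the values are made up for illustration):

```python
import numpy as np

# A hypothetical per-frame track matrix T mapping reference-frame
# coordinates to the pattern's position on the current frame.
T = np.array([[1.05,  0.02, 12.0],
              [-0.01, 0.98, -7.0],
              [0.0,   0.0,   1.0]])

steady   = np.linalg.inv(T)   # first Planar Tracker: hold the pattern still
unsteady = T                  # second Planar Tracker: restore the motion

# Paint happens in the steadied space; the round trip is the identity,
# so painted pixels land back exactly on the moving plate.
round_trip = unsteady @ steady
print(np.allclose(round_trip, np.eye(3)))  # True
```

This is also why Domain clipping matters: the inverse transform can only map off-frame pixels back into frame if the first node kept them.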
Merge Mode
Controls how the foreground (the corner pinned texture) is merged over the background (the tracked
footage). If there are multiple corner pins, this option is shared by all of them. There are four options to
pick from:
– BG only
– FG only
– FG over BG
– BG over FG
Stabilize Mode
Stabilize mode is used to smooth out shakiness in the camera by applying a transform that partially
counteracts the camera shake. This stabilizing transform (contained in the Stable Track spline) is
computed by comparing neighboring frames.
NOTE: Stabilize mode only smooths out motions, while Steady mode tries to completely “lock
off” all motion.
Be aware that the Planar Tracker stabilizes based on the motion of the pattern, so it is essential to
choose the pattern carefully. If the motion of the pattern does not represent the motion of the camera,
then there may be unexpected results. For example, if tracking the side of a moving truck and the
camera is moving alongside it, the Planar Tracker smooths the combined motion of both the truck and
the mounted camera. In some cases, this is not the desired effect. It may be better to choose the
pattern to be on some fixed object like the road or the side of a building, which would result in
smoothing only the motion of the camera.
One unavoidable side effect of the stabilization process is that transparent edges appear along the
edges of the image. These edges appear because the stabilizer does not have any information about
what lies off frame, so it cannot fill in the missing bits. The Planar Tracker node offers the option to
either crop or zoom away these edges. When filming, if the need for post-production stabilization is
anticipated, it can sometimes be useful to film at a higher resolution (or lower zoom).
Parameters to Smooth
Specify which of the following parameters to smooth:
– X Translation
– Y Translation
– Rotation
– Scale
Compute Stabilization
Clicking this button runs the stabilizer, overwriting the results of any previous stabilization. As soon as
the stabilization is finished, the output of the Planar Tracker node will be immediately updated with the
stabilization applied.
NOTE: The stabilizer uses the Track spline (created by the tracker) to produce the Stable
Track spline. Both of these splines’ keyframes contain 4 x 4 matrices, and the keyframes are
editable in the Spline Editor.
Clipping Mode
Determines what happens to the parts of the background image that get moved off frame by the
stabilization:
– Domain: The off frame parts are kept.
– Frame: The off frame parts are thrown away.
Frame Mode
This controls how transparent edges are handled. The available options include:
– Full: Do nothing. Leaves the transparent edges as is.
– Crop: Crops away the transparent edges. When this option is selected, the size of Planar Tracker’s
output image is smaller than the input image. No image resampling occurs. In Crop mode, use
the Auto Crop button or manually adjust the crop window by changing the X Offset, Y Offset, and
Scale sliders.
– Auto Crop Button: When this button is clicked, the Planar Tracker will examine all the frames
and pick the largest possible crop window that removes all the transparent edges. The
computed crop window will always be centered in frame and pixel aligned. When clicked, Auto
Crop updates the X/Y Offset and Scale sliders.
– Zoom: Scales the image bigger until the transparent edges are off frame. Choosing this option
causes an image resampling to occur. The downside of this approach is that it reduces the quality
(slightly softens) of the output image. In Zoom mode, use the Auto Zoom button or manually adjust
the zoom window by changing the X Offset, Y Offset, and Scale sliders.
– Auto Zoom: When this button is clicked, the Planar Tracker will examine all the frames and pick
the smallest possible zoom factor that removes all the transparent edges. The computed zoom
window will always be centered in frame. When clicked, Auto Zoom updates the X/Y Offset and
Scale sliders.
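The idea behind Auto Zoom can be sketched as follows. The per-frame transparent border widths below are hypothetical, and the formula assumes a zoom centered on the frame, as the text describes:

```python
# Hypothetical per-frame transparent border widths, as fractions of the
# frame dimension on each side (left, right, bottom, top).
borders = [
    (0.02, 0.00, 0.01, 0.03),
    (0.00, 0.04, 0.02, 0.00),
    (0.01, 0.01, 0.00, 0.02),
]

def min_zoom(borders):
    """Smallest centered zoom that pushes every transparent edge off frame."""
    zoom = 1.0
    for l, r, b, t in borders:
        # A centered zoom by z hides a border of width w when z*(1 - 2w) >= 1,
        # so each frame demands z >= 1 / (1 - 2 * max(side borders)).
        w = max(l, r, b, t)
        zoom = max(zoom, 1.0 / (1.0 - 2.0 * w))
    return zoom

print(round(min_zoom(borders), 4))
```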
Options Tab
These controls affect the look of onscreen controls in the viewer.
Darken Image
Darkens the image while in Track mode in order to better see the controls and tracks in the viewer.
The Shift+D keyboard shortcut toggles this.
Show Trails
Toggles the display of the trails following the location of trackers.
Trail Length
Allows changing the length of tracker trails. If the pattern is moving very slowly, increasing the length
can sometimes make the trails easier to follow in the viewer. If the pattern is moving very fast, the
tracks can look like spaghetti in the viewer. Decreasing the length can help.
Inlier/Outlier Colors
When tracking, the tracker analyzes the frame and detects which of the multitudinous tracks belong to
the dominant motion and which ones represent anomalous, unexplainable motion. By default, tracks
belonging to the dominant motion are colored green (and are called inliers) and those that do not
belong are colored red (and are called outliers). Only the inlier tracks are used when computing the
final resulting track.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Tracking nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
The Planar Transform node applies perspective distortions generated by a Planar Tracker node onto
any input mask or masked image. The Planar Transform node can be used to reduce the amount of
time spent on rotoscoping objects. The workflow here centers around the notion that the Planar
Tracker node can be used to track objects that are only roughly planar. After an object is tracked, a
Planar Transform node can then be used to warp a rotospline, making it approximately follow the
object over time. Fine-level cleanup work on the rotospline must then be done.
Depending on how well the Planar Tracker followed the object, this can result in substantial time
savings in the amount of tedious rotoscoping. The key to using this technique is recognizing situations
where the Planar Tracker performs well on an object that needs to be rotoscoped.
A rough outline of the workflow involved is:
1 Track: Using a Planar Tracker node, select a pattern that represents the object to be rotoscoped.
Track the shot (see the tracking workflow in the Track section for the Planar Tracker node).
2 Create a Planar Transform node: Press the Create Planar Transform button on the Planar Tracker
node to do this. The newly created Planar Transform node can be freely cut and pasted into
another composition as desired.
3 Rotoscope the object: Move to any frame that was tracked by the Planar Tracker. When unsure if a
frame was tracked, look in the Spline Editor for a tracking keyframe on the Planar Transform node.
Connect a Polygon node into the Planar Transform node. While viewing the Planar Transform
node, rotoscope the object.
4 Refine: Scrub the timeline to see how well the polygon follows the object. Adjust the polyline on
frames where it is off. It is possible to add new points to further refine the polygon.
Inputs
The Planar Transform has only two inputs:
– Image Input: The orange image input accepts a 2D image on which the transform will be
applied.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the output of
the Planar Transform to certain areas.
Inspector
Controls Tab
The Planar Transform node has very few controls, and they are all located in the Controls tab. It is
designed to apply the analyzed planar tracking data as a match move.
Reference Time
This is the reference time that the pattern was taken from in the Planar Tracker node used to produce
the Planar Transform.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Tracking nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
3D objects composited onto the video clip use the Camera Tracker data to remain
aligned with the objects in the frame as the image moves.
Inspector
Track Tab
The Track tab contains the controls you need to set up an initial analysis of the scene.
Auto Track
Automatically detects trackable features and tracks them through the source footage. Tracks will be
automatically terminated when the track error becomes too high, and new tracks are created as
needed. The values of the Detection Threshold and Minimum Feature Separation sliders can be used
to control the number and distribution of auto tracks.
Detection Threshold
Determines the sensitivity used to detect trackable features. Automatically generated tracks are
assigned to features in the shot; a higher Detection Threshold restricts tracks to high-contrast
locations, while a lower threshold also permits tracks in areas of lower contrast.
Track Channel
Used to nominate a color channel to track: red, green, blue, or luminance. When nominating a channel,
choose one that has a high level of contrast and detail.
Track Range
Used to determine which frames are tracked:
– Global: The global range, which is the full duration of the Timeline.
– Render: The render duration set on the Timeline.
– Valid: The valid range is the duration of the source media.
– Custom: A user determined range. When this is selected, a separate range slider appears to set
the start and end of the track range.
Bidirectional Tracking
Enabling this forces the tracker to track backward after the initial forward tracking. When tracking
backward, new tracks are not started; rather, existing tracks are extended backward in time. It is
recommended to leave this option on, as long tracks help produce better solved cameras and
point clouds.
Gutter Size
Trackers can become unstable when they get close to the edge of the image, where they may drift,
jitter, or completely lose their pattern. The Camera Tracker automatically terminates any tracks that
enter the gutter region. Gutter size is given as a percentage of pattern size. By default, it is 100% of
the pattern size, so a pattern size of 0.04 means a gutter of 0.04.
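The gutter test amounts to a simple bounds check. This sketch assumes normalized image coordinates in the range 0 to 1; the function name and defaults are illustrative:

```python
# Sketch of the gutter test, assuming normalized (0..1) image coordinates,
# a pattern size given as a fraction of the image, and a gutter expressed
# as a percentage of that pattern size.
def in_gutter(x, y, pattern_size=0.04, gutter_pct=100.0):
    """Return True if a track at (x, y) has entered the gutter region."""
    g = pattern_size * gutter_pct / 100.0   # e.g., 100% of 0.04 => 0.04
    return x < g or x > 1.0 - g or y < g or y > 1.0 - g

print(in_gutter(0.02, 0.5))   # near the left edge: track is terminated
print(in_gutter(0.5, 0.5))    # safely inside the frame: keeps tracking
```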
Camera Tab
The controls of the Camera tab let you specify the physical aspects of the live-action camera, which
will be used as a starting point when searching for solve parameters that match the real-world camera.
The more accurate the information provided in this section, the more accurate the camera solve.
The Camera tab includes controls relating to the lens and gate aspects of the camera being solved for.
Focal Length
Specify the known constant focal length used to shoot the scene or provide a guess if the Refine
Focal Length option is activated in the Solve tab.
Film Gate
Choose a film gate preset from the drop-down menu or manually enter the film back size in the
Aperture Width and Aperture Height inputs. Note that these values are in inches.
Aperture Width
In the event that the camera used to shoot the scene is not in the preset drop-down menu, manually
enter the aperture width (inches).
Aperture Height
In the event that the camera used to shoot the scene is not in the preset drop-down menu, manually
enter the aperture height (inches).
Center Point
This is where the camera lens is aligned to the camera. The default is (0.5, 0.5), which is the middle of
the sensor.
Solve Tab
Solve
Pressing Solve will launch the solver, which uses the tracking information and the camera
specifications to generate a virtual camera path and point cloud, approximating the motion of the
physical camera in the live-action footage. The console will automatically open, displaying the
progress of the solver.
Delete
Delete will remove any solved information, such as the camera and the point cloud, but will keep all
the tracking data.
Foreground Threshold
This slider sets the detection threshold for finding tracks on fast-moving objects. The higher the
value, the more forgiving the tracker is of fast motion.
NOTE: When solving for the camera’s motion path, a simulated lens is internally created to
model lens distortion in the source footage. This simulated lens model is much simpler than
real-world lenses but captures the lens distortion characteristics important for getting an
accurate camera solve. Two types of distortion are modeled by Camera Tracker:
Radial Distortion: The strength of this type of distortion varies depending on the distance
from the center of the lens. Examples of this include pincushion, barrel, and mustache
distortion. Larger values correspond to larger lens curvatures. Modeling radial distortion is
especially important for wide angle lenses and fisheye lenses (which will have a lot of
distortion because they capture 180 degrees of an environment and then optically squeeze it
onto a flat rectangular sensor).
Tangential Distortion: This kind of distortion is produced when the camera’s imaging sensor
and physical lens are not parallel to each other. It tends to produce skew distortions in the
footage similar to distortions that can be produced by dragging the corners of a corner pin
within Fusion. This kind of distortion occurs in very cheap consumer cameras and is
practically non-existent in film cameras, DSLRs, and pretty much any kind of camera used in
film or broadcast. It is recommended that it be left disabled.
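The two distortion types can be illustrated with a Brown-Conrady-style model. This is a common textbook formulation, not necessarily the Camera Tracker's internal model, and the coefficients k1, p1, p2 below are illustrative values, not controls the node exposes:

```python
# Illustrative Brown-Conrady-style sketch of the two distortion types.
def distort(x, y, k1=0.1, p1=0.01, p2=0.0):
    """Apply radial (k1) and tangential (p1, p2) distortion to a point
    in normalized lens coordinates centered on the optical axis."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2                        # grows with distance from center
    x_t = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # sensor/lens misalignment terms
    y_t = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x * radial + x_t, y * radial + y_t

# A point on the optical axis is unaffected; off-axis points bend outward
# (barrel-like) for positive k1.
print(distort(0.0, 0.0))
print(distort(0.5, 0.0))
```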
Track Filtering
The Camera Tracker can produce a large number of automatically generated tracks. Rather than
spending a lot of time individually examining the quality of each track, it is useful to have some less
time-intensive ways to filter out large swaths of potentially bad tracks. The following input sliders are
useful for selecting large amounts of tracks based on certain quality metrics, and then a number of
different possible operations can be applied to them. For example, weaker tracks can be selected and
deleted, yielding a stronger set of tracks to solve from. Each filter can be individually enabled
or disabled.
– Delete: Removes the selected tracks from the set. When there are bad tracks, the simplest and
easiest option is to delete them.
– Trim Previous: Cuts the tracked frames from the current frame back to the start of the track.
Sometimes it is more useful to trim a track than to delete it; for example, a high-quality long
track may become inaccurate when the feature it is tracking starts to become occluded or
moves too close to the edge of the image.
– Trim Next: Cuts the tracked frames from the current frame to the end of the track.
– Rename: Replaces the current auto-generated name with a new name.
– Set Color: Allows user-assigned colors for the tracking points.
– Export Flag: Controls whether the locators corresponding to the selected tracks are exported
in the point cloud. By default, all locators are flagged as exportable.
– Solve Weight: By default, all the tracks are used and equally weighted when solving for the
camera’s motion path. The most common use of this option is to set a track’s weight to zero
so it does not influence the camera’s motion path but still has a reconstructed 3D locator.
Setting a track’s weight to values other than 1.0 or 0.0 should only be done by advanced users.
Onscreen display of track names and values is controlled by these functions:
– Show Names: Displays the track name; by default, these are numbers.
– Show Frame Range: Displays the start and end frame of a track.
– Show Solve Error: Displays the amount of solve error for each selected track.
TIP: You can also select tracks directly in the 2D viewer using the mouse, or in the 3D viewer
by selecting their corresponding locators in the point cloud.
Export Tab
The Export tab lets you turn the tracked and solved data this node has generated into a form that can
be used for compositing.
The export of individual nodes can be enabled/disabled in the Export Options tab.
3D Scene Transform
Although the camera is solved, it has no idea where the ground plane or center of the scene is
located. By default, the solver will always place the camera in Fusion’s 3D virtual environment so that
on the first frame it is located at the origin (0, 0, 0) and is looking down the -Z axis. You have the choice
to export this raw scene without giving the Camera Tracker any more information, or you can set the
ground plane and origin to simplify your job when you begin working in the 3D scene. The 3D Scene
Transform controls provide a mechanism to correctly orient the physical ground plane in the footage
with the virtual ground plane in the 3D viewer. Adjusting the 3D Scene Transform does not modify the
camera solve but simply repositions the 3D scene to best represent the position of the live-
action camera.
NOTE: If you export the scene and then make changes in the 3D Scene Transform, it is
important to manually click Update Previous Export to see the results in the exported nodes.
Aligned/Unaligned
The Aligned/Unaligned menu locks or unlocks the origin and ground plane settings. When set to
Unaligned, you can select the ground plane and origin either manually or by selecting locators in the
viewer. When in unaligned mode, a 3D Transform control in the 3D viewer can be manually
manipulated to adjust the origin.
Once alignment of the ground plane and origin has been completed, the section is locked by
switching the menu to Aligned.
To get the best result when setting the ground plane, try to select as many points as possible
belonging to the ground and having a wide separation.
TIP: When selecting points for the ground plane, it is helpful to have the Camera Tracker
node viewed in side-by-side 2D and 3D views. It may be easier to select tracks belonging to
the ground by selecting tracks from multiple frames in the 2D viewer rather than trying to box
select locators in the 3D viewer.
Setting the origin can help you place 3D objects in the scene with more precision. To set the origin,
you can follow similar steps, but only one locator is required for the origin to be set. When selecting a
locator for the origin, select one that has a very low solve error.
– Subdivision Level: Shows how many polygons are in the ground plane.
– Wireframe: Sets whether the ground plane is displayed in 3D as a wireframe or a solid surface.
– Line Thickness: Adjusts how wide the lines are drawn in the viewer.
– Offset: By default, the center of the ground plane is placed at the origin (0, 0, 0). This can be
used to shift the ground plane up and down along the Y axis.
Export Options
Provides a checkbox list of what will be exported as nodes when the Export button is pressed. These
options are Camera, Point Cloud, Ground Plane, Renderer, Lens Distortion, and Enable Image Plane in
the camera.
The Animation menu allows you to choose between animating the camera and animating the point
cloud. Animating the camera leaves the point cloud in a locked position while the camera is keyframed
to match the live-action shot. Animating the point cloud does the opposite. The camera is locked in
position while the entire point cloud is keyframed to match the live-action shot.
Options Tab
The Options tab lets you customize the Camera Tracker’s onscreen controls so you can work most
effectively with the scene material you have.
Trail Length
Displays trail lines of the tracks overlaid on the viewer. The amount of frames forward and back from
the current frame is set by length.
Location Size
In the 3D viewer, the point cloud locators can be sized by this control.
Track Colors, Locator Colors, and Export Colors each have options for setting their color to
one of the following:
– User Assigned
– Solve Error
– Take From Image
– White
Export Colors
Colors of the locators that get exported within the Point Cloud node.
Visibility
Toggles which overlays will be displayed in the 2D and 3D viewers. The options are Tracker Markers,
Trails, Tooltips in the 2D Viewer, Tooltips in the 3D viewer, Reprojected Locators, and Tracker Patterns.
Colors
Sets the color of the overlays.
– Selection Color: Controls the color of selected tracks/locators.
– Preview New Tracks Color: Controls the color of the points displayed in the viewer when the
Preview AutoTrack Locations option is enabled.
– Solve Error Gradient: By default, tracks and locators are colored by a green-yellow-red gradient
to indicate their solve error. This gradient is completely user adjustable.
Reporting
Outputs various parameters and information to the Console.
Workflow
Getting a good solve is an iterative process.
Track > Solve > Refine Filters > Solve > Cleanup tracks > Solve > Cleanup from point cloud >
Solve > Repeat.
Initially, there are numerous tracks, and not all are good, so a process of filtering and cleaning up
unwanted tracks to get to the best set is required. At the end of each cleanup stage, pressing Solve
ideally gives you a progressively lower solve error. This needs to be below 1.0 for it to be good for use
with full HD content, and even lower for higher resolutions. Refining the tracks often but not always
results in a better solve.
False Tracks
False tracks are caused by a number of conditions, such as moving objects in a shot, or reflections
and highlights from a car. There are other types of false tracks like parallax errors where two objects
are at different depths, and the intersection gets tracked. These moiré effects can cause the track to
creep. Recognizing these False tracks and eliminating them is the most important step in the
solve process.
Seed Frames
Two seed frames are used in the solve process. The algorithm chooses two frames that are as far
apart in time as possible while still sharing the same tracks. That is why longer tracks make a more
significant difference in the selection of seed frames.
The two Seed frames are used as the reference frames, which should be from different angles of the
same scene. The solve process will use these as a master starting point to fit the rest of the tracks in
the sequence.
There is an option in the Solve tab to Auto Detect Seed Frames, which is the default setting and most
often a good idea. However, auto detecting seed frames can make for a longer solve. When refining
the Trackers and re-solving, disable the checkbox and use the Seed 1 and Seed 2 sliders to enter the
previous solve’s seed frames. These seed frames can be found in the Solve Summary at the top of the
Inspector after the initial solve.
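The seed-frame idea can be sketched as a search for the widest frame pair that still shares enough tracks. The track table, the minimum-shared count, and the brute-force search below are a simplified stand-in for the solver's actual auto detection:

```python
# Hypothetical track table: track name -> (start frame, end frame).
tracks = {
    "t1": (0, 120), "t2": (10, 90), "t3": (0, 40),
    "t4": (60, 120), "t5": (5, 115),
}

def pick_seed_frames(tracks, min_shared=3, last_frame=120):
    """Pick the two frames farthest apart in time that still share at
    least min_shared tracks; a simplified stand-in for auto detection."""
    best = None
    for a in range(last_frame + 1):
        for b in range(last_frame, a, -1):
            # A track is shared by frames a and b if it spans both.
            shared = sum(1 for s, e in tracks.values() if s <= a and b <= e)
            if shared >= min_shared and (best is None or b - a > best[1] - best[0]):
                best = (a, b)
    return best

print(pick_seed_frames(tracks))
```

Note how the longest tracks (t1, t2, t5) are the ones that make a wide seed pair possible, which is why long tracks matter here.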
Refine Filters
After the first solve, all the Trackers will have extra data generated. These are solve errors and
tracking errors.
Use the refine filters to reduce unwanted tracks, such as setting the minimum tracker length to eight frames.
As the value for each filter is adjusted, the Solve dialog will indicate how many tracks are affected by
the filter. Then Solve again.
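The filtering step amounts to dropping tracks that fail simple quality thresholds. The track records and threshold values below are hypothetical:

```python
# Hypothetical track records: name -> (length in frames, solve error).
tracks = {
    "t1": (120, 0.2), "t2": (5, 0.1), "t3": (45, 1.8),
    "t4": (8, 0.4), "t5": (60, 0.3),
}

def refine(tracks, min_length=8, max_error=1.0):
    """Keep only tracks that are long and accurate enough, mimicking the
    minimum-tracker-length style of refine filter described above."""
    return {name: t for name, t in tracks.items()
            if t[0] >= min_length and t[1] <= max_error}

kept = refine(tracks)
print(sorted(kept))   # t2 is too short, t3 too inaccurate
```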
Onscreen Culling
Under the Options tab, set the Trail Length to 20. This displays each track on the footage with a ±20 frame trail.
When scrubbing/playing through the footage, false tracks can be seen and selected onscreen, and
deleted by pressing the Delete key. This process takes an experienced eye to spot tracks that go bad.
Then Solve again.
You can view the exported scene in a 3D perspective viewer, where the point cloud will be visible.
Move and pan around the point cloud, and select and delete points that do not align with the image
and the scene space. Then Solve again.
Repeat the process until the solve error is below 1.0 before exporting.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Tracking nodes. These common controls
are described in detail in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Tracking category. The controls are
consistent and work the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
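The Blend control is a straightforward per-channel mix between the input and the processed result; a minimal sketch (the function name and values are illustrative):

```python
# Sketch of how a Blend value b mixes the original input with the
# tool's processed output, per channel.
def blend(original, processed, b):
    """b = 0.0 returns the input unchanged; b = 1.0 returns the full effect."""
    return [(1.0 - b) * o + b * p for o, p in zip(original, processed)]

pixel_in, pixel_fx = [0.2, 0.4, 0.6], [0.8, 0.8, 0.8]
print(blend(pixel_in, pixel_fx, 0.0))  # identical to the incoming image
print(blend(pixel_in, pixel_fx, 0.5))  # halfway between input and effect
```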
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels” in the Fusion Studio Reference Manual or Chapter 79 in the
DaVinci Resolve Reference Manual.
Motion Blur
– Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
– Quality: Quality determines the number of samples used to create the blur. A quality setting of 2
will cause Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
– Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
– Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
– Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
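The shutter angle and quality arithmetic described above can be sketched as follows. Whether the sample count includes an unblurred center sample is an assumption for illustration; the manual only states that quality N creates N samples to either side of the motion:

```python
# Sketch of the motion-blur arithmetic: shutter angle sets the exposure
# window, quality sets the samples placed along the object's motion.
def shutter_open_time(shutter_angle, fps=24.0):
    """Fraction of a frame the virtual shutter is open, in seconds.
    360 degrees corresponds to one full frame of exposure."""
    return (shutter_angle / 360.0) / fps

def sample_count(quality):
    """Quality N places N samples on either side of the actual motion;
    the +1 center sample is an assumption of this sketch."""
    return 2 * quality + 1

print(shutter_open_time(180.0))  # half-frame exposure at 24 fps
print(sample_count(2))
```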
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Transform Nodes
This chapter details the Transform nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Camera Shake [CSH] 1407
Crop [CRP] 1410
DVE [DVE] 1413
Letterbox [LBX] 1415
Resize [RSZ] 1418
Scale [SCL] 1421
Transform [XF] 1424
The Common Controls 1428
Inputs
The two inputs on the Camera Shake node are used to connect a 2D image and an effect mask, which
can be used to limit the camera shake area.
– Input: The orange input is used for the primary 2D image that shakes.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the camera
shake area to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
Controls Tab
The Controls tab includes parameters for adjusting the offsets, strength, speed, and frequency of the
simulated camera shake movement.
Deviation X and Y
These controls determine the amount of shake applied to the image along the horizontal (X) and
vertical (Y) axes. Values between 0.0 and 1.0 are permitted. A value of 1.0 generates shake positions
anywhere within the boundaries of the image.
Rotation Deviation
This determines the amount of shake that is applied to the rotational axis. Values between 0.0 and 1.0
are permitted.
Randomness
Higher values in this control cause the movement of the shake to be more irregular or random. Smaller
values cause the movement to be more predictable.
Overall Strength
This adjusts the general amplitude of all the shake parameters and can be used to blend the effect in
and out. A value of 1.0 applies the effect as described by the remainder of the controls.
Speed
Speed controls the frequency, or rate, of the shake.
Frequency Method
This selects the overall shape of the shake. Available frequencies are Sine, Rectified Sine, and Square
Wave. A Square Wave generates a much more mechanical-looking motion than a Sine.
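The difference between the three wave shapes can be sketched for a single axis. The parameter names below are illustrative only, not Fusion's internal algorithm or API:

```python
import math

def shake_offset(frame, speed, deviation, method="sine"):
    """One axis of a simulated camera shake (illustrative only).

    frame is the current frame number, speed scales the frequency,
    and deviation scales the amplitude, loosely mirroring the
    controls described above. Not Fusion's internal algorithm.
    """
    phase = math.sin(frame * speed)
    if method == "sine":
        wave = phase                        # smooth back-and-forth motion
    elif method == "rectified sine":
        wave = abs(phase)                   # bounces off zero, never negative
    elif method == "square wave":
        wave = 1.0 if phase >= 0 else -1.0  # abrupt, mechanical-looking jumps
    else:
        raise ValueError(f"unknown method: {method}")
    return wave * deviation
```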
Filter Method
When rescaling a pixel, surrounding pixels are often used to give a more realistic result. There are
various algorithms for combining these pixels, called filters. More complex filters can give better
results but are usually slower to calculate.
Most of these filters are useful only when making an image larger. When shrinking images, it is
common to use the Bi-Linear filter; however, the Catmull-Rom filter will apply some sharpening to the
results and may be useful for preserving detail when scaling down an image.
Example
Resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic, Catmull-Rom,
Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
Edges
This menu determines how the Edges of the image are treated.
– Canvas: This causes the edges that are revealed by the shake to be the canvas color—usually
transparent or black.
– Wrap: This causes the edges to wrap around (the top is wrapped to the bottom, the left is
wrapped to the right, and so on).
Invert Transform
Select this control to invert any position, rotation, or scaling transformation. This option might be useful
for exactly removing the motion produced in an upstream Camera Shake.
Flatten Transform
The Flatten Transform option prevents this node from concatenating its transformation with adjacent
nodes. The node may still concatenate transforms from its input, but it will not concatenate its
transformation with the node at its output.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Transform nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Crop [CRP]
TIP: You can crop an image in the viewer by activating the Allow Box Selection button in the
upper-left corner of the viewer while the Crop node is selected and viewed. Then, drag a crop
rectangle around the area of interest to perform the operation.
NOTE: Because this node changes the physical resolution of the image, animating the
parameters is not advised.
Inputs
The single input on the Crop node is used to connect a 2D image for cropping.
– Input: The orange input is used for the primary 2D image you want to crop.
Inspector
Controls Tab
The Controls tab provides XY Offset and XY Size methods for cropping the image.
Offset X and Y
These controls position the image off the screen by pushing it left/right or up/down. The cropped
image disappears off the edges of the output image. The values of these controls are measured
in pixels.
Size X and Y
Use these controls to set the vertical and horizontal resolution of the image output by the Crop node.
The values of these controls are measured in pixels.
Keep Aspect
When toggled on, the Crop node maintains the aspect of the input image.
Keep Centered
When toggled on, the Crop node automatically adjusts the X and Y Offset controls to keep the image
centered. The XY Offset sliders are automatically adjusted, and control over the cropping is done with
the Size sliders or the Allow Box Selection button in the viewer.
Reset Offset
This resets the X and Y Offsets to their defaults.
Clipping Mode
This option sets the mode used to handle the edges of the image when performing domain of
definition (DoD) rendering. This is profoundly important for nodes like Blur, which may require samples
from portions of the image outside the current domain.
– Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame will be treated as black/
transparent.
– Domain: Setting this option to Domain will respect the upstream DoD when applying the node’s
effect. This can have adverse clipping effects in situations where the node employs a large filter.
– None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Auto Crop
This evaluates the image and attempts to determine the background color. It then crops each side of
the image to the first pixel that is not that color.
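The idea behind Auto Crop can be sketched as a bounding-box search over non-background pixels. `auto_crop_bounds` is a hypothetical helper, not Fusion's implementation:

```python
def auto_crop_bounds(img, bg):
    """Bounding box of every pixel that differs from the background
    color bg, the way Auto Crop trims each side in to the first
    non-background pixel. A sketch with a hypothetical helper name;
    img is a list of rows of single-channel pixel values.
    """
    rows = [y for y, row in enumerate(img) if any(p != bg for p in row)]
    cols = [x for x in range(len(img[0]))
            if any(row[x] != bg for row in img)]
    if not rows:
        return None  # nothing but background
    # left, top, right, bottom of the crop window
    return min(cols), min(rows), max(cols) + 1, max(rows) + 1
```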
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Transform nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
DVE [DVE]
Inputs
The three inputs on the DVE node are used to connect a 2D image, DVE mask, and an effect mask,
which can be used to limit the DVE area.
– Input: The orange input is used for the primary 2D image that is transformed by the DVE.
– DVE Mask: The white DVE mask input is used to mask the image prior to the DVE transform
being applied. This has the effect of modifying both the image and the mask.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input causes the DVE to
modify only the image within the mask. An effects mask is applied to the tool after the tool is
processed.
Controls Tab
The Controls tab includes all the transform parameters for the DVE.
Pivot X, Y, and Z
Positions the axis of rotation and scaling. The default is 0.5, 0.5 for X and Y, which is in the center of
the image, and 0 for Z, which is at the center of Z space.
Rotation Order
Use these buttons to determine in what order rotations are applied to the image.
XYZ Rotation
These controls are used to rotate the image around the pivot along the X-, Y- and Z-axis.
Center X and Y
This positions the center of the DVE image onscreen. The default is 0.5, 0.5, which positions the DVE
in the center of the image.
Z Move
This zooms the image in and out along the Z-axis. Visually, when this control is animated, the effect is
similar to watching an object approach from a distance.
Perspective
This adds additional perspective to an image rotated along the X- or Y-axis, similar to changing the
Field of View and zoom of a camera.
Masking Tab
The DVE node allows pre-masking of its input image. This offers the ability to create transformations
from the masked area of the image while leaving the remainder of the image unaffected.
Black Background
Toggle this on to erase the area outside the mask from the transformed image.
Fill Black
Toggle this on to erase the area within the mask (before transformation) from the DVE’s input,
effectively cutting the masked area out of the image. Enabling both Black Background and Fill Black
will show only the masked, transformed area.
Alpha Mode
This determines how the DVE will handle the alpha channel of the image when merging the
transformed image areas over the untransformed image.
– Ignore Alpha: This causes the input image’s alpha channel to be ignored, so all masked areas will
be opaque.
– Subtractive/Additive: These cause the internal merge of the pre-masked DVE image over the
input image to be either Subtractive or Additive.
– An Additive setting is necessary when the foreground DVE image is premultiplied, meaning
that the pixels in the color channels have been multiplied by the pixels in the alpha channel.
The result is that transparent pixels are always black, since any number multiplied by 0 always
equals 0. This obscures the background (by multiplying with the inverse of the foreground
alpha), and then simply adds the pixels from the foreground.
– A Subtractive setting is necessary if the foreground DVE image is not premultiplied. The
compositing method is similar to an Additive merge, but the foreground DVE image is first
multiplied by its own alpha, to eliminate any background pixels outside the alpha area.
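The difference between the two settings can be sketched for a single color channel. This is the general premultiplied-alpha "over" arithmetic, not Fusion's internal merge code:

```python
def merge_over(fg, fg_alpha, bg, mode="additive"):
    """Composite one channel of a foreground pixel over a background.

    Additive assumes the foreground is already premultiplied by its
    alpha, so it is added as-is; Subtractive multiplies the straight
    (unpremultiplied) foreground by its alpha first. Both hold out the
    background by the inverse of the foreground alpha. A one-channel
    sketch, not Fusion's internal merge.
    """
    if mode == "subtractive":
        fg = fg * fg_alpha  # premultiply a straight foreground
    return bg * (1.0 - fg_alpha) + fg
```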
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Transform nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Letterbox [LBX]
Inputs
The single input on the Letterbox node is used to connect a 2D image for letterbox/cropping.
– Input: The orange input is used for the primary 2D image you want to letterbox/crop.
The Letterbox node converting the Merge output resolution and adding letterbox masking where needed.
Inspector
Controls Tab
The Controls tab includes parameters for adjusting the resolution and pixel aspect of the image. It also
has the option of letterboxing or pan-and-scan formatting.
TIP: You can use the formatting contextual menu to quickly select a resolution from a list.
Place the pointer over the Width or Height controls, and then right-click to display the
contextual menu. The bottom of the menu displays a Select Frame Format submenu with
available frame formats. Select any one of the choices from the menu to set the Height,
Width, and Aspect controls automatically.
Center X and Y
This Center control repositions the image window when used in conjunction with Pan-and-Scan mode.
It has no effect on the image when the node is set to Letterbox mode.
Mode
This control is used to determine the Letterbox node’s mode of operation.
– Letterbox/Envelope: This corrects the aspect of the input image and resizes it to match the
specified width.
– Pan-and-Scan: This corrects the aspect of the input image and resizes it to match the specified
height. If the resized input image is wider than the specified width, the Center control can be used
to animate the visible portion of the resized input.
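Assuming the behavior described above, the size of the resized input inside the target frame can be sketched as follows; `scaled_size` is a hypothetical helper, not part of Fusion:

```python
def scaled_size(in_w, in_h, out_w, out_h, mode="letterbox"):
    """Size of the resized input within the target frame (a sketch).

    Letterbox scales the image to match the target width, padding any
    leftover height with bars; Pan-and-Scan scales to match the target
    height, letting extra width overflow so the Center control can pick
    the visible region. The helper name is hypothetical.
    """
    aspect = in_w / in_h
    if mode == "letterbox":
        return out_w, round(out_w / aspect)
    # pan-and-scan
    return round(out_h * aspect), out_h
```

For example, fitting a 1920 x 1080 image into a 1440 x 1080 frame yields 1440 x 810 plus letterbox bars, while Pan-and-Scan keeps the full 1920 x 1080 and crops the width.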
Filter Method
When rescaling a pixel, surrounding pixels are often used to give a more realistic result. There are
various algorithms for combining these pixels, called filters. More complex filters can give better
results but are usually slower to calculate. The best filter for the job often depends on the amount of
scaling and on the contents of the image itself.
– Box: This is a simple interpolation resize of the image.
– Linear: This uses a simplistic filter, which produces relatively clean and fast results.
– Quadratic: This filter produces a nominal result. It offers a good compromise between
speed and quality.
– Cubic: This produces better results with continuous-tone images. If the images have fine detail in
them, the results may be blurrier than desired.
– Catmull-Rom: This produces good results with continuous-tone images that are resized down.
This produces sharp results with finely detailed images.
– Gaussian: This is very similar in speed and quality to Bi-Cubic.
– Mitchell: This is similar to Catmull-Rom but produces better results with finely detailed images. It is
slower than Catmull-Rom.
– Lanczos: This is very similar to Mitchell and Catmull-Rom but is a little cleaner and also slower.
– Sinc: This is an advanced filter that produces very sharp, detailed results; however, it may produce
visible “ringing” in some situations.
– Bessel: This is similar to the Sinc filter but may be slightly faster.
Most of these filters are useful only when making an image larger. When shrinking images, it is
common to use the Bi-Linear filter; however, the Catmull-Rom filter will apply some sharpening to the
results and may be useful for preserving detail when scaling down an image.
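As a rough illustration of how a filter combines surrounding pixels, here is the bilinear idea behind the Linear filter. This is a generic sketch, not Fusion's resampler; higher-quality filters weight a wider neighborhood:

```python
def bilinear_sample(img, x, y):
    """Sample a grayscale image (a list of rows) at a fractional
    position by blending the four surrounding pixels, weighted by
    distance. A generic sketch of linear filtering, not Fusion's
    internal resampler."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)  # clamp at the right/bottom edges
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0            # fractional distances
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```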
Example
Different resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic,
Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Transform nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Resize [RSZ]
NOTE: Because this node changes the physical resolution of the image, animating the
controls is not advised.
Inputs
The single input on the Resize node is used to connect a 2D image for resizing.
– Input: The orange input is used for the primary 2D image you want to resize.
The Resize node can be used to scale an image and change its resolution.
Inspector
Controls Tab
The Controls tab includes parameters for changing the resolution of the image. It uses pixel values in
the Width and Height controls.
Width
This controls the new resolution for the image along the X-axis.
Height
This controls the new resolution for the image along the Y-axis.
TIP: You can use the formatting contextual menu to quickly select a resolution from a list.
Place the mouse pointer over the Width or Height controls, and then right-click to display the
contextual menu. The bottom of the menu displays a Select Frame Format submenu with
available frame formats. Select any one of the choices from the menu to set the Height and
Width controls automatically.
Auto Resolution
Activating this checkbox automatically sets the Width and Height sliders to the Frame Format settings
found in the Preferences window for Fusion Studio or the resolution in the DaVinci Resolve Timeline.
Filter Method
When rescaling a pixel, surrounding pixels are often used to give a more realistic result. There are
various algorithms for combining these pixels, called filters. More complex filters can give better
results but are usually slower to calculate. The best filter for the job often depends on the amount of
scaling and on the contents of the image itself.
– Box: This is a simple interpolation resize of the image.
– Linear: This uses a simplistic filter, which produces relatively clean and fast results.
– Quadratic: This filter produces a nominal result. It offers a good compromise between speed and
quality.
– Cubic: This produces better results with continuous-tone images. If the images have fine detail in
them, the results may be blurrier than desired.
– Catmull-Rom: This produces good results with continuous-tone images that are resized down.
This produces sharp results with finely detailed images.
– Gaussian: This is very similar in speed and quality to Bi-Cubic.
– Mitchell: This is similar to Catmull-Rom but produces better results with finely detailed images. It is
slower than Catmull-Rom.
– Lanczos: This is very similar to Mitchell and Catmull-Rom but is a little cleaner and also slower.
– Sinc: This is an advanced filter that produces very sharp, detailed results; however, it may produce
visible “ringing” in some situations.
– Bessel: This is similar to the Sinc filter but may be slightly faster.
Example
Different resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic,
Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Transform nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Scale [SCL]
NOTE: Because this node changes the physical resolution of the image, animating the
controls is not advised.
Inputs
The single input on the Scale node is used to connect a 2D image for scaling.
– Input: The orange input is used for the primary 2D image you want to scale.
Inspector
Controls Tab
The Controls tab includes parameters for changing the resolution of the image. It uses a size
multiplier to set the new resolution. An Edges menu allows you to determine how the edges of the
frame are handled if the image is scaled down.
Lock X/Y
When selected, only a Size control is shown, and changes to the image’s scale are applied to both
axes equally. If the checkbox is cleared, individual Size controls appear for both X and Y Size.
Size
The Size control is used to set the scale used to adjust the resolution of the source image. A value of
1.0 would have no effect on the image, while 2.0 would scale the image to twice its current resolution.
A value of 0.5 would halve the image’s resolution.
Filter Method
When rescaling a pixel, surrounding pixels are often used to give a more realistic result. There are
various algorithms for combining these pixels, called filters. More complex filters can give better
results but are usually slower to calculate.
Most of these filters are useful only when making an image larger. When shrinking images, it is
common to use the Bi-Linear filter; however, the Catmull-Rom filter will apply some sharpening to the
results and may be useful for preserving detail when scaling down an image.
Example
Different resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic,
Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
Transform [XF]
Inputs
The two inputs on the Transform node are used to connect a 2D image and an effect mask, which can
be used to limit the transformed area.
– Input: The orange input is used for the primary 2D image that gets transformed.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the transform
area to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
The Transform node can be used to scale an image without changing its resolution.
Controls Tab
The Controls tab presents multiple ways to transform, flip (vertical), flop (horizontal), scale, and rotate
an image. It also includes reference size controls that can reinterpret the coordinates used for width
and height from relative values of 0-1 into pixel values based on the image’s resolution.
Center X and Y
This sets the position of the image on the screen. The default is 0.5, 0.5, which places the image in the
center of the screen. The value shown is always the actual position multiplied by the reference size.
See below for a description of the reference size.
Pivot X and Y
This positions the axis of rotation and scaling. The default is 0.5, 0.5, which is the center of the image.
Size
This modifies the scale of the image. Values range from 0 to 5, but any value greater than zero can be
entered into the edit box. If the Use Size and Aspect checkbox is selected, this control will scale the
image equally along both axes. If the Use Size and Aspect option is off, independent control is
provided for X and Y.
Aspect
This control changes the aspect ratio of an image. Setting the value above 1.0 stretches the image
along the X-axis. Values between 0.0 and 1.0 stretch the image along the Y-axis. This control is
available only when the Use Size and Aspect checkbox is enabled.
Angle
This control rotates the image around the axis. Increasing the Angle rotates the image in a
counterclockwise direction. Decreasing the Angle rotates the image in a clockwise direction.
Edges
This menu determines how the edges of the image are treated when the edge of the raster
is exposed.
– Canvas: This causes the edges of the image that are revealed to show the current Canvas Color.
This defaults to black with no Alpha and can be set using the Set Canvas Color node.
– Wrap: This wraps the edges of the image around the borders of the image. This is useful for
seamless images to be panned, creating an endless moving background image.
– Duplicate: This causes the edges of the image to be duplicated as best as possible, continuing
the image beyond its original size.
– Mirror: Image pixels are mirrored to fill to the edge of the frame.
Filter Method
When rescaling a pixel, surrounding pixels are often used to give a more realistic result. There are
various algorithms for combining these pixels, called filters. More complex filters can give better
results but are usually slower to calculate. The best filter for the job often depends on the amount of
scaling and on the contents of the image itself.
– Box: This is a simple interpolation resize of the image.
– Linear: This uses a simplistic filter, which produces relatively clean and fast results.
– Quadratic: This filter produces a nominal result. It offers a good compromise
between speed and quality.
– Cubic: This produces better results with continuous-tone images. If the images have fine detail in
them, the results may be blurrier than desired.
– Catmull-Rom: This produces good results with continuous-tone images that are resized down.
This produces sharp results with finely detailed images.
– Gaussian: This is very similar in speed and quality to Bi-Cubic.
– Mitchell: This is similar to Catmull-Rom but produces better results with finely detailed images. It is
slower than Catmull-Rom.
– Lanczos: This is very similar to Mitchell and Catmull-Rom but is a little cleaner and also slower.
– Sinc: This is an advanced filter that produces very sharp, detailed results; however, it may produce
visible “ringing” in some situations.
– Bessel: This is similar to the Sinc filter but may be slightly faster.
Example
Different resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic,
Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
Invert Transform
Select this control to invert any position, rotation, or scaling transformation. This option is useful when
connecting the Transform to the position of a tracker for the purpose of reintroducing motion back into
a stabilized image.
Flatten Transform
The Flatten Transform option prevents this node from concatenating its transformation with adjacent
nodes. The node may still concatenate transforms from its input, but it will not concatenate its
transformation with the node at its output.
Reference Size
The controls under the Reference Size menu do not directly affect the image. Instead they allow you to
control how Fusion represents the position of the Transform node’s center.
Normally, coordinates are represented as values between 0 and 1, where 1 is a distance equal to the
full width or height of the image. This allows for resolution independence, because you can change
the size of the image without having to change the value of the center.
One disadvantage to this approach is that it complicates making pixel-accurate adjustments to an
image. To demonstrate, imagine an image that is 100 x 100 pixels in size. To move the center of the
image to the right by 5 pixels, we would change the X value of the transform center from 0.5, 0.5 to
0.55, 0.5. We know the change must be 0.05 because 5/100 = 0.05.
The Reference Size controls allow you to specify the dimensions of the image. This changes the way
the control values are displayed, so that the Center shows the actual pixel positions in the X and Y
number fields of the Center control. Extending our example, if you set the Width and Height to 100
each, the Center would now be shown as 50, 50, and we would move it 5 pixels toward the right by
entering 55, 50.
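The conversion Reference Size performs on the displayed values can be sketched as a pair of simple functions (the names are hypothetical):

```python
def center_to_pixels(cx, cy, ref_w, ref_h):
    """Show a normalized Transform center as pixel coordinates, as the
    Reference Size controls do in the number fields (a sketch)."""
    return cx * ref_w, cy * ref_h

def pixels_to_center(px, py, ref_w, ref_h):
    """What Fusion stores internally: normalized 0-1 coordinates."""
    return px / ref_w, py / ref_h
```

With a 100 x 100 reference size, entering 55, 50 stores the normalized value 0.55, 0.5, matching the example above.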
Internally, the Transform node still stores this value as a number between 0 and 1, and if you were to
query the Center control’s value via scripting, or publish the Center control for use by other nodes,
then you would retrieve the original normalized value. The change is visible only in the value shown for
Transform Center in the node control.
Auto Resolution
Enable this checkbox to use the current frame format settings in Fusion Studio or the timeline
resolution in DaVinci Resolve to set the Reference Width and Reference Height values.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Transform category. The Settings
controls are even found on third-party Transform-type plug-in tools. The controls are consistent and
work the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels” in the Fusion Reference Manual or Chapter 79 in the
DaVinci Resolve Reference Manual.
Motion Blur
– Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
– Quality: Quality determines the number of samples used to create the blur. A quality setting of 2
will cause Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
– Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
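The 360-degree relationship can be expressed as a simple formula. This is standard camera shutter-angle arithmetic, shown only to illustrate the control:

```python
def shutter_open_time(shutter_angle, fps):
    """Exposure time implied by a shutter angle: 360 degrees keeps the
    virtual shutter open for the full frame duration. Standard camera
    arithmetic, used here only to illustrate the control."""
    return (shutter_angle / 360.0) * (1.0 / fps)
```

At 24 fps, a 360-degree shutter exposes for 1/24 of a second, while a 180-degree shutter exposes for 1/48 and produces half as much blur.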
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-
accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
VR Nodes
This chapter details the Virtual Reality (VR) nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
VR Nodes
Lat Long Patcher [LLP]
Pano Map [PaM]
Spherical Camera [3SC]
Spherical Stabilizer
The Common Controls
TIP: You can create stereo VR using two stacked Lat Long images, one for each eye.
Fusion supports several common spherical image formats and can easily convert between them.
– VCross and HCross: VCross and HCross are the six square faces of a cube laid out in a cross,
vertically or horizontally, with the forward view in the center of the cross in a 3:4 or 4:3 image.
– VStrip and HStrip: VStrip and HStrip are the six square faces of a cube laid vertically or
horizontally in a line, ordered as Left, Right, Up, Down, Back, Front (+X, -X, +Y, -Y, +Z, -Z) in a 1:6 or
6:1 image.
– LatLong: LatLong is a single 2:1 image in an equirectangular mapping.
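Going by the aspect ratios listed above alone, a layout guess could be sketched like this (the Auto mode in the Pano Map node also consults metadata; `guess_layout` is a hypothetical helper):

```python
def guess_layout(width, height):
    """Guess a spherical layout from the frame aspect alone, using the
    ratios listed above. Fusion's Auto detection also reads metadata;
    this helper name is hypothetical."""
    ratios = {
        (2, 1): "LatLong",
        (4, 3): "HCross",
        (3, 4): "VCross",
        (6, 1): "HStrip",
        (1, 6): "VStrip",
    }
    for (w, h), name in ratios.items():
        if width * h == height * w:  # width/height == w/h, without floats
            return name
    return "unknown"
```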
You can display both spherical video and live 3D scenes from the comp directly to headsets, including
the Oculus Rift and HTC Vive.
Fusion’s “Fix it in post” tools for VR make it easy to perform several important tasks that are common in
these types of productions.
Lat Long Patcher [LLP]
NOTE: The VR category and Lat Long Patcher node are available only in Fusion Studio and
DaVinci Resolve Studio.
Input
The Lat Long Patcher node includes two inputs. The orange input accepts a 2D image in an
equirectangular format, where the X-axis represents 0–360 degrees longitude, and the Y-axis
represents –90 to +90 degrees latitude. The effect mask input is provided, although rarely used
on VR nodes.
– Image Input: The orange image input accepts an equirectangular (lat-long) 2D RGBA image.
– Effect Mask: The effect mask input is provided, although rarely used on VR nodes.
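The mapping just described can be sketched as a per-pixel conversion. Which image edge corresponds to +90 degrees depends on the coordinate convention, so treat this as illustrative rather than Fusion's exact sampling:

```python
def pixel_to_latlong(px, py, width, height):
    """Map an equirectangular pixel to (longitude, latitude) degrees:
    X spans 0-360 degrees longitude and Y spans -90 to +90 degrees
    latitude, as described above. Which image edge is +90 depends on
    the coordinate convention, so this is illustrative only."""
    lon = (px / width) * 360.0
    lat = (py / height) * 180.0 - 90.0
    return lon, lat
```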
Controls Tab
The Controls tab is used to extract and later reapply a section from an equirectangular image. Rotation
controls allow you to select the exact portion you need to repair.
Mode
– Extract: Pulls a de-warped 90-degree square image from the equirectangular image.
– Apply: Warps and merges a 90-degree square image over the equirectangular image. Because
the square image’s alpha is used, this allows, for example, paint strokes or text drawn over a
transparent black background to be applied to the original equirectangular image, avoiding any
double-filtering from de-warping and re-warping the original.
Rotation Order
These buttons choose the ordering of the rotations around each axis. For example, XYZ rotates first
around the X axis (pitch/tilt), then around the Y axis (pan/yaw), and then around the Z axis (roll). Any of
the six possible orderings can be chosen.
Rotation
These dials rotate the spherical image around each of the X, Y, and Z axes, offering independent
control over pitch/tilt, pan/yaw, and roll, respectively.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other VR nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Pano Map [PaM]
NOTE: The VR category and Pano Map node are available only in Fusion Studio and
DaVinci Resolve Studio.
Input
The Pano Map node includes two inputs. The orange input accepts a 2D image in an equirectangular,
cube map or other spherical formats. The effect mask input is provided, although rarely used
on VR nodes.
– Image Input: The orange Image input accepts a spherical formatted 2D RGBA image that gets
converted to another spherical format.
– Effect Mask: The effect mask input is provided, although rarely used on VR nodes.
Inspector
Controls Tab
The Controls tab is used to determine the format of the input image and the desired output format.
From/To
– Auto: Auto detects the incoming image layout from the metadata and image frame aspect.
– VCross and HCross: VCross and HCross are the six square faces of a cube laid out in a cross,
vertically or horizontally, with the forward view in the center of the cross in a 3:4 or 4:3 image.
– VStrip and HStrip: VStrip and HStrip are the six square faces of a cube laid vertically or
horizontally in a line, ordered as Left, Right, Up, Down, Back, Front (+X, -X, +Y, -Y, +Z, -Z) in a 1:6 or
6:1 image.
– LatLong: LatLong is a single 2:1 image in an equirectangular mapping.
Rotation Order
These buttons choose the ordering of the rotations around each axis. For example, XYZ rotates first
around the X axis (pitch/tilt), then around the Y axis (pan/yaw), and then around the Z axis (roll). Any of
the six possible orderings can be chosen.
Rotation
These dials rotate the spherical image around each of the X, Y, and Z axes, offering independent
control over pitch/tilt, pan/yaw, and roll, respectively.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other VR nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Spherical Stabilizer
NOTE: The VR category and Spherical Stabilizer node are available only in Fusion Studio and
DaVinci Resolve Studio.
Inputs
The Spherical Stabilizer node has a single orange input.
– Image: This orange image input requires an image in a spherical layout, which can be any
of Lat Long (2:1 equirectangular), Horizontal/Vertical Cross, or Horizontal/Vertical Strip.
Controls Tab
The Controls tab contains parameters to initiate the tracking and modify the results for
stabilization or smoothing.
Track Controls
These buttons initiate tracking and analysis of the shot. Be aware that the reference frame used for
stabilization is set to the first frame tracked.
– Track Backward from End Frame starts tracking backward from the end of the
current render range.
– Track Backward from Current Time starts tracking backward from the current frame.
– Stop ceases tracking, preserving all results so far.
– Track Forward from Current Time starts tracking forward from the current frame.
– Track Forward from Start Frame starts tracking forward from the start of the current
render range.
Append to Track
– Replace causes the Track Controls to discard any previous tracking results and replace them with
the newly-created track.
– Append adds the new tracking results to any earlier tracks.
Stabilization Strength
This control varies the amount of smoothing or stabilization applied, from 0.0 (no change) to 1.0
(maximum).
Smoothing
The Spherical Stabilizer node can eliminate all rotation from a shot, fixing the forward viewpoint (Still
mode, 0.0) or gently smooth out any panning, rolling, or tilting to increase viewer comfort (Smooth
mode, 1.0). This slider allows either option or anything in between.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other VR nodes. These common controls are
described in detail in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the VR category. The controls are
consistent and work the same way for each tool.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Warp Nodes
This chapter details the Warp nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog
when searching for tools and in scripting references.
Contents
Coordinate Space [CDS] 1442
Corner Positioner [CPN] 1443
Dent [DNT] 1445
Displace [DSP] 1447
Drip [DRP] 1449
Grid Warp [GRD] 1451
Lens Distort [LENS] 1458
Perspective Positioner [PPN] 1460
Vector Distortion [DST] 1462
Vortex [VTX] 1464
The Common Controls 1466
Inputs
The two inputs on the Coordinate Space node are used to connect a 2D image and an effect mask,
which can be used to limit the distorted area.
– Input: The orange input is used for the primary 2D image that is distorted.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the
distortion to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
The Coordinate Space node can help create motion graphics backgrounds
Example
To demonstrate a basic tunnel effect that can be achieved with this node:
1 Add a Text+ node with some text, and then animate it to move along a path from the top of
the frame to the bottom.
2 Connect the output of the Text+ node to a Coordinate Space node.
3 Select Polar to Rectangular from the Shape menu.
Inspector
Controls Tab
The Controls tab Shape menu switches between Rectangular to Polar and Polar to Rectangular.
Consider the following example to demonstrate the two coordinate spaces.
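Conceptually, the node remaps each pixel's coordinates between the two spaces. A simplified sketch of the per-pixel mapping (normalized coordinates with the center at 0.5, 0.5 are an assumption for illustration, not Fusion's exact convention):

```python
import math

def rect_to_polar(x, y):
    """Treat (x, y) as an offset from the image center and return
    the equivalent (angle, radius) pair."""
    dx, dy = x - 0.5, y - 0.5
    return math.atan2(dy, dx), math.hypot(dx, dy)

def polar_to_rect(angle, radius):
    """Inverse mapping: convert (angle, radius) back to centered (x, y)."""
    return 0.5 + radius * math.cos(angle), 0.5 + radius * math.sin(angle)

# The two mappings are inverses, so a round trip returns the point.
angle, radius = rect_to_polar(0.75, 0.5)
print(polar_to_rect(angle, radius))  # (0.75, 0.5)
```

In Polar to Rectangular mode, horizontal motion in the source becomes rotation around the center, which is what produces the tunnel effect in the Text+ example.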
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Corner Positioner node are used to connect a 2D image and an effect mask,
which can be used to limit the warped area.
Inspector
Mapping Type
This menu determines the method used to map the image onto the repositioned corners. In Bi-Linear
mode, a straight 2D warping takes place. In Perspective mode, the image is calculated with the offsets
in 2D space and then mapped into a 3D perspective.
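The Bi-Linear warp can be thought of as blending the four corner positions by the pixel's position in the source image. A minimal sketch (the helper is hypothetical, not Fusion's renderer; Perspective mode additionally applies a projective divide):

```python
def bilinear_warp(u, v, tl, tr, bl, br):
    """Blend the four corner positions by the pixel's (u, v) position
    in the source image; corners are (x, y) tuples, u and v run 0..1."""
    top = (tl[0] + (tr[0] - tl[0]) * u, tl[1] + (tr[1] - tl[1]) * u)
    bot = (bl[0] + (br[0] - bl[0]) * u, bl[1] + (br[1] - bl[1]) * u)
    return (top[0] + (bot[0] - top[0]) * v, top[1] + (bot[1] - top[1]) * v)

# The center of the source lands at the average of the four corners.
corners = ((0.1, 0.9), (0.9, 1.0), (0.0, 0.1), (1.0, 0.0))
print(bilinear_warp(0.5, 0.5, *corners))  # (0.5, 0.5)
```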
Corners X and Y
There are four points in the Corner Positioner. Drag these around to position each corner of the image
interactively. Attach these control points to any of the usual modifiers.
The image input is deformed and perspective corrected to match the position of the four corners.
Offset X and Y
These controls can be used to offset the position of the corners slightly. This is useful when the
corners are attached to Trackers with patterns that may not be positioned exactly where they
are needed.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Dent [DNT]
Inputs
The two inputs on the Dent node are used to connect a 2D image and an effect mask, which can be
used to limit the warped area.
– Input: The orange input is used for the primary 2D image that is warped.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the
Dent to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
The Dent node can help create lens distortion effects or a motion graphics background.
Inspector
Controls Tab
The adjustments in the Controls tab are used to change the Dent style, position, size, and strength.
Type
Select the type of Dent filter to use from this menu. All parameters for the Dent can be keyframed.
Dent 1
This creates a bulge dent.
Kaleidoscope
This creates a dent, mirrors it, and inverts it.
Dent 2
This creates a displacement dent.
Dent 3
This creates a deform dent.
Cosine Dent
This creates a fracture to a center point.
Sine Dent
This creates a smooth rounded dent.
Center X and Y
This positions the Center of the Dent effect on the image. The default values are 0.5, 0.5, which center
the effect in the image.
Size
This changes the size of the area affected by the dent. Animate this slider to make the dent grow.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Displace [DSP]
Inputs
There are three inputs on the Displace node: The primary image, the displacement map foreground
image, and an effect mask.
– Input: The orange image input is a required connection for the primary image you wish to
displace.
– Foreground Image: The green input is also required as the image used to displace the
background. Once connected, you can choose red, green, blue, alpha, or luminance channel to
create the displacement.
– Effect Mask: The optional blue effect mask input expects a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the displacement to only those pixels within the mask. An effects mask is applied to
the tool after it is processed.
The Displace node using a Fast Noise node for the Displace map
Controls Tab
The Controls tab is used to change the style, position, size, strength, and lighting (embossing) of the
displacement.
Type
The Type menu is used to choose the mode in which the Displace node operates. In Radial mode, the
map image refracts each pixel outward from the center, while X/Y mode provides individual control
over the amount of displacement along each axis.
NOTE: There is one set of Refraction controls while in Radial mode, and two sets in XY
mode—one for each of the X and Y channels.
Refraction Channel
This drop-down menu controls which channel from the foreground image is used to displace the
image. Select from Red, Green, Blue, Alpha, or Luminance channels. In XY mode, this control appears
twice, once for the X displacement and once for the Y displacement.
Light Power
This controls the intensity, or strength, of the simulated light, causing bright and dim areas to form
according to the contour of the refraction image. Higher values cause the bright and dim areas to be
more pronounced.
Light Angle
This sets the angle of the simulated light source.
Light Channel
Select the channel from the refraction image to use as the simulated light source. Select from Color,
Red, Green, Blue, Alpha, or Luminance channels.
NOTE: The Radial mode pushes pixels inward or outward from a center point, based on pixel
values from the Displacement map. The XY mode uses two different channels from the map
to displace pixels horizontally and vertically, allowing more precise results. Using the XY
mode, the Displace node can even accomplish simple morphing effects. The Light controls
allow directional highlighting of refracted pixels for simulating a beveled look.
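A simplified sketch of the two modes' per-pixel math (illustrative formulas only; Fusion's actual sampling and channel handling differ):

```python
def displace_radial(x, y, m, strength, cx=0.5, cy=0.5):
    """Push a pixel along the ray from the center, by the map value m
    (0..1) scaled by the refraction strength."""
    dx, dy = x - cx, y - cy
    return x + dx * m * strength, y + dy * m * strength

def displace_xy(x, y, mx, my, strength):
    """Offset a pixel horizontally and vertically by two separate map
    channels, re-centered so mid-gray (0.5) means no displacement."""
    return x + (mx - 0.5) * strength, y + (my - 0.5) * strength

print(displace_radial(0.75, 0.5, 1.0, 0.2))  # (0.8, 0.5)
print(displace_xy(0.5, 0.5, 0.5, 0.5, 0.2))  # (0.5, 0.5)
```

Because XY mode displaces each axis from its own channel, two independent maps can steer pixels precisely, which is what enables the simple morphing effects mentioned above.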
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Drip [DRP]
Inputs
The two inputs on the Drip node are used to connect a 2D image and an effect mask, which can be
used to limit the warped area.
– Input: The orange input is used for the primary 2D image that is warped.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the
warping to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
Inspector
Controls Tab
The Controls tab is used to change the style, position, size, strength, and phase for animating the
“ripples” of the Drip.
Shape
Use this control to select the shape of the Drip.
Circular
This creates circular ripples. This is the default Drip mode.
Square
This creates even-sided quadrilateral drips.
Random
This creates a randomly dispersed noise that distorts your image and is similar to a particle effect.
Horizontal
This creates horizontal waves that move in one direction.
Vertical
This creates vertical waves that move in one direction.
Exponential
This creates a Drip effect that looks like a diamond shape with inverted, curved sides (an exponential
curve flipped and mirrored).
Star
This creates an eight-way symmetrical star-shaped ripple that acts as a kaleidoscope when the phase
is animated.
Center X and Y
Use this control to position the center of the Drip effect in the image. The default is 0.5, 0.5, which
centers the effect in the image.
Aspect
Control the aspect ratio of the various Drip shapes. A value of 1.0 causes the shapes to be
symmetrical. Smaller values cause the shape to be taller and narrower, while larger values cause
shorter and wider shapes.
Amplitude
The Amplitude of the Drip effect refers to the peak height of each ripple. Use the slider to change the
amount of distortion the Drip applies to the image. A value of 0.0 gives all ripples no height and
therefore makes the effect transparent. A maximum Amplitude of 10 makes each ripple extremely
visible and completely distorts the image. Higher numbers can be entered via the text entry boxes.
Dampening
Controls the Dampening, or falloff, of the amplitude as it moves away from the center of the effect. It
can be used to limit the size or area affected by the Drip.
Frequency
This changes the number of ripples emanating from the center of the Drip effect. A value of 0.0
indicates no ripples. Increase the slider toward the maximum of 100 to increase the density of
the ripples.
Phase
This controls the offset of the frequencies from the center. Animate the Phase value to make the ripple
emanate from the center of the effect.
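How the main parameters interact for a circular Drip can be sketched as a dampened radial sine wave (a hypothetical formula for illustration, not Fusion's implementation):

```python
import math

def drip_height(x, y, amplitude, frequency, phase, dampening,
                cx=0.5, cy=0.5):
    """Height of a circular ripple at (x, y): a sine wave radiating
    from the center, scaled by Amplitude and attenuated by Dampening
    as it moves away from the center."""
    r = math.hypot(x - cx, y - cy)
    falloff = math.exp(-dampening * r)
    return amplitude * math.sin(2 * math.pi * frequency * r + phase) * falloff

# Zero amplitude flattens every ripple, making the effect invisible.
print(drip_height(0.7, 0.3, 0.0, 10, 0, 1))
# Animating Phase moves the crests, making the ripples emanate outward.
print(drip_height(0.6, 0.5, 1.0, 10, 0.0, 0.0))
print(drip_height(0.6, 0.5, 1.0, 10, 1.0, 0.0))
```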
Common Controls
Settings Tab
The Settings Tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Controls Tab
The Controls tab contains parameters that configure the onscreen grid as well as the type of distortion
applied when a control point on the grid is moved.
Selection Type
These three buttons determine the selection types used for manipulating the points. There are three
options available.
Selected
When in Selected mode, adjustments to the grid are applied only to the currently selected points.
This mode is identical to normal polyline operation.
Region
In Region mode, all points within the area around the mouse pointer move when the mouse button is
clicked. New points that enter the region during the move are ignored. Choosing this option exposes
Magnet Distance and Magnet Strength controls to determine the size and falloff of the area.
Magnetic
In Magnetic mode, all points within the area around the mouse pointer move when the mouse button
is clicked. New points that enter the region during the move are affected as well. Choosing this option
exposes Magnet Distance and Magnet Strength controls to determine the size and falloff of the area.
Magnet Distance
The default cursor for selecting and manipulating the grid is a magnet cursor. The magnet is
represented in the viewer by a circle around the mouse pointer. The Magnet Distance slider controls
the size of that circle, and therefore the magnet's region of effect. Drag on the grid, and any vertex
within the range of the slider moves.
To increase the size of the magnet, increase the value of this slider. Alternately, adjust the size of the
magnet by holding down the D key while dragging the mouse.
Magnet Strength
The Magnet Strength slider increases or decreases the falloff of the magnet cursor’s effect. At a
setting of 0.0, the magnetic cursor has no effect, and vertices do not move at all. As the values
increase, the magnet causes a greater range of motion in the selected vertices. Use smaller values for
a more sensitive adjustment and larger values for broad-sweeping changes to the grid.
Subdivision Level
The Subdivision Level determines how many subdivisions there are between each set of divisions.
Subdivisions do not generate vertices at intersections. The more subdivisions, the smoother the
deformation is likely to be, but the slower it is to render.
Center
The Center coordinates determine the exact center of the grid. The onscreen Center control is
invisible while editing the grid. Select the Edit Rect mode, and the grid center becomes visible and
available for editing.
Angle
This Angle control rotates the entire grid.
Size
The Size control increases or decreases the scale of the grid.
Edit Buttons
There are four edit modes available, each of which can be selected by clicking on the
appropriate button.
Edit None
Set the grid to Edit None mode to disable the display of all onscreen controls.
Edit Grid
The Edit Grid mode is the default mode. While this mode is enabled, the grid is drawn in the viewer,
and the control vertices of the grid can be manipulated directly.
Edit Rectangle
When the grid is in Edit Rectangle mode, the onscreen controls display a rectangle that determines
the dimensions of the grid. The sides of the rectangle can be adjusted to increase or decrease the
grid’s dimension. This mode also reveals the onscreen Center control for the grid.
Edit Line
The Edit Line mode is beneficial for creating grids around organic shapes. When this mode is enabled,
all onscreen controls disappear, and a spline can be drawn around the shape or object to be
deformed. While drawing the spline, a grid is automatically created that best represents that object.
Additional controls for Tolerance, Over Size, and Snap Distance appear when this mode is enabled.
These controls are documented below.
Copy Buttons
These two buttons provide a technique for copying the exact shape and dimensions of the source grid
to the destination, or the destination grid to the source. This is particularly useful after setting the
source grid to ensure that the destination grid’s initial state matches the source grid before beginning
a deformation.
Point Tolerance
This control is visible only when the Edit Line mode is enabled. The Point Tolerance slider determines
how much tessellation the grid applies to match the density of points in the spline closely. The lower
this value, the fewer vertices there are in the resulting grid, and the more uniform the grid appears.
Higher values start applying denser grids with variations to account for regions in the spline that
require more detail.
Oversize Amount
This control is visible only when the Edit Line mode is enabled. The Oversize Amount slider is used to
set how large an area around the spline should be included in the grid. Higher values create a larger
border, which can be useful when blending a deformation back into the source image.
Render Tab
The Render tab controls the final rendered quality and appearance of the warping.
Render Method
The Render Method drop-down menu is used to select the rendering technique and quality applied to
the mesh. The three settings are arranged in order of quality, with the first, Wireframe, being the
fastest but lowest quality. The default mode is Render, which produces final resolution, full-quality results.
Anti-Aliasing
The Anti-Aliasing control appears only as a checkbox when in Wireframe Render mode.
In other modes, it is a drop-down menu with three levels of quality. Higher degrees of anti-aliasing
improve image quality dramatically but vastly increase render times. The Low setting may be an
appropriate option while setting up a large dense grid or previewing a node tree, but rarely for a
final render.
Filter Type
When the Render Method is set to something other than Wireframe mode, the Filter Type menu is
visible and set to Area Sample. This setting causes the grid to calculate area samples for each
vertex in the grid, providing good render quality. Super Sample can provide even better results but
requires much greater render times.
Anti-Aliased
This checkbox appears only when the Render Method is set to Wireframe. Use this checkbox to
enable/disable anti-aliasing for the lines that make up the wireframe.
Black Background
The Black Background checkbox determines whether pixels outside of the grid in the source image
are set to black or if they are preserved.
Smooth/Linear
Use Smooth and Linear options to apply or remove smoothing from selected vertices.
Select All
This option selects all points in the mesh.
Stop Rendering
This option stops rendering, which disables all rendering of the Grid Warp node until the mode is
turned off. This is frequently useful when making a series of fine adjustments to a complex grid.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Lens Distort node are used to connect a 2D image and an effect mask, which
can be used to limit the distorted area.
– Input: The orange input is used for the primary 2D image that is distorted.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the distortion to only those pixels within the mask. An effects mask is applied to the tool
after the tool is processed.
Lens Distort applied on the live-action media at the beginning of the node tree, and once again at the end
Controls Tab
The Controls tab presents various ways to customize or build the lens distortion model you want.
Camera Settings allow you to specify the camera used to capture the content.
Mode
Undistort removes the lens distortion to create a flattened image. Distort brings the original lens
distortion back into the image.
Edges
Determines how samples that fall outside the frame are treated.
– Canvas: Pixels outside the frame are set to the default canvas color. In most cases, this is black
with no alpha.
– Duplicate: Pixels outside the frame are duplicated. This results in “smeared” edges but is useful
when, for example, applying a blur because in that case black pixels would result in the unwanted
blurring between the actual image and the black canvas.
Clipping Mode
– Domain: Retains all pixels that might be moved out of the frame for later re-distorting.
– Frame: Pixels moved outside the frame are discarded.
Camera Settings
The options known from the Camera 3D are duplicated here. They can either be set manually or
connected to an already existing Camera 3D.
Supersampling [HiQ]
Sets the number of samples used to determine each destination pixel. As always, higher
supersampling leads to higher render times. 1×1 bilinear is usually of sufficient quality, but with high
lens distortion near the edges of the lens, there are noticeable differences to higher settings.
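The relationship between Distort and Undistort can be illustrated with a simple one-term radial model (an illustration only; the node's actual camera-based model is more elaborate):

```python
def distort(x, y, k1, cx=0.5, cy=0.5):
    """One-term radial distortion: scale a point's offset from the
    optical center by (1 + k1 * r^2)."""
    dx, dy = x - cx, y - cy
    s = 1.0 + k1 * (dx * dx + dy * dy)
    return cx + dx * s, cy + dy * s

def undistort(x, y, k1, cx=0.5, cy=0.5, iterations=20):
    """Numerically invert the mapping by fixed-point iteration."""
    ux, uy = x, y
    for _ in range(iterations):
        dx, dy = distort(ux, uy, k1, cx, cy)
        ux, uy = ux + (x - dx), uy + (y - dy)
    return ux, uy

# Undistorting and then re-distorting returns the original point, which
# is the round trip used when compositing flat elements into a
# distorted plate.
ux, uy = undistort(0.8, 0.65, -0.3)
print(distort(ux, uy, -0.3))  # ~(0.8, 0.65)
```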
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Perspective Positioner node are used to connect a 2D image and an effect
mask, which can be used to limit the transformed area.
– Input: The orange input is used for the primary 2D image that is transformed.
– Effect Mask: The blue input is for a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the
transform to only those pixels within the mask. An effects mask is applied to the tool after
the tool is processed.
Inspector
Mapping Type
The Mapping Type menu is used to select the type of transform used to distort the image. Bi-Linear is
available for support of older projects. Leaving this on Perspective is strongly suggested since the
Perspective setting maps the real world more accurately.
Corners X and Y
There are the four control points of the Perspective Positioner. Interactively drag these in the viewers
to position each corner of the image. You can refine their position using the Top, Bottom, Left, and
Right controls in the Inspector.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
There are three inputs on the Vector Distort node for the primary 2D image, the distort image with
vector channels, and an effect mask.
– Input: The orange image input is a required connection for the primary image you wish to
distort. If this image has vector channels, they are used in the distortion.
– Distort: The green input is an optional distort image input used to distort the background image
based on vector channels. Once connected, it overrides vector channels in the input image.
– Effect Mask: The optional blue effect mask input expects a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the displacement to only those pixels within the mask. An effects mask is applied to
the tool after it is processed.
Inspector
Controls Tab
The Controls tab contains parameters for selecting vector channels and controlling how much
distortion they apply to an image.
Scale
Use the Scale slider to apply a multiplier to the values of the distortion reference image.
Center Bias
Use the Center Bias slider to shift or nudge the distortion along a given axis.
Glow
Use this slider to add a glow to the result of the vector distortion.
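The core operation can be sketched as offsetting each pixel by its vector channels (a simplified formula; the per-axis bias arguments are a hypothetical reading of Center Bias, for illustration only):

```python
def vector_distort(x, y, vx, vy, scale, bias_x=0.0, bias_y=0.0):
    """Offset a pixel by its vector channels multiplied by Scale; the
    bias arguments nudge the whole distortion along an axis."""
    return x + vx * scale + bias_x, y + vy * scale + bias_y

# A vector of (0.25, -0.25) at unit scale offsets the pixel along
# both axes.
print(vector_distort(0.5, 0.5, 0.25, -0.25, 1.0))  # (0.75, 0.25)
```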
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Vortex [VTX]
Inputs
There are two inputs on the Vortex node for the primary 2D image and the effect mask.
– Input: The orange image input is a required connection for the primary image you wish to swirl.
– Effect Mask: The optional blue effect mask input expects a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this
input limits the swirling vortex to only those pixels within the mask. An effects mask is applied
to the tool after it is processed.
Inspector
Controls Tab
The Controls tab contains parameters for adjusting the position, size, and strength of the Vortex effect.
Center X and Y
This control is used to position the center of the Vortex effect on the image. The default is 0.5, 0.5,
which positions the effect in the center of the image.
Size
Size changes the area affected by the Vortex. You can drag the circumference of the effect in the
viewer or use the Size slider.
Angle
Drag the rotation handle in the viewer or use the thumbwheel control to change the amount of rotation
in the Vortex. The higher the angle value, the greater the swirling effect.
Power
Increasing the Power slider makes the Vortex smaller but tighter, effectively concentrating the effect
inside the given image area.
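The swirl can be sketched as a rotation whose angle falls off from the center to the edge of the affected area (an illustrative falloff curve, not Fusion's exact one):

```python
import math

def vortex(x, y, angle_deg, size, power=1.0, cx=0.5, cy=0.5):
    """Rotate a pixel around the center; the rotation fades to zero at
    the edge of the affected area, and Power tightens the swirl toward
    the center."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r >= size:
        return x, y  # outside the affected area
    falloff = (1.0 - r / size) ** power
    a = math.radians(angle_deg) * falloff
    c, s = math.cos(a), math.sin(a)
    return cx + dx * c - dy * s, cy + dx * s + dy * c

# The center itself does not move; points outside Size are untouched.
print(vortex(0.5, 0.5, 180, 0.4))  # (0.5, 0.5)
```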
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Warp category. The Settings
controls are even found on third-party Warp-type plug-in tools. The controls are consistent and work
the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
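The Blend control amounts to a per-channel linear interpolation between input and output; a minimal sketch for a single RGB pixel:

```python
def blend(original, processed, blend_value):
    """Per-channel linear mix between the tool's unprocessed input and
    its processed output; 0.0 returns the input, 1.0 the full effect."""
    return tuple(i + (o - i) * blend_value
                 for i, o in zip(original, processed))

# Halfway blend of an RGB pixel toward pure white:
print(blend((0.25, 0.5, 0.75), (1.0, 1.0, 1.0), 0.5))  # (0.625, 0.75, 0.875)
```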
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around the
edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels” in the Fusion Reference Manual or Chapter 79 in the
DaVinci Resolve Reference Manual.
Motion Blur
– Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
– Quality: Quality determines the number of samples used to create the blur. A quality setting of 2
will cause Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
– Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-
accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Modifiers
This chapter details the modifiers available in Fusion.
Contents
Modifiers 1470
Anim Curves 1470
Bézier Spline 1473
B-Spline 1474
Calculation 1475
CoordTransform Position 1477
Cubic Spline 1478
Expression 1478
From Image 1483
Gradient Color 1484
Key Stretcher Modifier 1486
MIDI Extractor 1486
Natural Cubic Spline 1490
Offset (Angle, Distance, Position) 1490
Path 1493
Perturb 1494
Probe 1496
Publish 1498
Resolve Parameter 1498
Shake 1499
Track 1501
Vector Result 1502
XY Path 1503
NOTE: Text3D and Text+ have additional text-specific modifiers, which are covered in their
nodes’ sections.
Anim Curves
The Animation Curves modifier (Anim Curves) is used to dynamically adjust the timing, values, and
acceleration of an animation, even if you decide to change the duration of a comp. Using this modifier
makes it infinitely easier to stretch or squish animations, create smooth motion, add bouncing
properties, or mirror animation curves without the complexity of manually adjusting splines.
– Input: This dial is only visible when Source is set to Custom. It is used to change the input
keyframe value.
– Curve: The Curve drop-down menu selects the interpolation method used between keyframes.
The three choices are: linear, easing, or custom.
– Linear: The default Linear interpolation method maintains a fixed, consistent acceleration
between keyframes.
– Easing: Displays interpolation menus for both the start of the curve (In) and the end of
the curve (Out).
– Custom: Opens a mini Spline Editor to customize the interpolation from the start of the
animation to the end.
– Mirror: Plays the animation forward, and after reaching the end, it returns to the starting value. This
causes the initial animation to be twice as fast, since the second half of the comp is used for the
reverse animation.
– Invert: Flips the animation curve upside-down so that the values start high and end low.
Scaling
The Scale parameters modify the animation values using relative adjustments.
– Scale: This number is a multiplier applied to the value of the keyframes. If the Scale value is 2 and
a keyframe has a value of 0, it remains 0. If the Scale value is 2 and a keyframe has a value of
10, the result is as if the keyframe is set to 20. This can be thought of as the ending value for the
animation. It is best to set this while viewing the last frame in the comp.
– Offset: The offset is added to the keyframe values and can be thought of as the starting value for
the animation. It is best to set this while viewing the first frame in the comp.
– Clip Low: Ensures the output value never dips below 0.0.
– Clip High: Ensures the output value never exceeds 1.0.
Once you create a Macro from this node tree and save it as a Transition template, you can apply it in
the Edit page Timeline. If you change the transition duration in the Edit page, the animation timing will
update appropriately.
TIP: To view the resulting animation curve in the Spline Editor, select the parameter name in
the Spline Editor’s header. The spline is updated as you change the controls.
Bézier Spline
The Bézier Spline is one of the animation modifiers in Fusion and is typically applied to numerical
values rather than point values. It is automatically applied when you keyframe a parameter or each
time you right-click a number field and select Animate.
Usage
You can add the Bézier Spline to the Spline Editor by right-clicking a number field and selecting
BezierSpline. Since this is the most common choice for animation splines, it is separated from the
Modify With menu for quicker access. Selecting BezierSpline from the menu adds a keyframe at the
current location and displays a Bézier Spline in the Spline Editor.
Unlike most modifiers, this modifier has no actual Controls tab in the Inspector. However, the Spline
Editor displays the Bézier Spline, and it can be controlled there. The Bézier Spline offers individual
control over each control point’s smoothness using Bézier handles. The smoothness is applied in
multiple ways:
– To make the control points smooth, select them, and press Shift-S. The handles can be used to
modify the smoothness further.
Ease In/Out
Traditional Ease In/Out can also be modified by using the number field virtual sliders in the Spline
Editor. Select the control points you want to modify, right-click, and select Ease In/Out... from the
contextual menu. Then use the number field virtual sliders to control the Ease In/Out numerically.
B-Spline
An alternative to the Bézier Spline, B-spline is another animation modifier in Fusion and is typically
applied to numerical values rather than point values. It is applied by right-clicking a parameter and
selecting Modify With > B-Spline.
Usage
B-Spline Editor
– This animation spline modifier has no actual Controls tab. However, the Spline Editor displays
the B-spline, and it can be controlled there. Notice that, though the actual value of the second
keyframe is 0, the value of the resulting spline is 0.33 due to the unique smoothing and weighting
algorithms of a B-spline.
– The weight can be modified by clicking a control point to select it, holding the W key, and
moving the mouse left and right to lower or increase the tension. This can also be done with
multiple control points selected simultaneously.
NOTE: The Expression modifier is essentially a more flexible version of the Calculation
modifier, with a single exception. It is far easier to manipulate the timing of the operands
provided in the Calculation modifier than it is to do so with an Expression.
Inspector
Calc Tab
The Calc tab includes two dials used for the connected parameter and value that gets mathematically
combined. The Operator menu selects how the Second Operand value combines with the
parameter’s value.
Operator
Select from the mathematical operations listed in this menu to determine how the two operands are
combined. Clicking the drop-down arrow opens the menu with the following options:
Example
The following example uses a calculation to apply blur to a Text node in inverse proportion to
the size of the text.
1 Open a new composition that starts on frame 0 and ends on frame 100.
2 At frame 0, add a Text+ node to the composition.
3 Enter a small amount of text and set the size to 0.05
4 Click the Keyframe button to the right of the Size slider to add a keyframe.
5 Move to frame 100 and set the Size value to 0.50.
6 Connect a Blur node after the Text+ node.
7 View the Blur node in one of the viewers.
To have the blur decrease in strength as the text gets bigger, a simple “pick whip”-style
parameter linking does not work. The controls cannot be directly connected together
because the values of the Text Size control are getting bigger instead of smaller.
8 Right-click the Blur Size and select Modify With > Calculation from the contextual menu.
This adds a Calculation modifier to the Blur node. At the top of the Inspector, a new set of
controls appears in the Modifiers tab while the Blur node is selected.
9 At the top of the Inspector, select the Modifiers tab (F11).
10 Right-click the First Operand slider and select Connect To > Text 1 > Size from the
contextual menu.
Although the Blur Size is now connected to the Text Size parameter, this connection isn’t
very useful. The maximum value of the Blur Size control is 0.5, which is hardly noticeable
as a blur.
11 Set the Operator drop-down menu to Multiply.
12 Set the Second Operand slider to 100.
Now the first operand is multiplied by 100, and adjusting the dial gives you a much
blurrier blur.
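The way the Operator menu combines the two operands can be sketched as follows. The function names are hypothetical; this mirrors the arithmetic of the example above, not Fusion's internals:

```python
def calculation(first_operand, second_operand, operator="Multiply"):
    """Sketch of how the Calculation modifier combines its two
    operands to produce the driven parameter's value."""
    ops = {
        "Add": lambda a, b: a + b,
        "Subtract": lambda a, b: a - b,
        "Multiply": lambda a, b: a * b,
        "Divide": lambda a, b: a / b,
    }
    return ops[operator](first_operand, second_operand)

# Text Size 0.05 multiplied by a Second Operand of 100 drives
# a Blur Size of 5.0, as in the example above.
print(round(calculation(0.05, 100), 3))  # 5.0
```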
CoordTransform Position
Because of the hierarchical nature of 3D in Fusion, the original position of an object in a 3D scene
often fails to indicate the current position of the object. For example, an image plane might initially
have a position at 1, 2, 1, but then be scaled, offset, and rotated by other tools further downstream in
the node tree, ending up with an absolute location of 10, 20, 5. This can complicate connecting an
object further downstream in the composition directly to the position of an upstream object. The
Coordinate Transform modifier can be added to any set of the XYZ coordinate controls to calculate the
current position of a given object at any point in the scene hierarchy. To add a Coordinate Transform
modifier, right-click the numeric input on any node, and select Modify With > CoordTransform Position
from the contextual menu.
Inspector
Controls Tab
The Controls tab has two fields for the target and scene input. The target is for the node containing the
original coordinates, while the scene input is used for the scene with the new coordinates.
Target Object
This control is connected to the 3D tool that produces the original coordinates to be transformed. To
connect a tool, drag the node from the Node Editor into the text edit control, or right-click the control
and select the tool from the contextual menu. It is also possible to type the tool’s name directly into
the control.
SubID
The SubID slider can be used to target an individual sub-element of certain types of geometry, such as
an individual character produced by a Text 3D tool or a specific copy created by a Duplicate 3D tool.
Cubic Spline
The Cubic Spline is another animation modifier in Fusion that is normally applied to numerical values
rather than point values. It can be applied by right-clicking a numerical control and selecting Modify
With > Natural Cubic Spline.
Usage
Being an animation spline, this modifier has no actual Controls tab. However, its effect can be seen
and influenced in the Spline Editor.
Expression
An expression is a variable or a mathematical calculation added to a parameter, rather than a straight
numeric value. You can add an expression to any parameter in Fusion, or you can add the Expression
modifier, which adds several tabs to the modifier Inspector. Adding this modifier to a parameter adds
the ability to manipulate that parameter based on any number of controls, either positional or value-
based. This modifier offers exceptional flexibility compared to the more limited Calculation or Offset
modifiers, but it is unable to access values from frames other than the current time.
The Expression modifier accepts nine value inputs and nine position inputs that are used as part of a
user-defined mathematical expression to output a value.
To add the Expression modifier to a parameter, right-click the parameter in the Inspector and choose
Modify With > Expression from the contextual menu. The type of value returned by the Expression
depends entirely on the type of control it is modifying.
When used with a value control (like a slider), the Expression in the Number Out tab is evaluated to
create the result. When used to modify a positional control (like Center), the Point Out tab controls
the result.
The Inspector’s Modifiers tab contains the controls for the Expression modifier, described below.
Controls Tab
This tab provides nine number controls and nine point controls. The values of the number controls can
be referred to in an expression as n1 through n9. The X-coordinate of each point control can be
referred to as p1x through p9x, while the Y-coordinate is p1y through p9y.
These values can be set manually, connected to other parameters, animated, and even connected to
other Expressions or Calculations.
Config Tab
A good expression is reused over and over again. As a result, it can be useful to provide more
descriptive names for each parameter or control and to hide the unused ones. The Config Tab of the
Expressions modifier can customize the visibility and name for each of the nine point and
number controls.
Random Seed
The Random Seed control sets the starting number for the Rand() function. The rand(x, y) function
produces a random value between X and Y, producing a new value for every frame. As long as the
setting of this Random Seed slider remains the same, the values produced at frame x are always the
same. Adjust the Seed slider to a new value to get a different value for that frame.
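The seeded, per-frame behavior described above can be sketched like this. The helper below is hypothetical (Fusion's actual generator is not documented here); it only illustrates that a fixed seed yields a repeatable value per frame:

```python
import random

def rand_at_frame(seed, frame, x, y):
    """Sketch of a seeded rand(x, y): for a fixed Random Seed, the
    value produced at a given frame is always the same; changing the
    seed yields a different sequence."""
    rng = random.Random(seed * 100003 + frame)  # one generator per frame
    return x + rng.random() * (y - x)

# Same seed and frame always produce the same value.
assert rand_at_frame(42, 10, 0.0, 1.0) == rand_at_frame(42, 10, 0.0, 1.0)
# A different seed produces a different value for that frame.
assert rand_at_frame(42, 10, 0.0, 1.0) != rand_at_frame(43, 10, 0.0, 1.0)
```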
– e: The value of e.
– dist(x1, y1, x2, y2): The distance between points (x1, y1) and (x2, y2).
– x + y: x plus y.
– x - y: x minus y.
– -x: (0.0 - x).
– x * y: x multiplied by y.
– x / y: x divided by y.
– x & y: 1.0 if both x and y are not 0.0, otherwise 0.0.
– x && y: 1.0 if both x and y are not 0.0, otherwise 0.0 (identical to above).
– x | y: 1.0 if either x or y (or both) are not 0.0, otherwise 0.0.
– x || y: 1.0 if either x or y (or both) are not 0.0, otherwise 0.0 (identical to above).
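For reference, a few of these operators can be mirrored in ordinary code. This is a hypothetical Python sketch of the same math, not Fusion's expression engine:

```python
import math

def dist(x1, y1, x2, y2):
    """Distance between points (x1, y1) and (x2, y2)."""
    return math.hypot(x2 - x1, y2 - y1)

def logical_and(x, y):
    """x && y: 1.0 if both operands are non-zero, otherwise 0.0."""
    return 1.0 if (x != 0.0 and y != 0.0) else 0.0

def logical_or(x, y):
    """x || y: 1.0 if either operand is non-zero, otherwise 0.0."""
    return 1.0 if (x != 0.0 or y != 0.0) else 0.0

print(dist(0, 0, 3, 4))        # 5.0
print(logical_and(2.0, 0.0))   # 0.0
print(logical_or(2.0, 0.0))    # 1.0
```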
Example 1
To make a numeric control equal to the Y value of a motion path, add an expression to the
desired target control and connect the Path to Point In 1. Enter the formula:
p1y
Example 2
To make the result of the Expression’s Number Out be the largest of Number In 1 and Number
In 2, multiplied by the cosine of Number In 3, plus the X coordinate of Point In 1, enter
the formula:
max(n1, n2) * cos(n3) + p1x
Example 3
When the Expression modifier is applied to a positional control such as Center, the Point Out tab
provides separate X-Axis and Y-Axis expressions. For example, enter:
X-Axis Expression
n1
Y-Axis Expression
0.5 + sin(time*50) / 4
Render out a preview and look at the results. (Try this one with motion blur.)
From Image
The From Image modifier only works on gradients, like the gradient on a Background node. It takes
samples of an image along a user-definable line and creates a gradient from those samples.
Unlike other modifiers, From Image is not located in the Modify With menu. This modifier can be
applied by right-clicking a Gradient bar in the Inspector and selecting From Image.
Inspector
Controls Tab
The From Image controls tab in the Inspector is used to select the node that contains the image you
want to sample. It allows you to define the starting and ending point in the image as well as how many
color samples to use in creating the gradient.
Image to Scan
Drag the node you want to sample from the Node Editor and drop it into this box.
Edges
Edges determines how the edges of the image are treated when the sample line extends over the
actual frame of the image to be sampled.
Black
This outputs black for every point on the sample line outside of the image bounds.
Wrap
This wraps the edges of the line around the borders of the image.
Duplicate
This causes the edges of the image to be duplicated as best as possible, continuing the image beyond
its original size.
Color
This outputs a user-definable color instead of black for every point on the sample line outside of the
image bounds.
Example
The source image on the left shows the color selection line in red. The image on the right
shows the resulting gradient from that selection.
Gradient Color
The Gradient Color modifier allows you to create a customized gradient and map it into a specific time
range to control a value. Use the Start and End time controls to set the frames for the animation. If the
Start and End time values are set to 0, then the modifier returns the value at the starting point of the
gradient. You can use the Offset control to animate the gradient manually.
It can be applied by right-clicking a parameter and selecting Modify With > Gradient Color.
Controls Tab
The Controls tab consists of a Gradient bar where you add and adjust points of the gradient. Start
Time and End Time thumbwheels at the bottom of the Inspector determine the time range the gradient
is mapped into.
Gradient
The Gradient control consists of a bar where it is possible to add, modify, and remove points of the
gradient. Each point has its own color. It is possible to animate the color as well as the position of the
point. Furthermore, a From Image modifier can be applied to the gradient to evaluate it from an image.
Repeat
Defines how the left and right borders of the gradient are treated.
Gradient Offset
Allows you to pan through the gradient.
Time Controls
The Start Time and End Time thumbwheels determine the time range the gradient is mapped into. This
is set in frames. The same effect can be achieved by setting the Gradient to Once and animating the
offset thumbwheel.
MIDI Extractor
The MIDI Extractor modifier provides the ability to modify the value of a control using the values stored
in a MIDI file. This modifier relies on some knowledge of MIDI, which is beyond the scope of
this manual.
The value produced by the modifier is extracted from the MIDI event selected in the Mode menu. Each
mode can be trimmed so that only specific messages for that event are processed—for example, only
some notes are processed, while others are ignored. The value of the event can be further scaled or
modified by additional factors, such as Scale, Velocity, Attack, and Decay.
It can be applied by right-clicking a parameter and selecting Modify With > MIDI Extractor.
Controls Tab
The Controls tab is used to load the MIDI file, modify its timing, and determine which MIDI messages
and events trigger changes in the Fusion parameter.
MIDI File
This browser control is used to specify the MIDI file that is used as the input for the modifier.
Time Scale
Time Scale is used to specify the relationship between time as the MIDI file defines it and time as
Fusion defines it. A value of 1.0 plays the MIDI events at normal speed, 2.0 plays at double speed,
and so on.
Time Offset
Time Offset adjusts the sync between the MIDI file’s timing and Fusion’s timing. If there is an
unexpected delay, or if the MIDI file should start partway into or before some animation in Fusion, this
control can be used to offset the MIDI data as required.
Mode
This menu provides Beat, Note, Control Change, Poly AfterTouch, Channel AfterTouch, or Pitch Bend,
indicating from which MIDI event the values are being read. Beat mode is slightly different in that it
produces regular pulses based on the tempo of the MIDI file (including any tempo maps).
The Beat mode does not use any specific messages; it bases its event timing on the tempo map
contained in the MIDI file.
Combine Events
This menu selects what happens when multiple events occur at the same time. In Notes mode, this
can happen easily. For other events, this can happen if Multiple Channels are selected.
Use this to take the result from the most recent event to occur, the oldest event still happening, the
highest or lowest valued event, the average, sum, or the median of all events currently occurring.
Channels Tab
The Channels tab is used to select the Channels used in the modifier.
Channels
Channels checkboxes select which of the 16 channels in the MIDI file are actually considered for
events. This is a good way to single out a specific instrument from an arrangement.
About MIDI
A single MIDI interface allows 16 channels. Typically, these are assigned to different
instruments within a device or different devices. Usually, MIDI data is 7 bits, ranging from
0–127. In Fusion, this is represented as a value between 0 and 1 to be more consistent with how
Fusion handles other data.
There are numerous different MIDI messages and events, but the ones that are particularly
useful with this modifier are detailed below.
MIDI Messages
– Note On: This indicates that a note (on a specific channel) is being turned on, has a pitch
(0–127, with middle C being 60) and a Velocity (0–127, representing how fast the key or
strings or whatever was hit).
– Note Off: This indicates that a note (on a specific channel) is being turned off, has a pitch
(0–127, with middle C being 60) and a Velocity (0–127, representing how fast the key or
strings or whatever was released).
– Control Change: This message indicates that some controller has changed. There are
128 controllers (0–127), each of which has data from 0–127. Controllers are used to set
parameters such as Volume, Pan, amount of Reverb or Chorus, and generic things like foot
controllers or breath controllers.
MIDI Events
– Channel Aftertouch: This event defines that pressure is being applied to the keys (or
strings or whatever) during a note. This represents general, overall pressure for this
channel, so it simply uses a pressure value (0–127).
– Poly Aftertouch: This event defines that pressure is being applied to the keys (or strings or
whatever) during a note. It is specific to each particular note and therefore contains a note
number as well as a pressure value (0–127).
Pitch Bend
The Pitch Bend controller generally specifies the degree of pitch bending or variation applied
to the note. Because pitch bend values are transmitted as 14-bit values, this control has a
range between -1 and 1 and a correspondingly finer degree of resolution.
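The value ranges described above can be sketched as simple mappings. The helper names are hypothetical, and the exact pitch bend scaling is an assumption based on the 14-bit range (0–16383, center 8192):

```python
def midi_to_unit(value):
    """Map a 7-bit MIDI data byte (0-127) to Fusion's 0-1 range."""
    return value / 127.0

def pitch_bend_to_unit(value):
    """Map a 14-bit pitch bend value (0-16383, center 8192) to the
    -1 to 1 range used by this control (assumed scaling)."""
    return (value - 8192) / 8192.0

print(midi_to_unit(127))         # 1.0
print(pitch_bend_to_unit(8192))  # 0.0  (no bend)
```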
NOTE: Unlike other spline types, Cubic splines have no control handles. They attempt to
automatically provide a smooth curve through the control points.
Offset Angle
The Offset Angle modifier outputs a value between 0 and 360 that is based on the angle between
two positional controls. The Position and Offset parameters may be static, connected to other
positional parameters, or connected to paths of their own. All offsets use the same set of controls,
which behave differently depending on the offset type used. These controls are described below.
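The angle output can be sketched with standard trigonometry. This is a hypothetical illustration assuming the angle is measured counterclockwise from the positive X axis, not Fusion's actual code:

```python
import math

def offset_angle(position, offset):
    """Sketch of the Offset Angle output: the angle (0-360 degrees)
    from the Position control to the Offset control."""
    px, py = position
    ox, oy = offset
    return math.degrees(math.atan2(oy - py, ox - px)) % 360.0

# An offset directly above the position yields 90 degrees.
print(round(offset_angle((0.0, 0.0), (0.0, 1.0)), 1))  # 90.0
```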
Offset Distance
The Offset Distance modifier outputs a value that is based on the distance between two positional
controls. This modifier is capable of outputting a value based on a mathematical expression applied to
a position.
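The underlying calculation is the Euclidean distance between the two controls, which can be sketched as follows (a hypothetical helper, not Fusion's implementation):

```python
import math

def offset_distance(position, offset):
    """Sketch of the Offset Distance output: the distance between
    the Position and Offset onscreen controls."""
    px, py = position
    ox, oy = offset
    return math.hypot(ox - px, oy - py)

# Controls 3 apart horizontally and 4 vertically are 5 apart.
print(offset_distance((0.0, 0.0), (3.0, 4.0)))  # 5.0
```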
Inspector
Offset Tab
The Inspector for all three Offset modifiers is identical. The Offset tab includes Position and Offset
values as well as a Mode menu for selecting the mathematical operation performed by the
offset control.
Position X and Y
The X and Y values are used by the Position to generate the calculation.
Offset X and Y
The X and Y values are used by the Offset to generate the calculation.
Mode
The Mode menu includes mathematical operations performed by the Offset control. Available
options include:
Time Tab
Position Time Scale
This returns the value of the Position at the Time Scale specified (for example, 0.5 is the value at half
the current frame time).
Example
This is a simple comp to illustrate one potential use of offsets.
1 Create a new Comp 100 frames long.
2 Create a node tree consisting of a black background and a Text node foreground
connected to a Merge.
3 In the Text Layout tab, use the Center X control to animate the text from the left side of the
screen to the right.
4 Move to frame 0.
5 In the Text tab in the Inspector, right-click the Size control and select Modify With > Offset
Distance from the contextual menu.
This adds two onscreen controls: a crosshair for the position and an X control for the
offset. These onscreen controls represent the Position and Offset controls displayed in
the Modifiers tab.
The size of the text is now determined by the distance, or offset, between the two
onscreen controls.
6 Drag the X onscreen control in the viewer to see how the distance from the crosshair
changes the size of the text.
Path
The Path modifier uses two splines to control the animation of points: an onscreen motion path
(spatial) and a Time spline visible in the Spline Editor (temporal). To animate an object’s position
control using a Path, right-click the Position control either in the Inspector or in the viewer and select
Path from the contextual menu. This adds a keyframe at the current position. You can begin creating a
path by moving the playhead and dragging the center position control in the viewer. The Spline Editor
shows a displacement spline for editing the temporal value, or “acceleration,” of the path.
Controls
Controls Tab
The Controls tab for the path allows you to scale, reposition, and rotate the path. It also provides the
Displacement parameter, allowing you to control the acceleration of an object attached to the path.
Center
The actual center of the path. This can be modified and animated as well to move the entire
path around.
Size
The size of the path. Again, this allows for later modification of the animation.
Displacement
Every motion path has an associated Displacement spline in the Spline Editor. The Displacement
spline represents the position of the animated element along its path, represented as a value between
0.0 and 1.0. Displacement splines are used to control the speed or acceleration of an object’s
movement along the path.
To slow down, speed up, stop, or even reverse the motion of the control along the path, adjust the
values of the points for the path’s displacement in the Spline Editor or in the Inspector.
– A Displacement value of 0.0 in the Spline Editor indicates that the control is at the very beginning
of a path.
– A value of 1.0 indicates that the control is positioned at the end of the path.
– Each locked point on the motion path in the viewer has an associated point on the
Displacement spline.
– Unlocked points have a control point in the viewer but do not have a corresponding point on the
Displacement spline.
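The mapping from a Displacement value to a position can be sketched by treating the path as a polyline and measuring along its arc length. This is a hypothetical illustration of the 0.0–1.0 mapping, not Fusion's spline evaluation:

```python
import math

def point_at_displacement(path, t):
    """Sketch of how a Displacement value maps to a position:
    0.0 returns the start of the path, 1.0 the end, and values in
    between are measured along the path's arc length.
    `path` is a list of (x, y) points defining the motion path."""
    lengths = [math.dist(a, b) for a, b in zip(path, path[1:])]
    total = sum(lengths)
    target = max(0.0, min(1.0, t)) * total
    for (ax, ay), (bx, by), seg in zip(path, path[1:], lengths):
        if target <= seg:
            f = target / seg if seg else 0.0
            return (ax + (bx - ax) * f, ay + (by - ay) * f)
        target -= seg
    return path[-1]

square_path = [(0, 0), (1, 0), (1, 1)]
print(point_at_displacement(square_path, 0.0))  # (0.0, 0.0) start
print(point_at_displacement(square_path, 0.5))  # (1.0, 0.0) halfway
print(point_at_displacement(square_path, 1.0))  # (1.0, 1.0) end
```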
Heading Offset
Connecting to the Heading adjusts the auto orientation of the object along the path. For instance, if a
mask’s angle is connected to the path’s heading, the mask’s angle will adjust to follow the angle
of the path.
Perturb
The Perturb modifier generates smoothly varying random values for a given parameter based on
Perlin noise. It can be used to add jitter, shake, or wobble to any animatable parameter, even if the
parameter is already animated. Its results are similar to those of the Shake modifier, although it uses a
different set of controls that may be more intuitive. Unlike other random modifiers, you can apply the
Perturb modifier to polylines, shapes, grid meshes, and even color gradients.
For example, to add camera shake to an existing path, right-click the crosshair and choose Insert >
Perturb, and then adjust the Strength down to suit. Alternatively, right-clicking the path’s “Right-click
here for shape animation” label at the bottom of the Inspector lets you apply perturb to the path’s
polyline. This works best if the polyline has many points—for example, if it has been tracked or
hand-drawn with the Draw Append pencil tool. A third usage option is to use the Insert contextual
menu to insert the modifier onto the Displacement control. This causes the motion along the path to
jitter back and forth without actually leaving the path.
Inspector
Controls Tab
The Controls tab for Perturb is mainly used for controlling the Strength, Wobble, and Speed
parameters of the random jitter.
Value
The content of this control depends on what type of control the modifier was applied to. If the Perturb
modifier was added to a basic Slider control, the Value is a slider. If it was added to a Gradient control,
then a Gradient control is displayed here. Use the control to set the default, or center value, for the
Perturb modifier to work on.
Phase
(Polylines and meshes only) Animating this can be used to move the ripple of a polyline or mesh along
itself, from end to end. The effect can be most clearly seen when Speed is set to 0.0.
Strength
Use this control to adjust the strength of the Perturb modifier’s output, or its maximum variation from
the primary value specified above.
Wobble
Use the Wobble control to determine how smooth the resulting values are. Less wobble implies a
smoother transition between values, while more wobble produces less predictable results.
Speed
Increasing the Speed slider value speeds up the rate at which the value changes. This can increase
the apparent wobbliness in a more predictable fashion than the Wobble control and make the jitter
more frantic or languorous in nature.
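The interplay of Strength and Speed can be sketched with simple value noise. This is an assumption-laden illustration (cosine-interpolated value noise, hypothetical function names), not Fusion's actual Perlin implementation:

```python
import math
import random

def perturb(center, strength, speed, frame, seed=0):
    """Sketch of a Perturb-style output: smoothly varying random
    values around a center value.  Random samples sit at regular
    intervals and are blended with cosine interpolation, so a
    higher Speed moves through the samples faster."""
    def sample(i):
        # Deterministic random sample in -1..1 for lattice point i.
        return random.Random(seed * 100003 + i).uniform(-1.0, 1.0)

    t = frame * speed
    i = math.floor(t)
    f = t - i
    blend = (1 - math.cos(f * math.pi)) / 2  # smooth 0-1 ramp
    noise = sample(i) * (1 - blend) + sample(i + 1) * blend
    return center + strength * noise

# With Strength 0.1 the output never strays more than 0.1 from center.
values = [perturb(0.5, 0.1, 0.05, frame) for frame in range(20)]
assert all(0.4 <= v <= 0.6 for v in values)
```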
Probe
The Probe modifier is one of the most versatile modifiers in Fusion. It allows you to control any numeric
parameter by the color or luminosity of a specific pixel or rectangular region of an image. Think of
driving the Brightness node by probing the pixel values of flickering lights in a shot, or measuring
graded LUTs to compare values.
It can be applied by right-clicking a parameter and selecting Modify With > Probe.
Inspector
Controls Tab
The Controls tab for the Probe modifier allows you to select the node to probe, define the channel
used to drive the parameter, and control the size of the probed area.
Image to Probe
Drag a node from the Node Editor to populate this field and identify the image to probe.
Luma
Once a Probe modifier is present somewhere in your comp, you can connect other nodes’ values to its
outputs as well. The Probe allows you to connect to its values individually:
– Result
– Red
– Green
– Blue
– Alpha
Position X Y
The position in the image from where the probe samples the values.
Probe Rectangle
By default, the Probe samples only the value of a single pixel at its position. By using the Probe
Rectangle mode, you can sample from a larger area of pixels based on the Evaluation method.
Evaluation
Sets how the pixels inside the rectangle are computed to generate the output value.
Options include:
– Average: All pixel values inside the rectangle are averaged.
– Minimum: The smallest value of all pixels inside the rectangle is used.
– Maximum: The highest value of all pixels inside the rectangle is used.
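The three evaluation modes can be sketched over a list of sampled pixel values (a hypothetical helper illustrating the math, not Fusion's sampling code):

```python
def evaluate_probe(pixels, method="Average"):
    """Sketch of the Probe Rectangle evaluation modes applied to a
    list of sampled pixel values."""
    if method == "Average":
        return sum(pixels) / len(pixels)
    if method == "Minimum":
        return min(pixels)
    if method == "Maximum":
        return max(pixels)
    raise ValueError(method)

samples = [0.25, 0.5, 0.75]
print(evaluate_probe(samples, "Average"))  # 0.5
print(evaluate_probe(samples, "Minimum"))  # 0.25
print(evaluate_probe(samples, "Maximum"))  # 0.75
```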
Value Tab
The Value tab controls the range or scale of the modifier adjustment, thereby adjusting the sensitivity
of the Probe.
Black Value
The value that is generated by the Probe if the probed area delivers the result set in Scale Input Black.
White Value
The value that is generated by the Probe if the probed area delivers the result set in Scale Input White.
Publish
Only parameters that are animated will be available from the Connect To menu. To connect to non-
animated parameters, you must Publish them first. Animated controls are automatically published,
whereas static controls must be published manually.
To publish a static control, right-click the control and select Publish from the contextual menu.
Controls Tab
The Controls tab shows the published control available for linking to other controls.
Published Value
The display of the published control depends on which control is published from which node.
Resolve Parameter
The Resolve Parameter Modifier is used when creating a transition template in Fusion for use in
DaVinci Resolve’s Edit page or Cut page. When building a transition in Fusion, the Resolve Parameter
modifier is added to any control you want to animate. The Resolve Parameter modifier automatically
animates the parameter for the duration of the transition, allowing you to trim the transition in the Edit
page or Cut page.
8 Quit and reopen DaVinci Resolve to update the list of transitions in the Effects Library.
9 On the Edit page, open the Effects Library. Navigate to Video Transitions > Fusion Transitions, and
the custom Fusion transition will be listed.
Shake
The Shake modifier is used to randomize a Position or Value control to create semi-random numeric
inputs. The resulting shake can be entirely random. The motion can also be smoothed for a more
gentle, organic feel.
To add the Shake modifier to a parameter, select Modify With > Shake from the parameter’s contextual
menu. The Shake modifier uses the following controls to achieve its effect. It can be applied by
right-clicking a parameter and selecting Modify With > Shake.
Inspector
Smoothness
This control is used to smooth the overall randomness of the Shake. The higher the value, the
smoother the motion appears. A value of zero generates completely random results, with no
smoothing applied.
Lock X/Y
This checkbox is used to unlock the X- and Y-axis, revealing independent slider controls for each axis.
Example
1 Create a new comp, and then add and view a Text node.
2 Type some text in the Text node.
3 In the viewer, right-click over the Center control of the text and choose Modify With >
Shake Position.
4 In the Inspector, select the Modifiers tab and set the smoothing to 5.0.
5 Set the Minimum to 0.1 and the Maximum to 0.9.
This adds some chaotic movement to the text. However, we can change this over time.
6 Go to frame 0 and in the Inspector click the Keyframe button to the right of both the
Minimum and the Maximum controls.
7 Go to frame 90 and adjust the Minimum to 0.45 and the Maximum to 0.55.
8 View the results.
Now, the text starts out by flying all over the screen and tightens in toward the center of
the screen as the comp plays.
Inspector
For an in-depth explanation of this node, see Chapter 57, “Tracker Nodes” in the Fusion Reference
Manual or Chapter 118 in the DaVinci Resolve Reference Manual.
Vector Result
The Vector Result modifier is used to offset positional controls, such as crosshairs, by distance and
angle. These can be static or animated values.
It can be applied by right-clicking a control and selecting Modify With > Vector.
Inspector
Controls Tab
Origin
This control is used to represent the position from which the vector’s distance and angle values
originate.
Distance
This slider control is used to determine the distance of the vector from the origin.
Angle
This thumbwheel control is used to determine the angle of the vector relative to the origin.
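The output position can be sketched with basic trigonometry. This is a hypothetical illustration assuming the angle is measured counterclockwise from the positive X axis, not Fusion's actual code:

```python
import math

def vector_result(origin, distance, angle_degrees):
    """Sketch of the Vector Result output: a position offset from
    the origin by a distance along the given angle."""
    ox, oy = origin
    a = math.radians(angle_degrees)
    return (ox + distance * math.cos(a), oy + distance * math.sin(a))

# An angle of 0 offsets purely along X; 90 degrees purely along Y.
x, y = vector_result((0.5, 0.5), 0.25, 0.0)
print(round(x, 3), round(y, 3))  # 0.75 0.5
```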
Example
1 Create a 100-frame comp.
2 Create a simple node tree consisting of a black background and a Text node foreground
connected to a Merge.
3 Enter some text in the Text node.
4 Select the Merge node.
5 In the viewer, right-click the Center control of the Merge and choose Modify With >
Vector Result.
This adds a crosshair onscreen control for the Vector distance and angle. The onscreen
control represents the Distance and Angle controls displayed in the Modifiers tab.
6 In the Modifiers tab of the Inspector, drag the Distance control to distance the text from
the Vector origin.
7 Drag the Angle thumbwheel to rotate the text around the Vector origin.
This is different from changing a pivot point, since the text itself is not rotating.
These points are animatable and can be connected to other controls.
8 In the Inspector, right-click the Origin control and choose a path to add a motion path
modifier to the Origin control.
9 Verify that the current frame is set to frame 0 (zero) and use the Origin controls in the
Inspector or drag the Vector Origin crosshair to the bottom-left corner of the screen.
10 On the Vector Angle thumbwheel, click the Keyframe button to animate this control.
11 Set the Angle thumbwheel to a value of 10.
12 Go to frame 100 and click at the top-left corner of the screen to move the Vector Origin
crosshair.
13 Set the Vector Angle thumbwheel to a value of 1000.
14 Play the comp to see the results.
This causes the text to orbit around the path just created.
XY Path
The XY Path type uses two separate splines for the position along the X-axis and for the position
along the Y-axis.
To animate a coordinate control using an XY path, right-click the control and select Modify With > XY
Path from the contextual menu.
At first glance, XY paths work like Displacement paths. To describe the path, change frames and
position the control where it should be on that frame, and then change frames again and move the
control to its new position. Fusion automatically interpolates between the points. The difference is that
no keyframes are created on the onscreen path.
Inspector
X Y Z Values
These reflect the position of the animated control using X, Y, and Z values.
Center
The actual center of the path. This can be modified and animated as well to move the entire
path around.
Size
The size of the path. Again, this allows for later modification of the animation.
Angle
The angle of the path. Again, this allows for later modification of the animation.
Heading Offset
If another control (for example, a mask’s Angle) is connected to the path’s heading, this control allows
for adding or subtracting from the calculated angle.
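The heading a path supplies is essentially its direction of travel at the current frame, and Heading Offset is simply added to that computed angle before it reaches the connected control. A simplified sketch in plain Python (a hypothetical illustration, not Fusion's exact computation):

```python
import math

def heading(p0, p1, offset_deg=0.0):
    """Direction of travel from point p0 to point p1 in degrees,
    plus a user offset -- the idea behind the Heading Offset control."""
    base = math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0]))
    return base + offset_deg

print(heading((0, 0), (1, 1)))        # ~ 45.0
print(heading((0, 0), (1, 1), -45))   # ~ 0.0
```

So if a mask's Angle is connected to the path's heading but the mask's artwork points 90 degrees off from the direction of travel, a Heading Offset of -90 corrects it without re-animating anything.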
Other Information
DaVinci Resolve 17
Regulatory Notices,
Safety Information
and Warranty
Contents
Regulatory Notices 1507
Safety Information 1509
Warranty 1510
Disposal of Waste Electrical and Electronic Equipment Within the European Union.
The symbol on the product indicates that this equipment must not be disposed of with other waste
materials. In order to dispose of your waste equipment, it must be handed over to a designated
collection point for recycling. The separate collection and recycling of your waste equipment at the
time of disposal will help conserve natural resources and ensure that it is recycled in a manner that
protects human health and the environment. For more information about where you can drop off your
waste equipment for recycling, please contact your local city recycling office or the dealer from whom
you purchased the product.
This equipment has been tested and found to comply with the limits for a Class A digital device,
pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection
against harmful interference when the equipment is operated in a commercial environment.
This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used
in accordance with the instructions, may cause harmful interference to radio communications.
Operation of this product in a residential area is likely to cause harmful interference, in which case the
user will be required to correct the interference at personal expense.
Operation is subject to the following two conditions:
1 This device may not cause harmful interference.
2 This device must accept any interference received,
including interference that may cause undesired operation.
Bluetooth®
The DaVinci Resolve Speed Editor is a Bluetooth wireless technology enabled product.
Contains transmitter module FCC ID: QOQBGM113
This equipment complies with FCC radiation exposure limits set forth for an uncontrolled environment.
Contains transmitter module IC: 5123A-BGM113
This device complies with Industry Canada’s license-exempt RSS standards and exception from routine
SAR evaluation limits given in RSS-102 Issue 5.
Certified for Japan, certificate number: 209-J00204. This equipment contains specified radio equipment
that has been certified to the technical regulation conformity certification under the radio law.
This module has certification in South Korea, KC certification number: MSIP‑CRM-BGT-BGM113
Certified for Mexico (NOM), for Bluetooth module manufactured by Silicon Labs, model number BGM113A
Includes transmitter module certified in Mexico IFT: RCBSIBG20-2560
Hereby, Blackmagic Design declares that the product (DaVinci Resolve Speed Editor) using wideband
transmission systems in the 2.4 GHz ISM band is in compliance with directive 2014/53/EU.
The full text of the EU declaration of conformity is available from [email protected]
Blackmagic Design recommends appointing a qualified and licensed electrician to install, test,
and commission this wiring system.
Blackmagic Design does not accept responsibility for the safety, reliability, damage or personal
injury caused to, or by, any third-party equipment fitted into the console.
For protection against electric shock, the equipment must be connected to a mains socket outlet
with a protective earth connection. In case of doubt contact a qualified electrician.
To reduce the risk of electric shock, do not expose this equipment to dripping or splashing.
Product is suitable for use in tropical locations with an ambient temperature of up to 40°C.
Ensure that adequate ventilation is provided around the product and that it is not restricted.
When rack mounting, ensure that the ventilation is not restricted by adjacent equipment.
No operator serviceable parts inside product. Refer servicing to your local Blackmagic Design
service center.
The DaVinci Resolve Speed Editor contains a single-cell lithium battery. Keep lithium batteries away
from all sources of heat, and do not use the product in temperatures greater than 40°C.
Use only at altitudes not more than 2000m above sea level.