Computer Graphics Assignment

Texture mapping is a technique used in computer graphics to add surface texture and detail to 3D models. It works by mapping pixel images called texture maps onto the surfaces of 3D geometry. Early techniques simply wrapped textures around objects, but modern methods use more complex mappings like normal mapping to simulate near-photorealistic detail. Texture maps store color and other data that is sampled and combined during rendering to add surface variation.

COMPUTER GRAPHICS

TEXTURE MAPPING
Texture mapping

 Texture mapping is a method for defining high-frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. The original technique was pioneered by Edwin Catmull in 1974.
 Texture mapping originally referred to diffuse
mapping, a method that simply mapped pixels from
a texture to a 3D surface ("wrapping" the image
around the object). In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, occlusion mapping, and many other variations on the technique (controlled by a materials system) has made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.
Texture Map

 A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles.
 They may have 1-3 dimensions, although 2
dimensions are most common for visible
surfaces. For use with modern hardware,
texture map data may be stored in swizzled or
tiled orderings to improve cache coherency.
Rendering APIs typically manage texture map
resources (which may be located in device
memory) as buffers or surfaces, and may allow
'render to texture' for additional effects such as
post processing or environment mapping.
 They usually contain RGB color data (stored as direct color, in compressed formats, or as indexed color), and sometimes an additional channel for alpha blending (RGBA), especially for billboards and decal overlay textures. It is possible to use the alpha channel (which may be convenient to store in formats parsed by hardware) for other uses such as specularity.
 Multiple texture maps (or channels) may be
combined for control over specularity,
normals, displacement, or subsurface
scattering e.g. for skin rendering.
 Multiple texture images may be combined in
texture atlases or array textures to reduce state
changes for modern hardware. (They may be
considered a modern evolution of tile map
graphics). Modern hardware often supports
cube map textures with multiple faces for
environment mapping.
Creation

 Texture maps may be acquired by scanning/digital photography, designed in image manipulation software such as GIMP or Photoshop, or painted onto 3D surfaces directly in a 3D paint tool such as Mudbox or ZBrush.
Texture Application
 This process is akin to applying patterned paper to a plain white
box. Every vertex in a polygon is assigned a texture coordinate
(known in the 2D case as UV coordinates).[8] This
may be done through explicit assignment of vertex attributes,
manually edited in a 3D modeling package through UV
unwrapping tools. It is also possible to associate a procedural
transformation from 3d space to texture space with the material.
This might be accomplished via planar projection or,
alternatively, cylindrical or spherical mapping. More complex
mappings may consider the distance along a surface to
minimize distortion. These coordinates are interpolated across
the faces of polygons to sample the texture map during
rendering. Textures may be repeated or mirrored to extend a
finite rectangular bitmap over a larger area, or they may have a
one-to-one unique "injective" mapping from every piece of a
surface (which is important for render mapping and light
mapping, also known as baking).
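The mechanics above (per-vertex texture coordinates interpolated across a face, then repeated or mirrored to cover a larger area) can be sketched in a few lines of Python. The function names `interp_uv` and `address` are illustrative only, not from any real API.

```python
def interp_uv(uv0, uv1, uv2, w0, w1, w2):
    """Interpolate per-vertex texture coordinates across a triangle
    using barycentric weights (w0 + w1 + w2 == 1)."""
    u = w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0]
    v = w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1]
    return (u, v)

def address(t, mode="repeat"):
    """Extend a finite texture: map a coordinate outside [0, 1) back in."""
    if mode == "repeat":          # tile the texture endlessly
        return t % 1.0
    if mode == "mirror":          # flip direction on every other tile
        p = t % 2.0
        return p if p < 1.0 else 2.0 - p
    raise ValueError(mode)
```

A one-to-one "injective" mapping, by contrast, would use the interpolated coordinates directly with no addressing step, since every surface point owns a unique region of the texture.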
Texture Space

 Texture mapping maps the model surface (or screen space during rasterization) into texture space; in this space, the texture map is visible in its undistorted form. UV unwrapping tools typically provide a view in texture space for manual editing of texture coordinates. Some rendering techniques such as subsurface scattering may be performed approximately by texture-space operations.
Multitexturing
 Multitexturing is the use of more than one texture at a time on a
polygon.[9] For instance, a light map texture may be used to
light a surface as an alternative to recalculating that lighting
every time the surface is rendered. Microtextures or detail
textures are used to add higher frequency details, and dirt maps
may add weathering and variation; this can greatly reduce the
apparent periodicity of repeating textures. Modern graphics may
use more than 10 layers, which are combined using shaders, for
greater fidelity. Another multitexture technique is bump
mapping, which allows a texture to directly control the facing
direction of a surface for the purposes of its lighting calculations;
it can give a very good appearance of a complex surface (such as
tree bark or rough concrete) that takes on lighting detail in
addition to the usual detailed coloring. Bump mapping has
become popular in recent video games, as graphics hardware has
become powerful enough to accommodate it in real time.
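The layering idea can be illustrated with a minimal per-channel combine: a base colour modulated by a light map, plus a detail texture centred at mid-grey so it only perturbs the result. This is a simplified fixed-function-style combine written for illustration, not any particular API's blend stage.

```python
def combine_layers(base_rgb, light_rgb, detail_rgb):
    """Combine three texture layers per channel: modulate the base colour
    by the light map, then add a mid-grey-centred detail texture."""
    out = []
    for b, l, d in zip(base_rgb, light_rgb, detail_rgb):
        c = b * l + (d - 0.5)          # 0.5 in the detail map = no change
        out.append(min(max(c, 0.0), 1.0))  # clamp to displayable range
    return out
```

Real engines express the same kind of combination in shaders, often across ten or more layers as the text notes.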
Texture Filtering

 The way that samples (e.g. when viewed as pixels on the screen) are calculated from the texels (texture pixels) is governed by texture filtering. The cheapest method is nearest-neighbour interpolation, but bilinear interpolation and trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate being outside the texture, it is either clamped or wrapped. Anisotropic filtering better eliminates directional artefacts when viewing textures from oblique viewing angles.
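The two cheapest filters can be sketched directly; this toy version treats the texture as a 2D list of scalar texels and clamps coordinates at the edges (one of the two out-of-range policies mentioned above).

```python
import math

def sample_nearest(tex, u, v):
    """Pick the single closest texel (cheapest, but blocky)."""
    h, w = len(tex), len(tex[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y][x]

def sample_bilinear(tex, u, v):
    """Blend the four surrounding texels (smoother, reduces jaggies).
    Coordinates outside the texture are clamped to the edge."""
    h, w = len(tex), len(tex[0])
    x = min(max(u * w - 0.5, 0.0), w - 1.0)
    y = min(max(v * h - 0.5, 0.0), h - 1.0)
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Trilinear filtering extends `sample_bilinear` by blending between two mipmap levels; anisotropic filtering takes several such taps along the direction of projection.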
Texture streaming

 Texture streaming is a means of using data streams for textures, where each texture is available in two or more different resolutions; which resolution is loaded into memory and used is determined by the draw distance from the viewer and how much memory is available for textures. Texture streaming allows a rendering engine to use low-resolution textures for objects far away from the viewer's camera, and resolve those into more detailed textures, read from a data source, as the point of view nears the objects.
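The selection step can be sketched as a distance-based level pick. The halving rule and `base_distance` threshold here are arbitrary assumptions for illustration; real engines weigh projected screen size and the available texture memory budget.

```python
import math

def pick_resolution(levels, distance, base_distance=10.0):
    """levels: available sizes from highest to lowest, e.g. [1024, 512, 256].
    Drop one level each time the draw distance doubles past base_distance."""
    if distance <= base_distance:
        return levels[0]                      # nearby: full resolution
    step = int(math.log2(distance / base_distance)) + 1
    return levels[min(step, len(levels) - 1)]  # far away: coarsest level
```

As the camera approaches an object, the engine would stream in the next level up and swap it in, which is what produces the "resolving" effect described above.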
Baking
 As an optimization, it is possible to render detail from a complex,
high-resolution model or expensive process (such as global
illumination) into a surface texture (possibly on a low-resolution
model). Baking is also known as render mapping. This technique is
most commonly used for light maps, but may also be used to
generate normal maps and displacement maps. Some computer
games (e.g. Messiah) have used this technique. The original Quake
software engine used on-the-fly baking to combine light maps and
colour maps ("surface caching").
 Baking can be used as a form of level of detail generation, where a
complex scene with many different elements and materials may be
approximated by a single element with a single texture, which is
then algorithmically reduced for lower rendering cost and fewer
draw calls. It is also used to take high-detail models from 3D
sculpting software and point cloud scanning and approximate them
with meshes more suitable for real-time rendering.
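The core of baking is precomputing an expensive per-texel function once and storing the result as a texture. A minimal sketch, with a simple vertical falloff standing in for a costly process such as global illumination:

```python
def bake_lightmap(width, height, light_fn):
    """Evaluate an expensive per-texel lighting function once, ahead of time.
    At render time the baked texture is simply sampled instead of recomputed."""
    return [[light_fn(x / width, y / height) for x in range(width)]
            for y in range(height)]

# Illustrative stand-in for an expensive lighting computation.
lightmap = bake_lightmap(4, 4, lambda u, v: round(1.0 - 0.5 * v, 3))
```

Quake's "surface caching" applied the same idea on the fly, combining the baked light map with the colour map into a cached composite surface.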
Rasterization algorithms
Various techniques have evolved in software and
hardware implementations. Each offers different
trade-offs in precision, versatility and performance.
Forward texture mapping
Some hardware systems, e.g. the Sega Saturn and the NV1, traverse texture coordinates directly, interpolating the projected position in screen space through texture space and splatting the texels into a frame buffer. (In the case of the NV1, quadratic interpolation was used, allowing curved rendering.) Sega provided tools for baking suitable per-quad texture tiles from UV-mapped models.
 This has the advantage that texture maps are read in a
simple linear fashion.
 Forward texture mapping may also sometimes produce
more natural looking results than affine texture
mapping if the primitives are aligned with prominent
texture directions (e.g. road markings or layers of
bricks). This provides a limited form of perspective
correction. However, perspective distortion is still
visible for primitives near the camera (e.g. the Saturn
port of Sega Rally exhibited texture-squashing artifacts
as nearby polygons were near clipped without UV
coordinates).
 This technique is not used in modern hardware because
UV coordinates have proved more versatile for
modelling and more consistent for clipping.
Inverse texture mapping

 Most approaches use inverse texture mapping, which traverses the rendering primitives in screen space whilst interpolating texture coordinates for sampling. This interpolation may be affine or perspective correct. One advantage is that each output pixel is guaranteed to only be traversed once; generally the source texture map data is stored in some lower bit-depth or compressed form whilst the frame buffer uses a higher bit-depth. Another is greater versatility for UV mapping. A texture cache becomes important for buffering reads, since the memory access pattern in texture space is more complex.
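The perspective-correct variant of this interpolation relies on the fact that u/z, v/z, and 1/z vary linearly in screen space even though u and v themselves do not. A sketch for a single screen-space edge:

```python
def perspective_correct_uv(uv0, z0, uv1, z1, t):
    """Perspective-correct interpolation between two projected vertices.
    t is the screen-space fraction (0..1). Interpolate u/z, v/z and 1/z
    linearly, then divide per sample to recover u and v."""
    inv_z  = (1 - t) / z0 + t / z1
    u_over = (1 - t) * uv0[0] / z0 + t * uv1[0] / z1
    v_over = (1 - t) * uv0[1] / z0 + t * uv1[1] / z1
    return (u_over / inv_z, v_over / inv_z)
```

Note that halfway across the screen (t = 0.5) the recovered texture coordinate is biased toward the nearer vertex, which is exactly the effect plain affine interpolation gets wrong.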
Affine texture mapping
 Affine texture mapping linearly interpolates
texture coordinates across a surface, and so is
the fastest form of texture mapping. Some
software and hardware (such as the original
PlayStation) project vertices in 3D space onto
the screen during rendering and linearly
interpolate the texture coordinates in screen
space between them ("inverse texture
mapping"). This may be done by incrementing
fixed point UV coordinates, or by an
incremental error algorithm akin to
Bresenham's line algorithm.
 In contrast to perpendicular polygons, this leads to noticeable distortion with perspective transformations (see figure: the checker box texture appears bent), especially for primitives near the camera. Such distortion may be reduced by subdividing the polygon into smaller ones.
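The fixed-point incrementing mentioned above reduces the inner loop to one add per pixel. A sketch in 16.16 fixed point, illustrative rather than a reproduction of any particular console's rasterizer:

```python
def affine_span(u0, v0, u1, v1, length):
    """Step fixed-point UVs across a scanline of `length` pixels,
    one add per pixel, in 16.16 fixed point."""
    FP = 1 << 16
    u, v = int(u0 * FP), int(v0 * FP)
    du = int((u1 - u0) * FP) // length   # constant per-pixel step
    dv = int((v1 - v0) * FP) // length
    coords = []
    for _ in range(length):
        coords.append((u >> 16, v >> 16))  # integer texel coordinates
        u += du
        v += dv
    return coords
```

Because `du` and `dv` are constant across the span, the result is exactly the linear (affine) interpolation the text describes, with no per-pixel division.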
Restricted camera rotation

 The Doom engine restricted the world to vertical walls and horizontal floors/ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would be a constant depth coordinate along a vertical line and the floors/ceilings would have a constant depth along a horizontal line. A fast affine mapping could be used along those lines because it would be correct. Some later renderers of this era simulated a small amount of camera pitch with shearing, which allowed the appearance of greater freedom whilst using the same rendering technique.
 Some engines were able to render texture-mapped height maps (e.g. Nova Logic's Voxel Space, and the engine for Outcast) via Bresenham-like incremental algorithms, producing the appearance of a texture-mapped landscape without the use of traditional geometric primitives.
Subdivision for perspective correction

 Every triangle can be further subdivided into groups of about 16 pixels in order to achieve two goals: first, keeping the arithmetic mill busy at all times; second, producing faster arithmetic results.
World space subdivision

 For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering and affine mapping is used on them. The reason this technique works is that the distortion of affine mapping becomes much less noticeable on smaller polygons. The Sony PlayStation made extensive use of this because it only supported affine mapping in hardware but had a relatively high triangle throughput compared to its peers.
Screen space subdivision
 Software renderers generally preferred screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation) and thus again reduce the overhead (also, affine texture mapping does not fit into the low number of registers of the x86 CPU; the 68000 or any RISC is much more suited).
 A different approach was taken for Quake, which would
calculate perspective correct coordinates only once every
16 pixels of a scanline and linearly interpolate between
them, effectively running at the speed of linear
interpolation because the perspective correct calculation
runs in parallel on the co-processor.[14] The polygons are rendered independently, hence it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant z, but the effort seems not to be worth it.
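The Quake-style scheme described above can be sketched as follows: an exact perspective divide only at every 16th pixel, with cheap linear interpolation in between. This is an illustrative reconstruction of the idea, not Quake's actual code.

```python
def quake_scanline(u_over_z0, v_over_z0, inv_z0,
                   u_over_z1, v_over_z1, inv_z1, length, step=16):
    """Perspective-correct UVs at every `step` pixels; linear in between."""
    def exact(t):  # true perspective-correct result at span fraction t
        iz = (1 - t) * inv_z0 + t * inv_z1
        u = ((1 - t) * u_over_z0 + t * u_over_z1) / iz
        v = ((1 - t) * v_over_z0 + t * v_over_z1) / iz
        return u, v

    out = []
    x = 0
    u_a, v_a = exact(0.0)
    while x < length:
        n = min(step, length - x)
        u_b, v_b = exact((x + n) / length)   # one divide per chunk
        for i in range(n):                   # cheap lerp inside the chunk
            f = i / n
            out.append((u_a + (u_b - u_a) * f, v_a + (v_b - v_a) * f))
        u_a, v_a = u_b, v_b
        x += n
    return out
```

On the Pentium the divide for the next chunk could overlap with the FPU while integer lerping proceeded, which is why the scheme ran at essentially the speed of linear interpolation.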
Other techniques

 Another technique was approximating the perspective with a faster calculation, such as a polynomial. Still another technique uses the 1/z value of the last two drawn pixels to linearly extrapolate the next value. The division is then done starting from those values so that only a small remainder has to be divided,[15] but the amount of bookkeeping makes this method too slow on most systems.
 Finally, the Build engine extended the constant distance
trick used for Doom by finding the line of constant
distance for arbitrary polygons and rendering along it.
Hardware implementations
 Texture mapping hardware was originally developed for simulation (e.g. as implemented in the Evans and Sutherland ESIG image generators), professional graphics workstations such as Silicon Graphics, and broadcast digital video effects machines such as the Ampex ADO, and later appeared in arcade cabinets, consumer video game consoles, and PC video cards in the mid-1990s. In flight simulation, texture mapping provided important motion cues.
[Figure: screen-space subdivision techniques. Top left: Quake-like; top right: bilinear; bottom left: const-z.]
 Modern graphics processing units (GPUs) provide specialised fixed-function units called texture samplers, or texture mapping units, to perform texture mapping, usually with trilinear filtering or better multi-tap anisotropic filtering, and hardware for decoding specific formats such as DXTn. As of 2016, texture mapping hardware is ubiquitous as most SoCs contain a suitable GPU.
 Some hardware combines texture mapping with hidden-
surface determination in tile based deferred rendering or
scanline rendering; such systems only fetch the visible
texels at the expense of using greater workspace for
transformed vertices. Most systems have settled on the
Z-buffering approach, which can still reduce the texture
mapping workload with front-to-back sorting.
Applications

Beyond 3D rendering, the availability of texture mapping hardware has inspired its use for accelerating other tasks:
 Tomography
It is possible to use texture mapping
hardware to accelerate both the reconstruction
of voxel data sets from tomographic scans, and
to visualize the results.
User interfaces
Many user interfaces use texture mapping to
accelerate animated transitions of screen
elements, e.g. Exposé in Mac OS X.
