Introduction to Color Imaging Science
Color imaging technology has become almost ubiquitous in modern life in the form of
color photography, color monitors, color printers, scanners, and digital cameras. This book
is a comprehensive guide to the scientific and engineering principles of color imaging.
It covers the physics of color and light, how the eye and physical devices capture color
images, how color is measured and calibrated, and how images are processed. It stresses
physical principles and includes a wealth of real-world examples. The book will be of value
to scientists and engineers in the color imaging industry and, with homework problems, can
also be used as a text for graduate courses on color imaging.

H SIEN -C HE L EE received his B.S. from National Taiwan University in 1973 and Ph.D. in
electrical engineering from Purdue University in 1981. He then worked for 18 years at
Kodak Research Laboratories in Rochester, New York. There he did research on digital
color image processing, color science, human color vision, medical imaging, and computer
vision. He is now Senior Vice President of Advanced Imaging at Foxlink Peripherals,
Inc., Fremont, California. With more than 20 years of research and product development
experience in imaging science, he has given many lectures and short courses on color
imaging, color science, and computer vision at various universities and research institutes.
He has published many technical papers and holds 14 US patents on inventions related to color
imaging science.
Introduction to
Color Imaging Science
HSIEN-CHE LEE

CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press


The Edinburgh Building, Cambridge CB2 2RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521843881

© Cambridge University Press 2005

This book is in copyright. Subject to statutory exception and to the provision of
relevant collective licensing agreements, no reproduction of any part may take place
without the written permission of Cambridge University Press.

First published in print format 2005

- ---- eBook (MyiLibrary)


- --- eBook (MyiLibrary)

- ---- hardback


- --- hardback

Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party internet websites referred to in this book, and does not
guarantee that any content on such websites is, or will remain, accurate or appropriate.
This book is dedicated with love and gratitude to my mother,
my wife Hui-Jung, and my daughter Joyce
for their many, many years of help, support, patience, and understanding
Contents

Preface xix

1 Introduction 1
1.1 What is color imaging science? 1
1.2 Overview of the book 2
1.2.1 Measurement of light and color 2
1.2.2 Optical image formation 3
1.2.3 In the eye of the beholder 4
1.2.4 Tools for color imaging 5
1.2.5 Color image acquisition and display 5
1.2.6 Image quality and image processing 6
1.3 The International System of Units (SI) 6
1.4 General bibliography and guide to the literatures 8
1.5 Problems 12

2 Light 13
2.1 What is light? 13
2.2 Wave trains of finite length 15
2.3 Coherence 15
2.3.1 Temporal coherence 16
2.3.2 Spatial coherence 17
2.4 Polarization 20
2.4.1 Representations of polarization 20
2.4.2 Stokes parameters 23
2.4.3 The Mueller matrix 26
2.4.4 The interference of polarized light 28
2.5 Problems 28

3 Radiometry 29
3.1 Concepts and definitions 29
3.2 Spectral radiometry 39
3.3 The International Lighting Vocabulary 40
3.4 Radiance theorem 40
3.5 Integrating cavities 42

3.6 Blackbody radiation 43
3.6.1 Planck’s radiation law 44
3.6.2 Blackbody chromaticity loci of narrow-band systems 46
3.7 Problems 47

4 Photometry 49
4.1 Brightness matching and photometry 49
4.2 The spectral luminous efficiency functions 52
4.3 Photometric quantities 54
4.4 Photometry in imaging applications 58
4.4.1 Exposure value (EV) 59
4.4.2 Guide number 59
4.4.3 Additive system of photographic exposure (APEX) 61
4.5 Problems 62

5 Light–matter interaction 63
5.1 Light, energy, and electromagnetic waves 63
5.2 Physical properties of matter 64
5.3 Light and matter 66
5.3.1 Optical properties of matter 67
5.3.2 Light wave propagation in media 69
5.3.3 Optical dispersion in matter 72
5.3.4 Quantum mechanics and optical dispersion 76
5.4 Light propagation across material boundaries 76
5.4.1 Reflection and refraction 77
5.4.2 Scattering 81
5.4.3 Transmission and absorption 83
5.4.4 Diffraction 84
5.5 Problems 87

6 Colorimetry 89
6.1 Colorimetry and its empirical foundations 89
6.2 The receptor-level theory of color matching 90
6.3 Color matching experiments 93
6.4 Transformation between two sets of primaries 95
6.5 The CIE 1931 Standard Colorimetric Observer (2°) 97
6.6 The CIE 1964 Supplementary Standard Colorimetric Observer (10°) 102
6.7 Calculation of tristimulus values 104
6.8 Some mathematical relations of colorimetric quantities 104
6.9 Cautions on the use of colorimetric data 106
6.10 Color differences and uniform color spaces 107
6.10.1 CIE 1976 UCS diagram 109
6.10.2 CIELUV color space 110
6.10.3 CIELAB color space 111
6.10.4 The CIE 1994 color-difference model (CIE94) 113
6.10.5 CIE2000 color-difference formula: CIEDE2000 113
6.11 CIE terms 115
6.12 The CIE standard light sources and illuminants 116
6.13 Illuminating and viewing conditions 119
6.14 The vector space formulation of color calculations 121
6.15 Applications of colorimetry 124
6.15.1 The NTSC color signals 124
6.15.2 Computer graphics 126
6.15.3 Digital color image processing 127
6.16 Default color space for electronic imaging: sRGB 128
6.17 Problems 130

7 Light sources 132


7.1 Natural sources 132
7.1.1 Sunlight and skylight 132
7.1.2 Moonlight 135
7.1.3 Starlight 136
7.2 Artificial sources: lamps 137
7.2.1 Incandescent lamps 137
7.2.2 Fluorescent lamps 139
7.2.3 Electronic flash lamps 140
7.2.4 Mercury lamps, sodium lamps, and metal halide lamps 141
7.2.5 Light-emitting diodes (LEDs) 141
7.3 Color-rendering index 142
7.4 Problems 144

8 Scene physics 145


8.1 Introduction 145
8.2 General description of light reflection 145
8.2.1 The bidirectional reflectance distribution function (BRDF) 147
8.2.2 Interface reflection 150
8.2.3 Body reflection 158
8.2.4 Empirical surface reflection models 160
8.3 Radiative transfer theory and colorant formulation 164
8.3.1 Transparent media 164
8.3.2 Turbid media 167
8.4 Causes of color 173
8.4.1 Selective absorption 174
8.4.2 Scattering 175
8.4.3 Interference 175
8.4.4 Dispersion 175
8.5 Common materials 175
8.5.1 Water 175
8.5.2 Metals 176
8.5.3 Minerals 176
8.5.4 Ceramics and cements 176
8.5.5 Glass 178
8.5.6 Polymers 178
8.5.7 Plants 179
8.5.8 Animals 180
8.5.9 Humans 181
8.5.10 Pigments and dyes 185
8.5.11 Paints 186
8.5.12 Paper 187
8.5.13 Printing inks 188
8.6 Statistics of natural scenes 190
8.6.1 Colors tend to integrate to gray 190
8.6.2 Log luminance range is normally distributed 191
8.6.3 Log radiances tend to be normally distributed 191
8.6.4 Color variations span a low-dimensional space 191
8.6.5 Power spectra tend to fall off as (1/f)ⁿ 191
8.7 Problems 192

9 Optical image formation 193


9.1 Geometrical and physical optics 193
9.2 The basis of geometrical optics 194
9.3 Projective geometry 196
9.4 The geometrical theory of optical imaging 199
9.5 Conventions and terminology in optical imaging 204
9.6 Refraction at a spherical surface 206
9.6.1 On-axis imaging by a spherical surface 209
9.6.2 Off-axis imaging by a spherical surface 211
9.7 Matrix method for paraxial ray tracing 211
9.8 Matrix description of Gaussian optical imaging systems 219
9.9 Generalized ray tracing 221
9.10 Physical optics 222
9.10.1 Scalar and vector theories of diffraction 223
9.10.2 The field impulse response of an imaging system 226
9.10.3 The optical transfer function (OTF) 229
9.11 Problems 231

10 Lens aberrations and image irradiance 234


10.1 Introduction 234
10.2 Radiometry of imaging 235
10.2.1 On-axis image irradiances 237
10.2.2 Off-axis image irradiances 238
10.2.3 General image irradiances 238
10.3 Light distribution due to lens aberrations 239
10.3.1 Monochromatic aberrations 239
10.3.2 Depth of field 252
10.3.3 Sine condition 256
10.3.4 Chromatic aberration 257
10.4 Optical blur introduced by the camera 258
10.4.1 The real lens 258
10.4.2 The diaphragm 260
10.4.3 The shutter 260
10.4.4 Effects of object motion 263
10.5 Camera flare 267
10.6 Problems 269

11 Eye optics 271


11.1 Anatomy of the eye 271
11.2 Reduced eye and schematic eyes 274
11.3 Conversion between retinal distance and visual angle 278
11.4 Retinal illuminance 279
11.5 Depth of focus and depth of field 279
11.6 Focus error due to accommodation 280
11.7 Pupil size 282
11.8 Stiles–Crawford effect 282
11.9 Visual acuity 283
11.10 Measurements and empirical formulas of the eye MTF 284
11.11 Method of eye MTF calculation by van Meeteren 286
11.12 Problems 288

12 From retina to brain 289


12.1 The human visual system 290
12.2 The concepts of receptive field and channel 292
12.3 Parallel pathways and functional segregation 294
12.4 The retina 294
12.4.1 Photoreceptors: rods and cones 297
12.4.2 Horizontal cells 305
12.4.3 Bipolar cells 305
12.4.4 Amacrine cells 307
12.4.5 Ganglion cells 307
12.5 Lateral geniculate nucleus (LGN) 309
12.5.1 Color-opponent encoding 310
12.6 Visual areas in the human brain 312
12.6.1 Primary visual cortex 313
12.6.2 Other cortical areas 316
12.7 Visual perception and the parallel neural pathways 317
12.8 Problems 319

13 Visual psychophysics 321
13.1 Psychophysical measurements 322
13.1.1 Measurement scales 322
13.1.2 Psychometric methods 323
13.1.3 Data interpretation 324
13.2 Visual thresholds 326
13.2.1 Absolute thresholds 327
13.2.2 Contrast thresholds 327
13.2.3 Contrast sensitivity functions (CSFs) 329
13.2.4 Photochromatic interval 334
13.2.5 Thresholds of visual blur 334
13.3 Visual adaptation 334
13.3.1 Achromatic adaptation 334
13.3.2 Chromatic adaptation 335
13.4 Eye movements and visual perception 337
13.5 Perception of brightness and lightness 341
13.5.1 Brightness perception of a uniform visual field (ganzfeld) 342
13.5.2 Brightness perception of an isolated finite uniform area 343
13.5.3 Brightness perception of two adjacent uniform areas 345
13.5.4 Brightness and lightness perception depends on the perceived
spatial layout 347
13.6 Trichromatic and opponent-process theories 347
13.7 Some visual phenomena 349
13.7.1 Brilliance as a separate perceptual attribute 349
13.7.2 Simultaneous perception of illumination and objects 351
13.7.3 Afterimages 351
13.7.4 The Mach band 352
13.7.5 The Chevreul effect 353
13.7.6 Hermann–Hering grids 353
13.7.7 The Craik–O’Brien–Cornsweet effect 354
13.7.8 Simultaneous contrast and successive contrast 354
13.7.9 Assimilation 355
13.7.10 Subjective (illusory) contours 355
13.7.11 The Bezold–Brücke effect 356
13.7.12 The Helmholtz–Kohlrausch effect 356
13.7.13 The Abney effect 356
13.7.14 The McCollough effect 357
13.7.15 The Stiles–Crawford effect 357
13.7.16 Small field tritanopia 357
13.7.17 The oblique effect 357
13.8 Problems 357

14 Color order systems 359


14.1 Introduction 359
14.2 The Ostwald system 360
14.2.1 The Ostwald color order system 361
14.2.2 The Ostwald color atlas 362
14.3 The Munsell system 362
14.3.1 The Munsell color order system 362
14.3.2 The Munsell color atlas 363
14.4 The NCS 365
14.4.1 The NCS color order system 365
14.4.2 The NCS color atlas 366
14.5 The Optical Society of America (OSA) color system 366
14.5.1 The OSA color order system 366
14.5.2 The OSA color atlas 367
14.6 Color harmony 367
14.7 Problems 368

15 Color measurement 369


15.1 Spectral measurements 369
15.1.1 Spectroradiometer 369
15.1.2 Spectrophotometer 371
15.1.3 Factors to consider 371
15.2 Gonioreflectometers 372
15.3 Measurements with colorimetric filters 373
15.4 Computation of tristimulus values from spectral data 374
15.5 Density measurements 374
15.5.1 Reflection density, Dρ and DR 376
15.5.2 Transmission density 378
15.6 Error analysis in calibration measurements 381
15.6.1 Error estimation 381
15.6.2 Propagation of errors 382
15.7 Expression of measurement uncertainty 384
15.8 Problems 385

16 Device calibration 387


16.1 Colorimetric calibration 388
16.1.1 Input calibration 388
16.1.2 Output calibration 390
16.1.3 Device model versus lookup tables 392
16.2 Computational tools for calibration 394
16.2.1 Interpolation 395
16.2.2 Tetrahedral interpolation 401
16.2.3 Regression and approximation 403
16.2.4 Constrained optimization 406
16.3 Spatial calibration 410
16.3.1 Resolution calibration 411
16.3.2 Line fitting on a digital image 413
16.4 Problems 414

17 Tone reproduction 415


17.1 Introduction 415
17.2 TRCs 417
17.3 The concept of reference white 419
17.4 Experimental studies of tone reproduction 420
17.4.1 Best tone reproduction depends on scene contents 422
17.4.2 Best tone reproduction depends on luminance levels 423
17.4.3 Best tone reproduction depends on viewing surrounds 423
17.4.4 Best tone reproduction renders good black 424
17.5 Tone reproduction criteria 425
17.5.1 Reproducing relative luminance 426
17.5.2 Reproducing relative brightness 427
17.5.3 Reproducing visual contrast 428
17.5.4 Reproducing maximum visible details 429
17.5.5 Preferred tone reproduction 431
17.6 Density balance in tone reproduction 431
17.7 Tone reproduction processes 432
17.8 Flare correction 437
17.9 Gamma correction 438
17.10 Problems 440

18 Color reproduction 442


18.1 Introduction 442
18.2 Additive and subtractive color reproduction 442
18.3 Objectives of color reproduction 443
18.3.1 Appearance color reproduction 444
18.3.2 Preferred color reproduction 444
18.4 Psychophysical considerations 446
18.4.1 The effect of the adaptation state 446
18.4.2 The effect of viewing surrounds 449
18.4.3 The effect of the method of presentation 450
18.5 Color balance 450
18.5.1 Problem formulations 451
18.5.2 Color cues 453
18.5.3 Color balance algorithms 454
18.6 Color appearance models 459
18.6.1 Color appearance attributes 460
18.6.2 Descriptions of the stimuli and the visual field 461
18.6.3 CIECAM97s 461
18.6.4 CIECAM02 and revision of CIECAM97s 465
18.7 Theoretical color gamut 468
18.8 Color gamut mapping 470
18.8.1 Selection of color space and metrics 471
18.8.2 Computing the device color gamut 472
18.8.3 Image-independent methods for color gamut mapping 472
18.9 Using more than three color channels 474
18.10 Color management systems 474
18.11 Problems 476

19 Color image acquisition 477


19.1 General considerations for system design and evaluation 477
19.1.1 Considerations for input spectral responsivities 478
19.1.2 Calibration, linearity, signal shaping, and quantization 479
19.1.3 Dynamic range and signal-to-noise ratio 479
19.2 Photographic films 480
19.2.1 The structure of a black-and-white film 480
19.2.2 The latent image 481
19.2.3 Film processing 481
19.2.4 Color photography 482
19.2.5 Subtractive color reproduction in photography 483
19.2.6 Color masking 484
19.2.7 Sensitometry and densitometry 485
19.3 Color images digitized from photographic films 486
19.3.1 The effective exposure MTF approach 487
19.3.2 The nonlinear model approach 488
19.3.3 Interimage effects 491
19.4 Film calibration 492
19.5 Solid-state sensors and CCD cameras 494
19.5.1 CCD devices 495
19.5.2 CCD sensor architectures 495
19.5.3 CCD noise characteristics 497
19.5.4 CMOS sensors 502
19.5.5 Exposure control for CCD and CMOS sensors 503
19.5.6 CCD/CMOS camera systems 504
19.5.7 CCD/CMOS camera calibrations 507
19.6 Scanners 512
19.6.1 Scanner performance and calibration 515
19.7 A worked example of 3 × 3 color correction matrix 515
19.8 Problems 521

20 Color image display 523


20.1 CRT monitors 523
20.1.1 Cathode current as a function of drive voltage 525
20.1.2 Conversion of electron motion energy into light 526
20.1.3 CRT phosphors and cathodoluminescence 527
20.1.4 CRT tone transfer curve 528
20.1.5 CRT colorimetry 529
20.2 LCDs 532
20.2.1 Properties of liquid crystals 532
20.2.2 The structures of LCDs and how they work 532
20.2.3 LCD calibration 536
20.3 PDPs 537
20.4 Electroluminescent displays 539
20.4.1 OLED and PLED 539
20.5 Printing technologies 540
20.5.1 Offset lithography 541
20.5.2 Letterpress 542
20.5.3 Gravure 542
20.5.4 Screen printing 543
20.5.5 Silver halide photography 543
20.5.6 Electrophotography (xerography) 545
20.5.7 Inkjet printing 546
20.5.8 Thermal printing 547
20.6 Half-toning 548
20.6.1 Photomechanical half-tone screens and screen angles 548
20.6.2 Screen ruling, addressability, resolution, and gray levels 549
20.6.3 Digital half-toning 550
20.7 Printer calibration 557
20.7.1 Calibration of RGB printers 558
20.7.2 Four-color printing 560
20.8 Problems 562

21 Image quality 564


21.1 Objective image quality evaluation 564
21.1.1 Detector efficiency 565
21.1.2 Spatial frequency analysis 565
21.1.3 Image noise 567
21.2 Subjective image quality evaluation 571
21.2.1 Contrast 572
21.2.2 Sharpness 573
21.2.3 Graininess and noise perception 574
21.2.4 Tonal reproduction 575
21.2.5 Color reproduction 576
21.2.6 Combined effects of different image attributes 577
21.2.7 Multi-dimensional modeling of image quality 578
21.3 Photographic space sampling 579
21.4 Factors to be considered in image quality evaluation 580
21.4.1 Observer screening 581
21.4.2 Planning of experiments 581
21.5 Image fidelity and difference evaluation 582
21.5.1 Perceptible color differences 583
21.5.2 Visible difference prediction 583
21.6 Problems 584

22 Basic concepts in color image processing 585


22.1 General considerations 585
22.2 Color spaces and signal representations 587
22.2.1 Signal characteristics 588
22.2.2 Noise statistics 590
22.2.3 System constraints 591
22.3 Color image segmentation 591
22.3.1 Color space for image segmentation 592
22.3.2 Comparison of linear and logarithmic spaces 593
22.3.3 Method for partitioning the color space 597
22.3.4 The distance metric 598
22.4 Color gradient 600
22.5 Color edge detection 601
22.5.1 Derivative of a color image 602
22.5.2 Statistics of noise in a boundary detector 603
22.5.3 Detection of a step boundary 606
22.6 Statistics of directional data 608
22.6.1 Representation and descriptive measures 608
22.6.2 Model distributions for directional data 609
22.7 Denoising 611

Appendix Extended tables 614


A.1 CIE 1931 color matching functions and corresponding chromaticities 614
A.2 CIE 1964 10-degree color matching functions 616
A.3 Cone fundamentals 618
A.4 Judd's modified V_M(λ) (CIE 1988) and scotopic V′(λ) (CIE 1951) 619
A.5 Standard illuminants 620
A.6 CIE daylight vectors 622
A.7 Pointer’s gamut of real surfaces 623

Glossary 625
References 635
Index 689
Preface

To understand the capturing, the processing, and the display of color images requires knowl-
edge of many disciplines, such as image formation, radiometry, colorimetry, psychophysics,
and color reproduction, that are not part of the traditional training for engineers. Yet, with
the advance of sensor, computing, and display technologies, engineers today often have to
deal with aspects of color imaging, some more frequently than others. This book is intended
as an introduction to color imaging science for engineers and scientists. It will be useful
for those who are preparing to work or are already working in the field of color imaging
or other fields that would benefit from the understanding of the fundamental processes of
color imaging.
The sound training of imaging scientists and engineers requires more than teaching
practical knowledge of color signal conversion, such as YIQ to RGB. It also has to impart a
good understanding of the physical, mathematical, and psychophysical principles underlying
the practice. Good understanding ensures correct usage of formulas and enables one to come
up with creative solutions to new problems. The major emphasis of this book, therefore,
is to elucidate the basic principles and processes of color imaging, rather than to compile
knowledge of all known systems and algorithms. Many applications are described, but they
serve mainly as examples of how the basic principles can be used in practice and where
compromises are made.
Color imaging science covers so many fields of research that it takes much more than
one book to discuss its various aspects in reasonable detail. There are excellent books on
optics, radiometry, photometry, colorimetry, color science, color vision, visual perception,
pigments, dyes, photography, image sensors, image displays, image quality, and graphic arts.
Indeed, the best way to understand the science of color imaging is to read books on each of
these topics. The obvious problem is the time and effort required for such an undertaking,
and this is the main motivation for writing this book. It extracts the essential information
from the diverse disciplines to present a concise introduction to the science of color imaging.
In doing so, I have made unavoidable personal choices as to what should be included. I have
covered most of the topics that I considered important for a basic understanding of color
imaging. Readers who want to know more about any topic are strongly encouraged to study
the books and articles cited in the reference list for further information.
I would like to thank Professor Thomas S. Huang of the University of Illinois for his
wonderful lectures and his suggestion that I write a book on color imaging. I would also like to
thank Professor Thomas W. Parks of Cornell University for his numerous suggestions on
how to improve the presentation of the material and for his help in constructing homework
problems for students. During the time he and I cotaught a course on color imaging science
at Cornell, I learned a lot from his many years of teaching experience. My career in imaging
science began under Mr. James S. Alkofer and Dr. Michael A. Kriss. They let me wander
around in the interesting world of color imaging under their experienced guidance. I appre-
ciate their encouragement, friendship, and wisdom very much. I am also very grateful to
my copy-editor, Maureen Storey, for her patient and meticulous editing of my manuscript.
During the preparation of this book, my wife took care of the family needs and all the
housework. Her smiles brightened my tired days and her lively description of her daily
activities kept me in touch with the real world. She loves taking pictures and her casual
comments on image quality serve as reality checks of all the theories I know. My book-
writing also required me to borrow many weekends from my daughter. Her witty and funny
remarks to comfort me on my ever-increasing time debt just made it more difficult for me
to figure out how much I owe her. Certain things cannot be quantified.
1 Introduction

1.1 What is color imaging science?

Color imaging science is the study of the formation, manipulation, display, and evaluation of
color images. Image formation includes the optical imaging process and the image sensing
and recording processes. The manipulation of images is most easily done through computers
in digital form or electronic circuits in analog form. Conventional image manipulation in
darkrooms accounts for only a very small fraction of the images manipulated daily. The
display of color images can use many different media, such as CRT monitors, photographic
prints, half-tone printing, and thermal dye-transfer prints, etc. The complete imaging chain
from capture, through image processing, to display involves many steps of degradation,
correction, enhancement, and compromise. The quality of the final reproduced images has
to be evaluated by very subjective human observers. Sometimes, the evaluation process
can be automated with a few objectively computable, quantitative measurements.
The complexity of color imaging science stems from the need to understand many
diverse fields of engineering, optics, physics, chemistry, and mathematics. Although it
is not required for us to be familiar with every part of the process in detail before we
can work in and contribute to the color imaging science field, it is often necessary for
us to have a general understanding of the entire imaging chain in order to avoid making
unrealistic assumptions in our work. For example, in digital image processing, a frequently
used technique is histogram-equalization enhancement, in which an input image is mapped
through a tonal transformation curve such that the output image has a uniformly distributed
histogram of image values. However, the technique is often applied without knowing what
the units of the digital images really are. The same image can be digitized in terms of film
density or image exposure. Depending on which way it is digitized, the resulting histogram
can differ widely. Writing that an image has been processed by the “histogram-equalization”
technique without saying in which metric the histogram was equalized does not allow the
reader to draw any meaningful conclusion. If we have a general understanding of the practice
of image scanning and display, we can easily avoid this type of error. Sometimes, causes of
errors can be more subtle and it requires understanding of a different kind to avoid them. For
example, the geometrical theory of optical imaging tells us that the out-of-focus point spread
function is a uniform disk. However, if we understand that the fundamental assumption of
geometrical optics is not valid around the image focus area, we are more careful in using the
uniform disk as a blur model. In this case, basic knowledge of the assumptions underlying
various approximations made by theories lets us watch out for potential pitfalls. For these
reasons, this book aims at providing the needed general understanding of the entire color
imaging chain whilst making the various assumptions and approximations clear.
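
To make the histogram-equalization caveat above concrete, here is a minimal sketch (in
Python with NumPy; the synthetic image and the film-like density conversion are illustrative
assumptions, not data from any real device) showing that equalizing the same image in a
linear-exposure metric and in a log-density metric produces different tonal mappings:

import numpy as np

def equalize(img, bins=256):
    # Map pixel values through the normalized cumulative histogram,
    # which makes the output histogram approximately uniform.
    hist, edges = np.histogram(img, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    return np.interp(img, edges[:-1], cdf)

rng = np.random.default_rng(0)
exposure = rng.lognormal(mean=0.0, sigma=0.5, size=(64, 64))  # linear "exposure" image
density = -np.log10(exposure / exposure.max())                # film-like density metric

eq_linear = equalize(exposure)        # equalized in exposure units
eq_density = 1.0 - equalize(density)  # equalized in density units, re-inverted

# The two results differ, so reporting "histogram equalized" without
# naming the metric is ambiguous.
print(np.allclose(eq_linear, eq_density))  # prints False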

1.2 Overview of the book

This book is written based on the belief that for a beginning color imaging scientist or
engineer, a basic, broad understanding of the physical principles underlying every step in
the imaging chain is more useful than an accumulation of knowledge about details of various
techniques. Therefore, on the one hand, some readers may be surprised by many of the topics
in the book that are not traditionally covered by textbooks on color science and imaging
science. On the other hand, some readers may be disappointed that no comprehensive
surveys are provided for various algorithms or devices. If we truly understand the nature
of a problem, we can often come up with very creative and robust solutions after some
careful thinking. Otherwise, even if we know all the existing tricks and methods to solve a
problem, we may be at a loss when some critical constraints are changed. The following is
an overview of the book.

1.2.1 Measurement of light and color


Since color images are formed by light, we first describe, in Chapter 2, the nature and
properties of light as we understand them today. The history of how we came to achieve that
understanding is fascinating, but it would take up too much space to give a full account of the
intellectual struggle which involved some of the brightest minds in human history. However,
properties of light, such as the wave train, its quantum nature, coherence, and polarization,
come up in color imaging frequently enough that we have to at least understand these
basic concepts involved in characterizing the light. Before we explain in Chapter 5 how
light interacts with matter, we have to understand how we quantify and measure the energy
propagation of light.
The scientific basis of color imaging starts from the defining concepts of how light can
be measured. These are the topics of radiometry (Chapter 3) and photometry (Chapter 4). In
these two chapters, we describe how the flow of light energy can be quantified in a physical
system and how our “brightness” sensation can be related to the measurement. With proper
knowledge of radiometry, we then come back to study the light–matter interaction, which is
often very complex from a theoretical point of view and, in its full detail, not easy to com-
prehend. We, therefore, have to treat many aspects of the interaction phenomenologically.
Thus, in Chapter 5, we discuss dispersion, refraction, reflection, scattering, transmission,
absorption, and diffraction, basically following the traditional and historical development.
In Chapter 6, we cover the topic of colorimetry, which starts with the physical specifi-
cation of stimuli that our visual system perceives as colors. The word color, as we use it in
our daily conversation, implicitly refers to human color vision. In studying color imaging
systems, a spectrum of incident light can be specified with respect to any physical sensing
system that can sense more than one spectral component. Colorimetry can be established
for any such system. For example, when we wish to study how other animals or insects
see the world, separate colorimetric systems can be constructed according to their spectral
sensing mechanisms. From this perspective, we can appreciate how color imaging can be
thought of as a branch of science that relates different physical systems with the same basic
laws. For human color perception, the colorimetry system established by the Commission
Internationale de l’Eclairage (CIE) is the most widely accepted system today. Much of
Chapter 6 is devoted to explaining what the CIE system is and how it was derived. It is of
fundamental importance that we understand this system thoroughly.
Since the spectral composition of the light reflected from an object surface is the product
of the spectral composition of the light incident on the surface and the spectral reflectance
factor of the surface, the spectral characteristics of light sources directly (through direct
illumination) or indirectly (through mutual reflection) affect the spectral composition of the
optical image formed at the sensor(s) of a color imaging system. Therefore, it is necessary
for us to have a good knowledge of the nature of the various light sources that are involved
in color imaging applications. This is the subject of Chapter 7.
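
The wavelength-by-wavelength product just described is straightforward to state in code.
This minimal sketch (Python with NumPy) uses made-up illuminant and reflectance values
purely for illustration:

import numpy as np

wavelengths = np.arange(400, 701, 10)  # visible range, in nm

# Hypothetical spectra (placeholders, not measured data):
illuminant = np.ones_like(wavelengths, dtype=float)  # equal-energy source
reflectance = 0.2 + 0.6 * (wavelengths >= 600)       # a reddish surface

# The spectral composition of the reflected light is the product of the
# incident spectrum and the spectral reflectance factor of the surface.
reflected = illuminant * reflectance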
The colorful contents of natural scenes are the results of the complex interaction of light
and objects. The quantitative description of such interactions is called scene physics, and
is the subject of Chapter 8. It is important to note that such a quantitative description is a
very difficult problem to formulate. The concept of the bidirectional reflectance distribution
function (BRDF) is one formulation that has been widely accepted because of its practical
applicability and usefulness, although it certainly is not valid for every conceivable light–
matter interaction. Various models for reflective and transmissive materials are discussed
following this basic concept. In addition to color imaging applications, these models often
find use in color image synthesis, colorant formulation, the printing industry, and computer
vision. These fields are closely related to color imaging and color imaging research benefits
from ideas and results from them. The chapter also includes a general overview of the
physical and optical properties of some of the common materials that we encounter in color
imaging applications. The chapter ends with a summary of some statistical properties of
natural scenes. These properties are empirical, but they are useful for at least two purposes:
(1) Many practical color imaging problems, such as white balance and exposure determi-
nation, are open research problems that seem to have no provable, deterministic solutions.
Statistical properties of natural scenes can be used as a priori knowledge in any Bayesian
estimate. (2) The statistical properties reveal certain regularities in the natural scenes and
thus form a very rich source of research topics that will increase our understanding of how
the physical world behaves.
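
For orientation ahead of Chapter 8, the BRDF mentioned above is conventionally defined
as the ratio of the reflected radiance to the incident irradiance; in LaTeX notation, this
standard definition reads

f_r(\theta_i, \phi_i; \theta_r, \phi_r) =
    \frac{\mathrm{d}L_r(\theta_r, \phi_r)}{\mathrm{d}E_i(\theta_i, \phi_i)}
    \quad [\mathrm{sr}^{-1}].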

1.2.2 Optical image formation


The next component in the events of an imaging chain is the formation of the optical images
on the sensor. Within the visible wavelength range, optical imaging can be very well de-
scribed by treating light as rays, neglecting its wave and photon characteristics most of the
time. Such an approximation is called geometrical optics, in which Snell’s law plays the most
important role. However, when discontinuous boundaries exist, such as an aperture stop in a
camera, light’s wave nature (diffraction) becomes an important factor to consider. For exam-
ple, in geometrical optics, the image of an object point in an aberration-free system is always
assumed to be an ideal image point, independently of the aperture size. This is simply not
true. From electromagnetic wave theory, we can derive the so-called “diffraction-limited”
point spread function, which turns out to have a fairly complicated spatial distribution. The
description of the optical image formation through wave theory is called wave optics or
physical optics. Chapters 9 and 10 cover the basic concepts in both geometrical optics and
physical optics. The geometrical theory of optical imaging is quite general and, as far as
color imaging science is concerned, the most interesting result is that the mapping between
the object space and the image space is a projective transformation. This leads naturally to
the matrix method for paraxial ray tracing that allows us to do quick and simple calculations
of the basic characteristics of most optical imaging systems. The most fundamental tool for
analyzing the image quality of an imaging system is the optical transfer function (OTF). The
relationship between the OTF and the wavefront aberration can be derived from diffraction
theory, which is the foundation of physical optics for image formation.
In the sensing and recording of optical images, it is very important to calculate how much
light (image irradiance) is collected on the sensor plane, as a function of focal length, object
distance, and aperture size. In Chapter 10, the image irradiance equations, like the theory
of radiometry, are derived from geometrical optics. These equations are very important
for all practical optical imaging systems and should be understood well. A more detailed
description of the light distribution in the image space has to be derived from physical optics.
The results from geometrical optics and physical optics are compared using a case study of
the blur caused by defocus. The conclusion is that when the defocus is severe, the predictions
of both theories are quite similar. However, when the defocus is slight, the predictions are
very different. Physical optics even predicts, against our intuition, that the center of the point
spread function can become zero at a certain defocus distance. This rather counterintuitive
prediction has been confirmed by experiments.
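
As a preview of those equations, the widely quoted on-axis form for an object at a large
distance is E ≈ (π/4)L/N², where L is the scene radiance and N is the f-number. The sketch
below simply evaluates this standard form (the full derivation, including off-axis falloff
and magnification terms, comes in Chapter 10):

import math

def image_irradiance(scene_radiance, f_number):
    # On-axis camera equation for a distant object:
    # E = (pi / 4) * L / N**2, ignoring off-axis cos^4 falloff
    # and lens transmission losses.
    return (math.pi / 4.0) * scene_radiance / f_number**2

# Example: scene radiance 100 W m^-2 sr^-1 imaged at f/2.8.
print(image_irradiance(100.0, 2.8))  # about 10.0 W m^-2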

1.2.3 In the eye of the beholder


The beauty of color images is in the eye of the beholder. Thus, it is necessary for us to
understand the function and the characteristics of the human visual system, so that color
imaging systems can be efficiently optimized. We examine the optics of the eye in Chapter
11. Basic anatomical structures and optical models of the eye are described. Its optical
properties, such as the modulation transfer function (MTF), acuity, and accommodation,
are summarized. A computational model of the OTF of the eye as a function of viewing
parameters is then discussed, based on the wavefront aberrations of the pupil function.
This type of approach is very useful when we want to model the optical performance of
the eye under viewing conditions very different from the laboratory settings. In Chapter
12, we discuss how the visual signal is sensed and processed in our visual pathways from
retina to brain. The physiology and the anatomy of the visual system presented in this
chapter help us to understand the practical constraints and the general features of our visual
perception. In Chapter 13, we shift our attention to the basic issues in the psychophysics of
visual perception. Here we try to clarify one of the most confused areas in color imaging.
The basic approach is to describe how psychophysical experiments are conducted, so that
color imaging scientists and engineers will think more carefully about how to apply the
psychophysical data to their work. In this chapter, we have chosen to discuss the concepts
of brightness and lightness in some detail because they show us how complicated the
computation can be even for some things that sound intuitively obvious. We also discuss
at length the perception of images when they are stabilized on our retinas. The finding that
the perceived images quickly fade when they are stabilized on the observer’s retina clearly
demonstrates that the visual perception is more a task of reconstruction from visual features
than a job of mapping the optical images directly to our mind.
After we have studied the human visual system in Chapters 11–13, we are well prepared
to explore the basic ideas and theories behind the various color order systems in Chapter
14. We have delayed the discussion of this subject until now so that we can appreciate
the motivation, the limitations, and the difficulties involved in any color order system.
(For example, the concept of opponent color processes was developed to explain many
psychophysical observations, and therefore it also plays an important role in the Ostwald
and the NCS color order systems.) The idea of using a color atlas for everyday color
specification seems an intuitive thing to do, but from the perspective of colorimetry, a color
atlas may be a useless thing to have because the everyday illuminants are almost never as
specified by the atlas. It is the powerful color processing of our visual system that does all
the “auto” compensations that make a color atlas of any practical use.

1.2.4 Tools for color imaging


The practice of color imaging science requires physical and perceptual evaluation of color
images. The tools for physical evaluation are spectroradiometers, spectrophotometers, den-
sitometers, and other electrical and optical instruments. Chapter 15 covers physical color
measurement tools and Chapter 16 mathematical tools.
The tools for perceptual evaluation are less well developed, but they fall into the general
categories of tone reproduction (Chapter 17), color reproduction (Chapter 18), and image
quality (mostly ad hoc measures of sharpness, resolution, noise, and contrast, as discussed
in Chapter 21). There have been quite extensive studies of tone and color reproduction,
and some general principles can be systematically summarized. Good tone reproduction is
the number one requirement in the perceived image quality. In the past, research has been
focused on the tone reproduction characteristics of an imaging system as a whole. As digital
processing becomes common practice for most imaging applications, a general theory of
image-dependent tone reproduction is needed. On the subject of color reproduction, there
are fewer definitive studies. The major effort seems to be in working out a usable color
appearance model. Although the current model is incomplete because it does not explicitly
take spatial and temporal variations into consideration, it seems to produce reasonable
measures for color reproduction.

1.2.5 Color image acquisition and display


Tremendous progress has been made since the 1970s in the development of new image
acquisition and display devices. Photographic films and papers are still quite important, but
in many applications, they are being replaced by digital cameras, scanners, and many print-
ing/display devices. Chapter 19 discusses various color image acquisition media, devices,
and systems, while Chapter 20 covers those for color image display. Basic understanding
of the characteristics and working principles of the various input/output systems is very
important in the practice of color imaging science. Even if we do not directly work on a
particular device or medium, it is very likely we will encounter images that are acquired by
that device or are to be displayed on that medium. Often, the solution to a color imaging
problem for a given device may have been worked out for other devices. Understanding the
problems and technology behind one type of system often helps us to solve problems in
another type of system. A good example is the unsharp masking method for image enhance-
ment, which has long been practised in photographic dark rooms. The same technique is
now used extensively in digital imaging as well.
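
As an aside, a minimal digital form of unsharp masking (a sketch in Python with NumPy;
the 3 × 3 box blur and the default gain are arbitrary illustrative choices) adds a scaled
high-frequency residual back to the original image:

import numpy as np

def unsharp_mask(img, gain=1.0):
    # Blur with a 3x3 box filter (edges handled by replication),
    # then add the high-frequency residual back to the original.
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    return img + gain * (img - blurred)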

1.2.6 Image quality and image processing


Every imaging system, from its design to the finished product, involves many cost, schedule,
and performance trade-offs. System optimization and evaluation are often based on the
image quality requirements of the product. The metrics that are used to evaluate image
quality, therefore, play a very important role in the system design and testing processes.
In Chapter 21, we present the basic attributes and models for image quality. Objective
measures, such as camera/sensor speed, image noise, and spatial resolution, are quickly
becoming standardized. Subjective measures, such as contrast, sharpness, tone, and color
reproduction, are less well defined physically and rely more on psychophysical experiments
with human observers. It is necessary to understand what procedures can be used and what
precautions we need to take.
Digital image processing can be used to correct some deficiencies in current imaging
systems, such as noise reduction, image sharpening, and adaptive tone adjustment. Fur-
thermore, algorithms can be used to increase the system performance, such as autofocus,
autoexposure, and auto-white-balance. Basic algorithms in digital image processing are
very well covered in existing textbooks. Unfortunately, some algorithms are too slow to be
practical, and many others too fragile to be useful. Most good and fast algorithms tend to
be proprietary and not in the public domain. They also tend to be hardware specific.
Color image processing is not simply repeating the same processing for three monochro-
matic images, one for each color channel. There are many new concepts and new problems
that we do not encounter in gray scale images for two basic reasons: (1) color images are vec-
tor fields, and (2) our color vision has its own idiosyncrasy – color information is represented
and processed by our visual system in a specific way. In Chapter 22, we concentrate on only
a few selected concepts, such as color space design, vector gradient, color segmentation,
and statistics of directional data. These are concepts that have not received much discussion
in the literature, but they are very important to take into account in many practical applications.
It is easy to be misled if we have not thought about the various issues by ourselves first.

1.3 The International System of Units (SI)

In this book, we use the terminology and units in the International System of Units (SI) and
those recommended by the Commission Internationale de l'Eclairage (CIE). When there are
conflicts in symbols, we will use the CIE symbols for the units in radiometry, colorimetry,
and photometry. The International System of Units is described in many standard documents
(such as [942]) and the book by Ražnjević [787] provides good explanations. The CIE
system is well described in its publication: International Lighting Vocabulary [187]. The
International System of Units (SI) adopted by CGPM¹ is composed of basic units, derived
units, and supplementary units. There are seven basic units: meter [m] for length, kilogram
[kg] for mass, second [s] for time, ampere [A] for electric current, kelvin [K] for temperature,
candela [cd] for luminous intensity, and mole [mol] for amount of substance. The meter
is defined as the length of the path traveled by light in vacuum during a time interval of
1/299 792 458 second. The unit of plane angle, radian [rad], and the unit of solid angle,
steradian [sr], are two of the supplementary units. Since they are dimensionless derived
units, they do not need to be defined as a separate class of unit. Many SI derived units, such
as watt [W], volt [V], hertz [Hz], and joule [J], are quite familiar to us. Other SI derived
units, such as lux [lx] and lumen [lm], that we are going to use frequently in the book will be
defined in detail later. When the numerical values are too large or too small, the SI prefixes
in Table 1.1 can be used to form multiples and submultiples of SI units. It is a convention
that a grouping formed by a prefix symbol and a unit symbol is a new inseparable symbol.
Therefore, cm (centimeter) is a new symbol and can be raised to any power without using
parentheses. For example, 2 cm² = 2 (cm)². Convention also requires that unit symbols are
unaltered in the plural and are not followed by a period unless at the end of a sentence.

Table 1.1. SI prefixes (from [942])

Factor  Prefix  Symbol      Factor  Prefix  Symbol
10²⁴    yotta   Y           10⁻¹    deci    d
10²¹    zetta   Z           10⁻²    centi   c
10¹⁸    exa     E           10⁻³    milli   m
10¹⁵    peta    P           10⁻⁶    micro   µ
10¹²    tera    T           10⁻⁹    nano    n
10⁹     giga    G           10⁻¹²   pico    p
10⁶     mega    M           10⁻¹⁵   femto   f
10³     kilo    k           10⁻¹⁸   atto    a
10²     hecto   h           10⁻²¹   zepto   z
10¹     deca    da          10⁻²⁴   yocto   y
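
As a quick numerical check of the prefix convention stated above (a trivial Python sketch;
the variable names are ad hoc):

# A prefixed unit raised to a power scales by the prefix factor
# raised to the same power: 2 cm^2 = 2 * (1e-2 m)^2 = 2e-4 m^2.
centi = 1e-2
area_cm2 = 2.0
area_m2 = area_cm2 * centi**2
print(area_m2)  # 0.0002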
Unfortunately, there are many instances when one standard symbol could represent
more than one physical quantity. For example, E is used both for the electric field strength
[V m⁻¹] and for irradiance [W m⁻²]. Similarly, H is used for the magnetic field strength
[A m⁻¹] and also for exposure [J m⁻²]. Since this happens very frequently and since chang-
ing standard symbols for various physical quantities can create more confusion, we decided
that the best way to avoid ambiguity is to specify the units when it is not clear from the
context which physical quantity is used. This will free us to use the same, widely accepted,
standard symbol for different physical quantities in our discussion throughout the book. In
almost all cases, the context and the name of the physical quantity will make the meaning
clear. The physical constants shown in Table 1.2 will be useful in our later discussion.

¹ CGPM stands for Conférence Générale des Poids et Mesures. Its English translation is: General Conference
on Weights and Measures. It is the decision-making body of the Treaty of the Meter, signed in 1875. The decisions
by CGPM legally govern the international metrology system among all the countries that signed the Treaty.

Table 1.2. Some important physical constants used in this book

Quantity                   Symbol   Value
speed of light in vacuum   c        299 792 458 m s⁻¹
permeability of vacuum     µ₀       4π × 10⁻⁷ m kg s⁻² A⁻²
permittivity of vacuum     ε₀       1/(µ₀c²) m⁻³ kg⁻¹ s⁴ A²
Planck constant            h        6.626 075 5 × 10⁻³⁴ J s
Boltzmann constant         k        1.380 658 × 10⁻²³ J K⁻¹
electron volt              eV       1.602 177 33 × 10⁻¹⁹ J

1.4 General bibliography and guide to the literatures

Color imaging science cuts across many different disciplines. For further details on any
specific topic, the reader is encouraged to consult books and papers in that field. There are
many excellent books in each field. Since every person has a different style of learning and a
different background of training, it is difficult to recommend books that will be both useful
and interesting to everyone. A short bibliography is compiled here. No special criteria
have been used for selection and the list represents only a tiny fraction of the excellent
books available on the various topics. Hopefully, it may be useful for you. If you know
some experts in the field you are interested in, you should ask them for more personalized
recommendations.

Radiometry and photometry


Radiometry and the Detection of Optical Radiation, by R.W. Boyd [127].
Optical Radiation Measurements, Volume I, by F. Grum and R.J. Becherer [369].
Reliable Spectroradiometry, by H.J. Kostkowski [525].
Illumination Engineering – From Edison’s Lamp to the Laser, by J.B. Murdoch [687].
Self-Study Manual on Optical Radiation Measurements, edited by F.E. Nicodemus [714].
Geometrical Considerations and Nomenclature for Reflectance, by F.E. Nicodemus,
J.C. Richmond, J.J. Hsia, I.W. Ginsberg, and T. Limperis [715].
Thermal Radiation Heat Transfer, by R. Siegel and J.R. Howell [872].
Introduction to Radiometry, by W.L. Wolfe [1044].

Color science
Billmeyer and Saltzman’s Principles of Color Technology, 3rd edition, by R.S. Berns [104].
Principles of Color Technology, 2nd edition, by F.W. Billmeyer and M. Saltzman [111].
Measuring Colour, by R.W.G. Hunt [430].
Color: An Introduction to Practice and Principles, by R.G. Kuehni [539].
Color Measurement, by D.L. MacAdam [620].
Colour Physics for Industry, 2nd edition, edited by R. McDonald [653].
Handbook of Color Science, 2nd edition, edited by Nihon Shikisaigakkai (in Japanese)
[716].
The Science of Color, 2nd edition, edited by S.K. Shevell [863].
Industrial Color Testing: Fundamentals and Techniques, by H.G. Völz [989].
Color Science, 2nd edition, by G. Wyszecki and W.S. Stiles [1053].

Human visual perception


The Senses, edited by H.B. Barlow and J.D. Mollon [54].
Handbook of Perception and Human Performance, Volumes I and II, edited by K.R. Boff,
L. Kaufman, and J.P. Thomas [118, 119].
Vision, by P. Buser and M. Imbert [153].
The Visual Neurosciences, edited by L.M. Chalupa and J.S. Werner [167].
Visual Perception, by T.N. Cornsweet [215].
The Retina: An Approachable Part of the Brain, by J.E. Dowling [264].
An Introduction to Color, by R.M. Evans [286].
Color Vision: From Genes to Perception, edited by K.R. Gegenfurtner and L.T. Sharpe
[337].
Eye, Brain, and Vision, by D.H. Hubel [418].
Color Vision, by L.M. Hurvich [436].
Human Color Vision, 2nd edition, by P.K. Kaiser and R.M. Boynton [480].
Visual Science and Engineering: Models and Applications, by D.H. Kelly [500].
Vision: A Computational Investigation into the Human Representation and Processing of
Visual Information, by D. Marr [636].
Images of Mind, by M.I. Posner and M.E. Raichle [774].
The First Steps in Seeing, by R.W. Rodieck [802].
Visual Perception: The Neurophysiological Foundations, edited by L. Spillmann and J.S.
Werner [892].
Foundations of Vision, by B.A. Wandell [1006].
A Vision of the Brain, by S. Zeki [1066].

Optics
Handbook of Optics, Volumes I and II, edited by M. Bass [84].
Principles of Optics, 7th edition, by M. Born and E. Wolf [125].
Introduction to Matrix Methods in Optics, by A. Gerrard and J.M. Burch [341].
Statistical Optics, by J.W. Goodman [353].
Introduction to Fourier Optics, by J.W. Goodman [354].
Optics, 2nd edition, by E. Hecht [385].
Lens Design Fundamentals, by R. Kingslake [508].
Optics in Photography, by R. Kingslake [509].
Optics, 2nd edition, by M.V. Klein and T.E. Furtak [512].
Physiological Optics, by Y. Le Grand and S.G. El Hage [580].
Aberration Theory Made Simple, by V.N. Mahajan [626].
Optical Coherence and Quantum Optics, by L. Mandel and E. Wolf [631].
Geometrical Optics and Optical Design, by P. Mouroulis and J. Macdonald [682].
Introduction to Statistical Optics, by E.L. O'Neill [729].
Elements of Modern Optical Design, by D.C. O’Shea [733].
Applied Photographic Optics, 3rd edition, by S.F. Ray [786].
States, Waves and Photons: A Modern Introduction to Light, by J. W. Simmons and M.J.
Guttmann [877].
The Eye and Visual Optical Instruments, by G. Smith and D.A. Atchison [884].
Modern Optical Engineering, 3rd edition, by W.J. Smith [887].
The Optics of Rays, Wavefronts, and Caustics by O.N. Stavroudis [899].

Scene physics
Absorption and Scattering of Light by Small Particles, by C.F. Bohren and D.R. Huffman
[120].
The Cambridge Guide to the Material World, by R. Cotterill [217].
Light by R.W. Ditchburn [258].
Sensory Ecology, by D.B. Dusenbery [269].
Seeing the Light, by D.S. Falk, D.R. Brill, and D.G. Stork [297].
Color in Nature, by P.A. Farrant [301].
Color and Light in Nature, by D.K. Lynch and W. Livingston [615].
The Colour Science of Dyes and Pigments, by K. McLaren [654].
Light and Color in the Outdoors, by M. Minnaert [667].
The Physics and Chemistry of Color, by K. Nassau [693].
Light and Color, by R.D. Overheim and D.L. Wagner [736].
Introduction to Materials Science for Engineers, 4th edition, by J.F. Shackelford
[853].
Colour and the Optical Properties of Materials, by R.J.D. Tilley [952].
Light and Color in Nature and Art, by S.J. Williamson and H.Z. Cummins [1036].
Color Chemistry, 2nd edition, by H. Zollinger [1071].

Image science
Foundations of Image Science, by H.H. Barrett and K.J. Meyers [64].
Image Science, by J.C. Dainty and R. Shaw [232].
Principles of Color Photography, by R.M. Evans, W.T. Hanson, and W.L. Brewer
[289].
The Theory of the Photographic Process, 4th edition, edited by T.H. James [459].
Handbook of Image Quality, by B.W. Keelan [494].
Science and Technology of Photography, edited by K. Keller [495].
Image Technology Design: A Perceptual Approach, by J.-B. Martens [642].
Handbook of Photographic Science and Engineering, 2nd edition, edited by C.N. Proudfoot
[779].
Fundamentals of Electronic Imaging Systems, 2nd edition, by W.F. Schreiber [841].
Imaging Processes and Materials, edited by J. Sturge, V. Walworth, and A. Shepp
[923].
Photographic Sensitivity: Theory and Mechanisms, by T. Tani [936].

Digital image processing


Digital Image Processing: A System Approach, 2nd edition, by W.B. Green [363].
Digital Image Processing, by R.C. Gonzalez and R.E. Woods [351].
Digital Image Processing: Concepts, Algorithms, and Scientific Applications, 4th edition,
by B. Jähne [456].
Fundamentals of Digital Image Processing, by A.K. Jain [457].
Two-dimensional Signal and Image Processing, by J.S. Lim [594].
A Wavelet Tour of Signal Processing, by S. Mallat [628].
Digital Pictures: Representation and Compression, by A.N. Netravali and B.G. Haskell
[708].
Digital Image Processing, 2nd edition, by W.K. Pratt [776].
Digital Picture Processing, 2nd edition, by A. Rosenfeld and A.C. Kak [807].
The Image Processing Handbook, by J.C. Russ [814].
Digital Color Imaging Handbook, edited by G. Sharma [857].

Color reproduction
Color Appearance Models, by M.D. Fairchild [292].
Color and Its Reproduction, by G.G. Field [309].
Digital Color Management, by E.J. Giorgianni and T.E. Madden [347].
Colour Engineering, edited by P.J. Green and L.W. MacDonald [362].
The Reproduction of Colour in Photography, Printing, and Television, 5th edition, by R.W.G.
Hunt [433].
Color Technology for Electronic Imaging Devices, by H.R. Kang [483].
Colour Imaging: Vision and Technology, edited by L.W. MacDonald and M.R. Luo [622].
Colour Image Science: Exploiting Digital Media, edited by L.W. MacDonald and M.R. Luo
[623].
Introduction to Color Reproduction Technology (in Japanese), by N. Ohta [726].
Colour Science in Television and Display Systems, by W.N. Sproson [895].
Principles of Color Reproduction, by J.A.C. Yule [1065].

Color imaging input/output devices


Television Engineering Handbook, edited by K.B. Benson [97].
CCD Astronomy: Construction and Use of an Astronomical CCD Camera, by C. Buil [147].
Scientific Charge-Coupled Devices, by J.R. Janesick [461].
Digital Color Halftoning, by H.R. Kang [484].
Handbook of Print Media, edited by H. Kipphan [511].
Display Systems: Design and Applications, edited by L.W. MacDonald and A.C. Lowe
[621].
Digital Video and HDTV: Algorithms and Interfaces, by C. Poynton [775].
Flat-Panel Displays and CRTs, edited by L.E. Tannas Jr. [938].
Solid-State Imaging with Charge-Coupled Devices, by A.J.P. Theuwissen [948].
Color in Electronic Displays, edited by H. Widdel and D.L. Post [1029].

1.5 Problems

1.1 Let X = g(Y) be the input/output characteristic response function of an image capture device (say, a scanner), where Y is the input signal (reflectance) and X is the output response (output digital image from the scanner). Let y = f(x) be the input/output characteristic function of an image display device (say, a CRT monitor), where x is the input digital image and y is the luminance of the displayed image. Assume that both g and f are one-to-one functions. If our objective is to make the displayed image y proportional to the scanned target reflectance Y, what should be the functional transformation on X before it is used as the input, x, to the display?
1.2 A monitor has two gray squares, A and B, displayed on its screen. When the room light is on, the amounts of light from the two squares are L_A and L_B, where L_A ≥ L_B. When the room light is off, the amounts of light become D_A and D_B, where D_A ≥ D_B. Which of the contrast ratios is higher, L_A/L_B or D_A/D_B?
1.3 The Poynting vector, E × H, is a very useful quantity in the study of electromagnetic waves, where E is the electric field strength [V m^−1] and H is the magnetic field strength [A m^−1]. By analyzing its unit, can you guess the physical meaning of the Poynting vector?
2 Light

Within our domain of interest, images are formed by light and its interaction with matter.
The spatial and spectral distribution of light is focused on the sensor and recorded as an
image. It is therefore important for us to first understand the nature and the properties of
light. After a brief description of the nature of light, we will discuss some of its basic
properties: energy, frequency, coherence, and polarization. The energy flow of light and the
characterization of the frequency/wavelength distribution are the subjects of radiometry,
colorimetry, and photometry, which will be covered in later chapters. The coherence and
the polarization properties of light are also essential for understanding many aspects of the
image formation process, but they are not as important for most color imaging applications
because most natural light sources are incoherent and unpolarized, and most imaging sensors
(including our eyes) are not sensitive to polarization. Therefore, we will discuss these two
properties only briefly in this chapter. Fortunately there are excellent
books [208, 631, 871] covering these two topics (also, see the bibliography in Handbook
of Optics [84]). From time to time later in the book, we will need to use the concepts we
develop here to help us understand some of the more subtle issues in light–matter interaction
(such as scattering and interference), and in the image formation process (such as the
OTFs).

2.1 What is light?

The nature of light has been one of the most intensively studied subjects in physics. Research on it has led to several major discoveries in human history. We have now reached a stage where we have an extremely precise theory of light, quantum electrodynamics (QED) [307, 602, 760], which can explain all the physical phenomena of light that we know about and
its interaction with matter, from diffraction, interference, blackbody radiation, the laser, and
the photoelectric effect, to Compton scattering of x-rays [211]. However, the nature of light
as described by QED is quite abstract. It is so different from our everyday experience that
no simple mental model or intuition, such as waves or particles, can be developed in our
understanding to comprehend its nature. A fair statement to make about the nature of light
is that we do not really “understand” it, but we have a very precise theory for calculating
and predicting its behavior. Since the nature of light is literally beyond our comprehension,
the most fundamental description of light has to rely on experimental facts – phenomena
that are observable. For example:


1. Due to its wave nature, light has different temporal frequencies. By saying this, we
are implying that light is described as periodic functions, at least over a very short
period of time. The spectrum of a beam of sunlight as produced by a prism has many
different colors, each associated with light of different frequency ν. The word “light”
usually refers to the frequency range that is visible (approximately, from 4.0 × 10^14 Hz to 7.8 × 10^14 Hz).
2. Light carries energy (we feel heat from sunlight) and when it is absorbed, it is always
in discrete amounts. The unit energy of the discrete amounts is hν, where h is Planck’s
constant and ν is the frequency of the light.
3. Light (photon) has linear momentum, hν/c, and therefore exerts force on a surface it
illuminates.
4. Light of the same frequency can have different characteristics (called polarizations)
that can be separated out by certain materials called polarizers. In quantum mechanics,
a photon can have one of two different spins (angular momentum): ±h/(2π ).
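
As a quick numerical check of points 2 and 3 above, here is a minimal Python sketch; the 500 nm wavelength is an arbitrary illustrative choice within the visible range:

```python
# Photon energy E = h*nu and momentum p = h*nu/c for a 500 nm photon.
h = 6.626e-34   # Planck's constant [J s]
c = 2.998e8     # velocity of light in vacuum [m/s]

wavelength = 500e-9      # [m]; an arbitrary wavelength in the visible range
nu = c / wavelength      # frequency [Hz]; ~6.0e14, within 4.0e14 to 7.8e14
E = h * nu               # energy of one photon [J]; ~4.0e-19
p = h * nu / c           # linear momentum of one photon [kg m/s]; ~1.3e-27
print(f"nu = {nu:.2e} Hz, E = {E:.2e} J, p = {p:.2e} kg m/s")
```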

Because of its complexity and its nonintuitive nature, QED theory is rarely used to explain
“simpler” light behavior, such as interference, or to design optical imaging systems, such
as a camera or a scanner. Fortunately, for these applications, we have alternative theories or
models. The two most valuable ones are the ray model (geometrical optics) and the wave
model (physical optics). Both models are incapable of explaining or predicting certain
phenomena, but within their domains of validity, they are much simpler and more intuitive,
and therefore, very useful.
The wave model is based on the Maxwell equations for classical electromagnetic theory.
The velocity of the electromagnetic wave was shown to be the same as that of light. By
now, it is well accepted (with the knowledge that the description is not complete) that
light is an electromagnetic wave, as are the microwaves used for cooking, the radio waves used for communications, and the x-rays used for medical imaging. The light ray in the
simpler geometric optics is often thought of as the surface normal to the wavefront of
the electromagnetic wave, although this simple interpretation does not always work well,
especially when the wave is not a simple plane wave or spherical wave. The connection
between the electromagnetic wave and the photon in QED theory is not as straightforward
to make. Quantum theory uses two objects to describe a physical system: the operator for the
physical variables, such as the electric field intensity, and the Schrödinger wave function, ψ,
for the state of the system. The Schrödinger wave function, ψ, is usually a complex function
and its product with its complex conjugate, ψψ ∗ , gives the probability of finding photons
at a point in space and time. It should be pointed out that the Schrödinger wave function, ψ,
as solved in QED is not the electromagnetic wave as described by the Maxwell equations.
The connection between the two waves is a statistical one: for classical phenomena, such as
interference, the time-averaged Poynting vector, E × H [W m^−2], as calculated from the
Maxwell equations, predicts the average number of photons per unit time per unit area at
that point in space, as calculated from QED.
For the majority of the optical applications that are of interest to us in this book, we will
treat light as electromagnetic waves described by the Maxwell equations. The wavelength
(in vacuum) range of the light that is visible to our eyes is approximately from 380 nm
(7.89 × 10^14 Hz) to 740 nm (4.05 × 10^14 Hz). The sources of light relevant to color imaging
are mostly thermal sources, such as the sun, tungsten lamps, and fluorescent lamps. For these
sources, light is incoherent and unpolarized – these two concepts can be treated within the
electromagnetic wave model.

2.2 Wave trains of finite length

When we treat light as electromagnetic waves, we need to realize that the waves are of finite length. When we turn on a lamp at time t_1, light is emitted from the lamp, and when we turn off the lamp at time t_2, the emission of light stops (approximately, because the tungsten filament does not cool down instantly). In this case, the duration of each of the trains of electromagnetic waves cannot be much longer than t_2 − t_1. In fact, they are all many orders of magnitude shorter than t_2 − t_1. When an electron of an atom or a molecule makes a transition from a higher energy state to a lower one, a photon is emitted. The time it takes for the electron to make the transition is very short and so is the length of the wave train of the light emitted. Although we have not measured the transition time directly, there are measurements that give us good estimates of the approximate length of the wave train for several light sources (e.g., [258, Chapter 4]). If the transition is spontaneous, the phase is often random, and the length of the wave train is short (on the order of 10^−8 s [258, p. 93; 306, Volume I, p. 33-2; 631, p. 150]). If the transition is induced by an external field, such as in a laser, then the wave train can be much longer (as long as 10^−4 s). However, for light with a wavelength of 500 nm, even a 10^−8 s wave train contains 6 million wave cycles! There are two implications from the result of this simple calculation. (1) For most theoretical derivations concerning phase relations on a spatial scale in the range of a few wavelengths, such as light reflection from a smooth surface, we can approximate the light as a sinusoidal wave (such as a plane wave). (2) For most measurements of light, the integration time for sensing is much longer than 10^−8 s, and the finite length of a wave train cannot be neglected. From the theory of Fourier analysis, a sine wave of duration Δt has a frequency bandwidth Δν ≈ 1/Δt. Therefore, there is no such thing as a monochromatic (single-frequency) light wave. When the frequency bandwidth of radiation is very narrow, Δν/ν ≪ 1, we call it a quasi-monochromatic wave.
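
To make these numbers concrete, here is a minimal Python sketch, assuming the representative values used above (a 10^−8 s wave train at 500 nm); it counts the wave cycles in one train and checks the quasi-monochromatic criterion Δν/ν ≪ 1:

```python
# Cycles in a finite wave train and its Fourier bandwidth (dnu ~ 1/dt).
c = 2.998e8          # velocity of light [m/s]
wavelength = 500e-9  # [m]
dt = 1e-8            # wave-train duration [s]; typical spontaneous emission

nu = c / wavelength  # center frequency [Hz]; ~6.0e14
cycles = nu * dt     # ~6 million cycles in one wave train
dnu = 1.0 / dt       # frequency bandwidth [Hz]; ~1.0e8
print(f"cycles = {cycles:.1e}, dnu/nu = {dnu / nu:.1e}")  # dnu/nu ~ 1.7e-7
```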
Conventional wave analysis relies heavily on Fourier analysis, which has the disadvantage
of having a very sharp frequency resolution, but very poor spatial or time resolution (i.e.,
the sine and cosine functions can have a single frequency, but then they extend to infinity
spatially or temporally). A new mathematical tool called wavelet analysis allows us to
decompose any signal into wavelets that are more localized in time. It can be shown that
wavelet solutions to the Maxwell equations can be found [478] and they may provide a more
natural description for wave trains of finite length.

2.3 Coherence

The electromagnetic fields at two different points in space-time can fluctuate completely
independently. In this case, we can say that they are completely incoherent. If the fluctuations
of the fields at these two points are not completely independent of each other, then they are
partially or completely coherent with each other. The degree of independence or the degree
of coherence can be measured by statistical correlation [631, Chapters 4 and 6, 742]. Two
special cases of coherence theory are temporal coherence (field fluctuation measured at the
same spatial location) and spatial coherence (field fluctuation measured at the same time
instant). Let us first consider the case of the temporal coherence in the famous Michelson
interferometer.

2.3.1 Temporal coherence


Figure 2.1 shows a schematic diagram of a Michelson interferometer [661]. Let us use a partially silvered mirror, M, to split a light wave train into two components: one transmitted through the mirror and the other one reflected away from the mirror. In electromagnetic theory, a wave train, W, of finite duration, ΔT, is split into two finite-length wave trains, W_a and W_b, of smaller amplitudes. If these two wave trains are then brought back together (say, by two other mirrors, A and B, properly positioned), the two wave trains combine into a single wave train, W′. If there is no relative time delay between the two wave trains, the resulting combined wave, W′, looks like the original wave train, W (assuming little loss and distortion in between the splitting and the recombination). In this case, at the spatial point of combination, the two wave trains are said to be temporally coherent. Now let us introduce a time delay Δt in one of the wave trains, say W_a, by making it travel a slightly longer distance. If Δt < ΔT, the two wave trains, W_a and W_b, partially overlap and the resulting wave train, W′, looks different from W. If Δt > ΔT, the two wave trains, W_a and W_b, have no overlap and they are incoherent.

Figure 2.1. The Michelson interferometer: light from the source S is split at the partially silvered mirror M into wave trains W_a and W_b, which are reflected by mirrors A and B and recombined as W′ at the plane of observation.


Instead of a single wave train as the source, we can use a light source that generates line spectra, such as a sodium lamp or a mercury arc lamp. For these sources, we can imagine that many wave trains are emitted randomly, but each wave train, W, is split into a pair of trains, W_a and W_b, which are later brought back together at the plane of observation, which is set up somewhere along the path that the combined light beam travels. Instead of making the reflecting mirrors, A and B, perfectly parallel with respect to the images as seen by the beam splitter M, we introduce a minutely small tilt angle on mirror B. As a result of this tilt, the wave trains arriving at different points on the plane of observation are out of phase by different amounts and thus produce interference fringes. At the points where the pair of wave trains W_a and W_b differ in relative phase by integer multiples of the wavelength, the field amplitudes add exactly constructively and the radiant flux density [W m^−2] reaches the maximum, E_max. At the points where the relative phase differs by an odd multiple of half the wavelength, the field amplitudes cancel each other, and the light flux density falls to the minimum, E_min. Michelson [661, p. 36] defined
the fringe visibility (also known as Michelson contrast), V, as:

$$V = \frac{E_{\max} - E_{\min}}{E_{\max} + E_{\min}}, \tag{2.1}$$
and he showed that it varies as a function of the time delay Δt introduced between the two paths for W_a and W_b, or equivalently as a function of the optical path difference, Δd = vΔt, where v is the velocity of the light in the medium. By analyzing the visibility V as a function of Δd, he was able to estimate the spectral distribution of the light source. For example, the cadmium red line at 643.8 nm was shown to have a half-width (at the half-height) of 0.000 65 nm [258, p. 80], which can be used to deduce that the duration of the wave train emitted by the cadmium is on the order of 10^−8 s.
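
The deduction in the last sentence can be reproduced in a few lines of Python. The sketch below converts the spectral half-width Δλ into a frequency bandwidth via Δν ≈ cΔλ/λ² and takes ΔT ≈ 1/Δν; because it ignores the factor between half-width and full width, this is only an order-of-magnitude estimate:

```python
# Order-of-magnitude coherent time of the cadmium red line from its width.
c = 2.998e8        # velocity of light [m/s]
lam = 643.8e-9     # wavelength of the cadmium red line [m]
dlam = 0.00065e-9  # half-width at half-height [m]

dnu = c * dlam / lam**2  # frequency bandwidth [Hz]; ~4.7e8
dT = 1.0 / dnu           # coherent time [s]; ~2e-9, order of 1e-8 s
dl = c * dT              # longitudinal coherence length in air [m]; ~0.6
print(f"dnu = {dnu:.1e} Hz, dT = {dT:.1e} s, dl = {dl:.1f} m")
```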
Our immediate interest is that Michelson interference as described above occurs only when the relative time delay between the two wave trains is less than the duration of the original wave train. This time duration, ΔT, is called the coherent time of the light. Its corresponding optical path difference, Δl = vΔT, is called the longitudinal coherence length [631, pp. 148–9]. For the cadmium red line at 643.8 nm, the coherent time is about 10^−8 s and the corresponding longitudinal coherence length is about 3 m in the air.
There is another interesting aspect of Michelson interference. If we consider a wave train as a sine wave of frequency ν windowed (multiplied) by a rectangle function of width ΔT, from Fourier analysis, the resulting frequency spectrum of the wave train is a sinc function, centered at ν, whose main lobe has a half-width of 1/ΔT. If the sine wave is windowed by a Gaussian function with a standard deviation of ΔT, the resulting frequency spectrum is also a Gaussian function, centered at ν, with a standard deviation of 1/(2πΔT). Experimentally, one finds that the Michelson interference fringes appear only when ΔνΔT ≤ 1 approximately, where Δν is the bandwidth of the light source. Therefore, the coherent time ΔT is approximately inversely proportional to the bandwidth of the light beam Δν.

2.3.2 Spatial coherence


Young’s famous two-slit interference experiment demonstrates spatial coherence. Figure 2.2
shows a schematic drawing of the typical setup of such an experiment.

Figure 2.2. Young's two-slit interference experiment: source screen A with a hole of size Δs, slit screen B with two slits separated by d, and observation screen C meeting the z-axis at D; the screen separations are R and R′, and the fringe width is b.

According to Ditchburn [258, p. 119], Grimaldi was among the first to attempt to observe interference.
He used a (thermal) light source S (without screen A) in front of a screen (B) with two slits
and observed fringes on a screen (C) some distance behind the slits. However, it was Young
who discovered that the light source size had to be made very small for the interference
fringes to be observed. He used an additional screen (A) with a small hole (Δs) to let
the light through and projected the light onto the two slits. This small hole thus serves to
reduce the size of the light source. In our later analysis, we will see that this was the critical
modification that made him successful. Young reported his results in 1802 in front of the
Royal Society, but was met with great ridicule because Newton’s particle model of light
was the dominant theory at that time. However, regardless of how the phenomenon should
be explained, the experiment was a very important one in presenting the very basic nature
of light (see the interesting discussion in [306, Volume III]).
The two light beams that pass through the two slits to produce the interference are
separated spatially although they come from the same small thermal source. The fact that
the spatially separated light beams can produce interference means that the field fluctuations
in the two spatially-separated slits are correlated. This is easy to imagine if one thinks of
a spherical wave propagating from the small source towards the two slits on the screen.
However, this is only a mental model and in reality we know that this wave model is not true
because it does not explain many phenomena, such as the photoelectric effect. Therefore,
the experimental facts alone force us to describe the light going through the two slits as
having spatial coherence.
Experiments have shown that whether interference fringes are observed or not depends
critically on some of the experimental parameters. Let the source (the tiny hole on the screen
A) be at the origin of the coordinate system, and the positive z-axis go through the middle
of the two slits on the screen B, intersecting with the observation screen C at the point D.
(The widths of the two slits affect the modulation of the interference pattern because of
diffraction, but as long as they are very narrow compared with the distance between them,
we can ignore this in what we would like to discuss below. We can make sure that the fringe pattern that we are seeing is due to interference, not diffraction, by covering one of the slits: the fringe pattern should then disappear.) Let the x-axis be parallel with the line connecting the
two slits on the screen B. As mentioned above, the size of the source along the x-direction,
Δs, should be small, because different points on the source along that direction generate
interference fringes that are offset from each other and therefore smear out the interference
pattern. Also, R, the distance from screen A to screen B, and R′, the distance from screen
B to screen C, both should be much larger than d, the distance between the two slits on the
screen B, because the angular subtenses from the source to the slits and from the slits to the
observation screen determine the optical path difference. If the optical path difference is too
long (say, longer than the typical length of a wave train of the light), the interference does
not occur. Experiments (as well as theoretical calculation based on optical path difference
[258, pp. 120–1]) show that the interference fringes are observable when
$$\Delta s\,\frac{d}{R} \approx \Delta s\,\theta \le \lambda, \tag{2.2}$$
where θ ≈ d/R is the angle subtended, as seen from the source, by d, the distance between the two slits, and λ is the wavelength of the light from the source. The width of the interference
band, b (the distance from maximum to maximum), on the observation plane C can also
be calculated from the optical path difference between the two slit paths: b = R′λ/d [208,
Section 2.3]. In a typical experiment, the two slits are separated by about 1 mm, the screen
distances, R and R′, are about 1 m, and the wavelength is about 500 nm. Therefore the
width of the interference band, b, is about 0.5 mm, which is observable by the naked
eye.
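
The fringe width formula is easy to check numerically; this small sketch simply evaluates b = R′λ/d for the typical values quoted above:

```python
# Fringe width b = R' * lambda / d in Young's two-slit experiment.
lam = 500e-9    # wavelength [m]
d = 1e-3        # separation between the two slits [m]
R_prime = 1.0   # distance from slit screen B to observation screen C [m]

b = R_prime * lam / d
print(f"b = {b * 1e3:.2f} mm")  # 0.50 mm, observable by the naked eye
```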
The above experimental results allow us to define a few terms regarding spatial coherence.
The two beams passing through the two slits are separated by a distance d and they are located
at a distance R away from the source of dimension Δs. In order for the interference fringes to be observable, the spatial separation d has to satisfy the following relation:

$$d \le \frac{R\lambda}{\Delta s}. \tag{2.3}$$

Therefore, we can define Rλ/Δs as the transverse coherence length, and its square, R²λ²/(Δs)², as the coherence area, ΔA. If we take the product of the longitudinal coherence length Δl and the coherence area, ΔA, we get the coherence volume, ΔV = ΔlΔA.
From the uncertainty principle of quantum mechanics, we can show that photons in the
coherence volume are not distinguishable from each other [631, pp. 155–9]. Although we
have derived the concepts of coherence length, coherence area, and coherence volume from
the electromagnetic wave models, they are consistent with quantum theory as well.
It is instructive to calculate the coherence area of some common light sources that we see in our imaging applications. The sun has an angular subtense (Δs/R) of about 0.5°. The middle of the visible spectrum is at about 500 nm. Therefore, the coherence area of sunlight at 500 nm is about 3.3 × 10^−3 mm² and the transverse coherence length is about 0.057 mm. This is so small that we can treat sunlight reflected from any two points of an object surface as incoherent for all practical purposes. On the other hand, light from a distant star has a relatively large coherent area on the earth's surface and starlight needs to be treated with its coherence property in mind. For example, the red giant star Betelgeuse in the constellation of Orion has an angular subtense of 0.047 arcsec [660]. Assuming that its effective wavelength is 575 nm, then its transverse coherence length is about 2.52 m! Images of the stars do look like images of coherent sources.
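
Both examples can be verified with the same few lines of Python, since the transverse coherence length Rλ/Δs depends only on the angular subtense θ = Δs/R and the wavelength:

```python
import math

def transverse_coherence_length(theta, lam):
    """Transverse coherence length R*lambda/(ds) = lambda/theta [m]."""
    return lam / theta

# The sun: angular subtense ~0.5 degrees, at 500 nm.
lt_sun = transverse_coherence_length(math.radians(0.5), 500e-9)
print(f"sun: {lt_sun * 1e3:.3f} mm, "                  # ~0.057 mm
      f"coherence area = {lt_sun**2 * 1e6:.1e} mm^2")  # ~3.3e-3 mm^2

# Betelgeuse: angular subtense 0.047 arcsec, at an effective 575 nm.
theta_star = math.radians(0.047 / 3600.0)
print(f"Betelgeuse: {transverse_coherence_length(theta_star, 575e-9):.2f} m")
```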

2.4 Polarization

The constraints imposed by Maxwell's equations require that, far from their source, the electric and magnetic fields be orthogonal to each other and to the direction of propagation.
Since the magnetic field can be determined from the electric field, we will discuss only the
behavior of the electric field. The electric field, ξ, of the electromagnetic wave is a vector
that has a magnitude as well as a direction, which can vary in the plane perpendicular to the
vector of wave propagation. Therefore, there are two degrees of freedom in the direction
of the electric field and these can be represented by two basis vectors. The variation of the
electric vector direction as a function of time is called the polarization.

2.4.1 Representations of polarization


Referring to Figure 2.3, let us assume that a monochromatic light beam is traveling along
the z-axis towards the positive z-direction and that we are looking at the beam directed
towards us. The x-axis is directed horizontally to the right and the y-axis directed vertically
to the top. Let us further assume that the electromagnetic wave has an electric field vector
with the following x- and y-components at a given position on the z-axis:

$$\xi_x(t) = A_x \cos(2\pi\nu t + \delta_x), \qquad \xi_y(t) = A_y \cos(2\pi\nu t + \delta_y), \tag{2.4}$$

where ν is the frequency [s^−1], A_x and A_y are the amplitudes [V m^−1], and δ_x and δ_y are the phases [rad]. For the following discussion, the important parameter is the phase difference δ = δ_y − δ_x.
From electromagnetic theory [512, p. 70], the radiant flux density [W m^−2] of the wave is given by the magnitude of the Poynting vector, P:

$$P(t) = \frac{n}{c\mu}\,\xi^2(t), \tag{2.5}$$

where ξ²(t) = ξ_x²(t) + ξ_y²(t), n is the index of refraction, μ is the magnetic permeability, and c is the velocity of light in vacuum. For visible light, the frequency is on the order of 10^14 Hz, too fast to be measured by almost all instruments that measure energy flux. What is measured is the time-averaged radiant flux density, ⟨P(t)⟩ [W m^−2]. Since the averaged value of cosine squared is 1/2,

$$\langle P(t)\rangle = \frac{n}{2c\mu}\,(A_x^2 + A_y^2) = \eta(A_x^2 + A_y^2), \tag{2.6}$$

where η = n/(2cμ).
Figure 2.3. The convention of the coordinate system for polarization: the light beam travels along the z-axis toward the observer, and the electric field vector E lies in the x–y plane.

The electric field vector varies continuously as a function of the phase of the wave within
the duration of the wave train. When δ = 0, the direction of the vector remains constant, and the light is said to be linearly polarized (or plane polarized). When δ = ±π/2 and A_x = A_y,
the direction of the vector varies and traces out a circle, and the light is said to be circularly
polarized. In the most general case, the direction of the vector traces out an ellipse and
the light is said to be elliptically polarized. The circularly (or elliptically) polarized light is
further divided into the right-hand circular (RHC) (or elliptic) polarization and the left-hand
circular (LHC) (or elliptic) polarization. The handedness convention is to observe the light
coming to us. If the electric field vector rotates in the clockwise direction, i.e., δ > 0, the
light is said to be right-hand circularly (or elliptically) polarized. If the electric field vector
rotates in the counterclockwise direction, i.e., δ < 0, then the light is said to be left-hand
circularly (or elliptically) polarized.
Another important representation of polarization is to use the RHC polarization and
the LHC polarization as the two basis vectors. It can be shown that the electric field vector
represented by Eqs. (2.4) can be expressed as the sum of a RHC wave with amplitude A_R and phase δ_R and a LHC wave with amplitude A_L and phase δ_L. At the same point on the z-axis as in Eqs. (2.4), the RHC wave is represented as

$$\xi_x = A_R \cos(\delta_R - 2\pi\nu t), \tag{2.7}$$
$$\xi_y = A_R \sin(\delta_R - 2\pi\nu t), \tag{2.8}$$

and the LHC wave as

$$\xi_x = A_L \cos(\delta_L + 2\pi\nu t), \tag{2.9}$$
$$\xi_y = A_L \sin(\delta_L + 2\pi\nu t). \tag{2.10}$$

The parameters in the (x, y) and the (RHC, LHC) representations are related by the following equations:

$$A_R^2 = \frac{1}{4}\left(A_x^2 + A_y^2 + 2A_xA_y\sin\delta\right), \tag{2.11}$$
$$A_L^2 = \frac{1}{4}\left(A_x^2 + A_y^2 - 2A_xA_y\sin\delta\right), \tag{2.12}$$
$$\tan\delta_R = \frac{A_y\cos\delta_y - A_x\sin\delta_x}{A_x\cos\delta_x + A_y\sin\delta_y}, \tag{2.13}$$
$$\tan\delta_L = \frac{A_x\sin\delta_x + A_y\cos\delta_y}{A_x\cos\delta_x - A_y\sin\delta_y}. \tag{2.14}$$
It should be pointed out that at a given point on the z-axis, the magnitude of the electric field of the circularly polarized wave remains the same for the duration of the wave train, but its direction is changing around a circle. The averaged radiant flux density of a circularly polarized wave thus does not have the 1/2 factor from the averaged value of cosine squared, and the magnitude of the Poynting vector is 2ηA_R² for the RHC wave, and 2ηA_L² for the LHC wave. The total radiant flux density [W m^−2] for the wave is

$$\langle P(t)\rangle = 2\eta A_R^2 + 2\eta A_L^2 = \eta(A_x^2 + A_y^2). \tag{2.15}$$
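
As a numerical illustration of Eqs. (2.11)–(2.15), the following sketch converts an arbitrary (A_x, A_y, δ_x, δ_y) description of the field into the (RHC, LHC) parameters and verifies that both representations give the same total flux density (the example amplitudes and phases are arbitrary):

```python
import math

Ax, Ay = 1.0, 0.5   # amplitudes [V/m]; arbitrary example values
dx, dy = 0.2, 1.1   # phases [rad]; arbitrary example values
delta = dy - dx     # phase difference

# Eqs. (2.11) and (2.12): the RHC and LHC amplitudes.
AR = math.sqrt(0.25 * (Ax**2 + Ay**2 + 2 * Ax * Ay * math.sin(delta)))
AL = math.sqrt(0.25 * (Ax**2 + Ay**2 - 2 * Ax * Ay * math.sin(delta)))

# Eqs. (2.13) and (2.14): the RHC and LHC phases (atan2 keeps the quadrant).
dR = math.atan2(Ay * math.cos(dy) - Ax * math.sin(dx),
                Ax * math.cos(dx) + Ay * math.sin(dy))
dL = math.atan2(Ax * math.sin(dx) + Ay * math.cos(dy),
                Ax * math.cos(dx) - Ay * math.sin(dy))

# Eq. (2.15): total flux density agrees in both representations (eta = 1).
assert abs(2 * AR**2 + 2 * AL**2 - (Ax**2 + Ay**2)) < 1e-12
print(f"AR = {AR:.4f}, AL = {AL:.4f}, dR = {dR:.4f} rad, dL = {dL:.4f} rad")
```

The assertion holds for any input, since adding Eqs. (2.11) and (2.12) gives A_R² + A_L² = (A_x² + A_y²)/2.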

As we discussed in the previous section, light emitted from thermal sources consists of short wave trains of duration on the order of 10^−8 s. Each wave train has its polarization, but it varies so rapidly (10^8 times a second) and randomly that most instruments cannot detect any effects due to polarization (assuming they average out in all directions). This type of
light is said to be completely unpolarized. If the averaged polarization does not completely
cancel out in all directions and the light is not of any single polarization, the light is said to
be partially polarized. These concepts will be defined more quantitatively later.
The polarization of light is treated in the quantum theory in a very different way con-
ceptually. A photon is a two-state system. The two base states are often taken as the RHC
polarization and the LHC polarization. The reason is that each base state is then associated
with a spin number +1 or −1, with an angular momentum of h/2π or −h/2π, where h is
Planck’s constant. The state of a given photon can be any linear combination of these two
base states. For the linearly polarized light, the coefficients (or amplitudes) of the two states
are equal. For the elliptically polarized light, one coefficient is greater than the other.
2.4 Polarization 23

2.4.2 Stokes parameters


Among the various polarizations discussed above, elliptic polarization is the most general
case. Circular polarization corresponds to zero eccentricity and linear polarization to unit eccentricity. To characterize the polarization of a quasi-monochromatic light wave, we can
specify its eccentricity, handedness, and azimuth angle. However, there are two drawbacks
in using these parameters: (1) they are not easily measurable, and (2) they are not additive
when many independent wave trains of light are mixed together. For these reasons, we will
introduce the Stokes parameters [913] as a means for characterizing polarized light. One of
the major advantages of using the Stokes parameters is that they are directly measurable and
can be defined operationally with simple linear and circular polarizers. Let us first introduce
these polarizers.
An ideal linear polarizer is an optical element that has a transmission axis. Light lin-
early polarized along this axis is transmitted without any attenuation. If the incident light
is linearly polarized at a 90° angle from this axis, the transmittance is 0. If the incident light is linearly polarized at an angle θ from this axis, the transmittance is cos²θ because
the electric field component projected onto the axis is reduced by cos θ and the trans-
mitted light is proportional to the square of the electric field. This is called the law of
Malus. Very-high-quality linear polarizers are commercially available. A retarder is an
optical device that resolves an incident polarized beam of light into two orthogonally po-
larized components, retards the phase of one component relative to the other, and then
recombines the two components into one emerging light beam. The polarization state of
the two resolved components can be linear, circular, or elliptical depending on the type
of the retarder. A retarder transmits the two components without changing their state of
polarization. It just changes their relative phases. For example, an ideal linear retarder is
an optical element that resolves an incident beam into two linearly polarized components,
one along a fast axis and the other along a slow axis. Light that is polarized in the di-
rection of the fast axis is transmitted faster than light polarized along the slow axis. As
a consequence, the phase of the slow-axis component is retarded relative to that of the
fast-axis component. If the thickness of the retarder is such that a phase shift of a quar-
ter cycle is introduced between the two components, the retarder is called a quarter-wave
plate. A quarter-wave plate can convert linearly polarized light into circularly polarized
light. As will be shown shortly, a quarter-wave plate followed by a linear polarizer can be
used to measure the circularly polarized component of an incident ray. A linear polarizer
followed by a quarter-wave plate can be used to turn incident light into circularly polarized
light.
With an ideal linear polarizer, an ideal quarter-wave plate, and a light flux detector, we
can make the following six radiant flux density [W m^−2] measurements:
1. E_h: radiant flux density measured with a horizontal linear polarizer;
2. E_v: radiant flux density measured with a vertical linear polarizer;
3. E_45: radiant flux density measured with a 45° linear polarizer;
4. E_135: radiant flux density measured with a 135° linear polarizer;
5. E_R: radiant flux density measured with a right circular polarizer;
6. E_L: radiant flux density measured with a left circular polarizer.
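
The usual next step is to combine these six readings into the four Stokes parameters. As a sketch of the standard operational combinations (the formulas below are the conventional definitions, stated here ahead of the text's own formal treatment):

```python
def stokes_parameters(Eh, Ev, E45, E135, ER, EL):
    """Stokes parameters from the six flux-density measurements [W/m^2]."""
    S0 = Eh + Ev      # total radiant flux density
    S1 = Eh - Ev      # horizontal vs. vertical linear polarization
    S2 = E45 - E135   # 45 deg vs. 135 deg linear polarization
    S3 = ER - EL      # right-hand vs. left-hand circular polarization
    return S0, S1, S2, S3

# Completely unpolarized light of unit flux density reads the same, 0.5,
# through every polarizer, so S1 = S2 = S3 = 0.
print(stokes_parameters(0.5, 0.5, 0.5, 0.5, 0.5, 0.5))  # (1.0, 0.0, 0.0, 0.0)
```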