Camera UV coordinates — handle images with width or height of one #682

Closed
ghost opened this issue Jul 27, 2020 · 5 comments

Comments

@ghost

ghost commented Jul 27, 2020

I'm not quite sure how the current equation was derived, but it clearly fails when the width and height are both equal to one, for example.

int x = 0;
int y = 0;
int width = 1;
int height = 1;
float u = float(x) / (width - 1);  // 0 / (1 - 1) = 0 / 0 = NaN
float v = float(y) / (height - 1); // same division by zero

Should be changed to:

float u = (x + 0.5f) / width;
float v = (y + 0.5f) / height;

The result can be seen clearly when you use a 2x2 image as an example.

The UV coordinates previously would be:

(0, 0) (1, 0)
(0, 1) (1, 1)

The correct result would be:

(0.25, 0.25) (0.75, 0.25)
(0.25, 0.75) (0.75, 0.75)
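For reference, a minimal sketch of the proposed pixel-center mapping; the print_pixel_center_uvs helper and the demo are illustrative only, not code from the book:

#include <cstdio>

// Demo: pixel-center UVs work for any image size, including 1x1.
void print_pixel_center_uvs(int width, int height) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float u = (x + 0.5f) / width;
            float v = (y + 0.5f) / height;
            std::printf("(%.2f, %.2f) ", u, v);
        }
        std::printf("\n");
    }
}

int main() {
    print_pixel_center_uvs(2, 2);  // prints the 2x2 grid above
    print_pixel_center_uvs(1, 1);  // prints (0.50, 0.50), no division by zero
}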
@hollasch hollasch added this to the Backlog milestone Sep 11, 2020
@hollasch hollasch self-assigned this Oct 12, 2020
@hollasch hollasch modified the milestones: Backlog, v4.0.0 Oct 12, 2020
@hollasch
Collaborator

hollasch commented Oct 29, 2020

Will add a special case to handle single-pixel images. I definitely do not want to promote the idea that a pixel is a little square, or has any specific shape, or indeed even that it has a finite kernel.

I'll also revisit the text to ensure that we at least hint that square pixels are a poor (but cheap) model.
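One way such a special case might look, as a sketch only, assuming the existing endpoint-based formula is kept (the actual fix in the book may differ):

// Sketch: keep the endpoint-based mapping but avoid the zero denominator
// when the image is a single pixel wide or tall (assumed approach).
float u = (width  > 1) ? float(x) / (width  - 1) : 0.5f;
float v = (height > 1) ? float(y) / (height - 1) : 0.5f;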

@trevordblack
Collaborator

I forget where I read it, but the correct uv should be:

float u = floor((x + 0.5f) / width);
float v = floor((y + 0.5f) / height);

@tay10r

tay10r commented Mar 15, 2021

@hollasch

This was my issue from a previous account. I wasn't proposing to tell readers that a pixel is a square, just offering a simpler and more commonly used expression for UV coordinates that does not require any special corner cases. You're free to reject it, that's totally fine.

@tay10r

tay10r commented Mar 15, 2021

@trevordblack

Thanks for the suggestion! This:

float u = floor((x + 0.5f) / width);
float v = floor((y + 0.5f) / height);

Is not right, because it will round to either 0 or 1 for all UV coordinates. So unless you have a 2x2 image, you would get incorrect results. Using floor is more likely to be useful when converting from UV coordinates to image coordinates. For example:

int x = floor(uv_x * width);
int y = floor(uv_y * height);

But you can just use native floating-point to integer conversion instructions, which perform the truncation along with the conversion.

int x = uv_x * width;
int y = uv_y * height;
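One caveat worth noting when going back from UVs to pixel indices: uv == 1.0 maps to width (one past the last pixel), so a clamp helps. A sketch; the uv_to_pixel helper is hypothetical:

#include <algorithm>

// Hypothetical helper: convert a UV coordinate back to a pixel index,
// clamping so that uv == 1.0 still maps to the last pixel.
inline int uv_to_pixel(float uv, int size) {
    return std::min(static_cast<int>(uv * size), size - 1);
}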

@hollasch hollasch changed the title Camera UV Coordinates Camera UV coordinates — handle images with width or height of one Aug 5, 2022
hollasch added a commit that referenced this issue May 31, 2023
This change does several things. Primarily, it's a refinement of the
division between the scene class and the camera class.

The scene class holds the world description (which includes all scene
geometry and lighting) and the camera. It is responsible for using the
camera to interrogate the world, using the main render loop (all samples
for all pixels).

The camera class is responsible for all 2D rendered image parameters,
and uses these to generate rays one at a time.

The net effect of these changes is:

1. The image_width and aspect_ratio member variables move from the scene
   class to the camera class.

2. camera::initialize() no longer needs to take the aspect ratio
   argument.

3. camera::get_image_height() is a new public function that returns the
   image height, computed from the image width and the desired aspect
   ratio.

4. `camera::aperture` is now `camera::defocus_diameter`.
   `camera::lens_radius` is now `camera::defocus_radius`.

5. The scene and camera parameter assignments have been reordered to
   logical groupings based on the updated mental model.

6. In TheNextWeek/main.cc, we assigned background color, camera up, and
   focus distance before handing the scene off to the scene generators.
   There's been at least one case where a reader was confused about the
   state of the scene object because values were set in two different
   locations. This saved repeating some lines of code, but the
   simplicity of assigning everything in one place for each scene is
   better.

These changes solidify the responsibilities of the two classes in
preparation for future changes to the camera class, addressing the
following issues:

- #546 Confusion surrounding use of the word "aperture"
- #682 Camera UV coordinates - handle images with width or height of one
- #858 Improve camera.h naming, commentary
- #1042 camera::get_ray() function should perform all sample jittering
- #1076 Book 1 Chapter 13.2 dev-major: Generating Sample Rays

One question left is that now that everything's assigned in the scene
generator functions, why not just have them allocate and return the
generated scene? I've decided to leave the current approach of mutating
the scene -- passed by reference -- purely because it's simpler code.
In my opinion, there's not a whole lot of advantage to the returned
value approach.
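A hedged sketch of the camera-side split described in this commit; the member and function names are taken from the commit message, while everything else (default values, method bodies) is assumed for illustration:

class camera {
  public:
    double aspect_ratio     = 16.0 / 9.0;  // moved here from the scene class (default assumed)
    int    image_width      = 400;         // moved here from the scene class (default assumed)
    double defocus_diameter = 0;           // formerly camera::aperture

    // New public accessor: image height derived from width and aspect ratio.
    int get_image_height() const {
        int h = static_cast<int>(image_width / aspect_ratio);
        return (h < 1) ? 1 : h;
    }

    // The camera owns all 2D image parameters and generates rays one at a time.
    // ray get_ray(int x, int y) const;
};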
@hollasch hollasch modified the milestones: v4.0.0, v4.0.0-book1 Jun 19, 2023
@hollasch
Collaborator

Fixed in code in PR #1154 (text update pending).
