# Notes on testing and simulating rendering limits #18973

greeble-dev started this conversation in General
These are some notes on how Bevy tests and simulates rendering limits in partnership with `wgpu`, and why the current behaviour can be problematic. I don't have any good solutions, so I'm writing up what I know in case someone else can move it forward.

## Background
All GPUs have limits, and these limits vary across GPU hardware and also OS, API and driver versions.
Exceeding these limits will cause bugs ranging from visual glitches to panics. Users expect Bevy to either stay within the limits automatically, or gracefully report what limit has been exceeded and why.
There are many combinations of hardware and software, so it's hard to predict when an app or a new engine feature will exceed rendering limits. And testing real combinations is expensive in all kinds of ways. So, users want to simulate rendering limits that their GPU doesn't actually have - for example, simulating a GPU without compute shaders.
## Problems

`wgpu` exposes rendering limits through two APIs:

- `Adapter`: The limits of the real GPU/driver/etc. Decided by `wgpu` and can't be simulated.
- `Device`: The limits that `wgpu` will actually enforce. Exceeding device limits will trigger panics or warnings, even if the real GPU's limits are higher. These limits can be overridden through the `wgpu` API, allowing simulation.

Adapters and devices expose limits through the same two structs - `Limits` and `Features`. These align with the WebGPU spec.
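To make the distinction concrete, here's a minimal `wgpu` sketch (assuming a `wgpu` version in the 0.20-24 range; `request_device`'s signature and the `DeviceDescriptor` fields have shifted between releases) that reads the real limits from the `Adapter` and then requests a `Device` with deliberately lowered limits:

```rust
// Sketch only: shows the Adapter/Device split in wgpu. Details may vary
// between wgpu releases.
async fn request_simulated_device(adapter: &wgpu::Adapter) -> (wgpu::Device, wgpu::Queue) {
    // Adapter limits: the real GPU/driver limits. Read-only, can't be simulated.
    let real = adapter.limits();
    println!("real max_texture_dimension_2d: {}", real.max_texture_dimension_2d);

    // Device limits: what wgpu will actually enforce. Requesting lower limits
    // than the adapter reports is how simulation works.
    let simulated = wgpu::Limits {
        max_texture_dimension_2d: 2048, // pretend we're on a weaker GPU
        ..wgpu::Limits::downlevel_webgl2_defaults()
    };

    adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: Some("simulated-limits device"),
                required_features: wgpu::Features::empty(),
                required_limits: simulated,
                ..Default::default()
            },
            None, // trace path
        )
        .await
        .expect("adapter refused the requested limits")
}
```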
So, Bevy simulates limits through the `Device` (see `WGPU_SETTINGS_PRIO` and `bevy_render::WgpuSettings::limits`). Plugins have to be careful to check limits on the `Device` and not the `Adapter`.
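For reference, here's roughly how an app opts in on the Bevy side - a sketch against a recent Bevy release (module paths and the `RenderPlugin` fields have moved between versions):

```rust
use bevy::prelude::*;
use bevy::render::{
    settings::{RenderCreation, WgpuLimits, WgpuSettings},
    RenderPlugin,
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(RenderPlugin {
            render_creation: RenderCreation::Automatic(WgpuSettings {
                // Simulate a constrained GPU: wgpu enforces these limits on
                // the Device even though the real Adapter supports more.
                limits: WgpuLimits::downlevel_webgl2_defaults(),
                ..default()
            }),
            ..default()
        }))
        .run();
}
```

Setting the `WGPU_SETTINGS_PRIO` environment variable (e.g. to `webgl2` or `compatibility`) selects a constrained preset without code changes.

But there's a catch!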
`Adapter` exposes a third struct: `DownlevelCapabilities`. This covers limits that are not part of the WebGPU spec, and so are not covered by `Limits` and `Features`. And it can't be simulated - the real limits always show through `DownlevelCapabilities`.

This mismatch has caused bugs in the past. In one example, compute shaders were effectively disabled by setting `Limits::max_compute_workgroup_storage_size = 0`. But a Bevy plugin was checking `DownlevelFlags::COMPUTE_SHADERS`, which was still true. It's likely that more bugs are lurking.
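To make the failure mode concrete, here's an illustrative sketch (the system is hypothetical, not the actual code from the bug; it assumes `RenderAdapter` and `RenderDevice` are available as resources and uses the `wgpu` crate directly for `DownlevelFlags`):

```rust
use bevy::prelude::*;
use bevy::render::renderer::{RenderAdapter, RenderDevice};

// Hypothetical system contrasting the two checks.
fn report_compute_support(adapter: Res<RenderAdapter>, device: Res<RenderDevice>) {
    // Buggy check: DownlevelCapabilities always come from the real Adapter,
    // so this stays true even when the app simulates a GPU without compute.
    let downlevel_says_yes = adapter
        .get_downlevel_capabilities()
        .flags
        .contains(wgpu::DownlevelFlags::COMPUTE_SHADERS);

    // Simulation-aware check: Device limits reflect whatever was requested,
    // e.g. max_compute_workgroup_storage_size = 0 to disable compute.
    let device_says_yes = device.limits().max_compute_workgroup_storage_size > 0;

    // Under simulated limits these can disagree - that's the bug.
    info!("downlevel says {downlevel_says_yes}, device says {device_says_yes}");
}
```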
## So, what to do?

### Option 1: Make it a `wgpu` problem?

What if Bevy could tell `wgpu` to use a particular `DownlevelCapabilities`, just like `Limits` and `Features`? I don't know enough about `wgpu` to say if this is possible. I imagine the main issue is not exposing the data, but adding new logic to report if the limits have been exceeded.
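To be clear about what this would take: nothing like the following exists in `wgpu` today. This is a purely hypothetical sketch of what a simulatable `DownlevelCapabilities` could look like if it were requested at device creation, mirroring `required_limits`:

```rust
// HYPOTHETICAL - this field does not exist in wgpu. Option 1 would mean
// extending DeviceDescriptor (or similar) so downlevel capabilities can be
// lowered at device creation, and teaching wgpu to validate against them.
let descriptor = wgpu::DeviceDescriptor {
    label: Some("simulated downlevel device"),
    required_features: wgpu::Features::empty(),
    required_limits: wgpu::Limits::downlevel_defaults(),
    // Imaginary field: wgpu would clamp the adapter's real capabilities to
    // this and report errors when a command relies on a disabled capability.
    required_downlevel_capabilities: wgpu::DownlevelCapabilities {
        flags: wgpu::DownlevelFlags::empty(), // e.g. no compute shaders
        ..Default::default()
    },
    ..Default::default()
};
```

As noted above, exposing the data is probably the easy part; the hard part is the validation and error-reporting logic behind it.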
### Option 2: Don't use `DownlevelCapabilities`?

At least some checks against `DownlevelCapabilities` can be done by looking at the other structs. Maybe there are equivalents for just the features Bevy is using?

Here are the flags Bevy currently uses and possible alternatives (see the sketch after this list):
- `DownlevelFlags::COMPUTE_SHADERS` -> `Limits::max_compute_workgroup_storage_size == 0`.
- `DownlevelFlags::FRAGMENT_WRITABLE_STORAGE` -> maybe `Limits::max_storage_buffers_per_shader_stage == 0`?
- `DownlevelFlags::VERTEX_AND_INSTANCE_INDEX_RESPECTS_RESPECTIVE_FIRST_VALUE_IN_INDIRECT_DRAW` -> ???
- `DownlevelFlags::BASE_VERTEX` -> ???
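As a starting point, here's a sketch of what the first two replacements might look like as device-side helpers. This is untested, and whether these limits are truly equivalent to the flags is exactly the open question (the second one in particular is a guess, since `max_storage_buffers_per_shader_stage` isn't fragment-specific):

```rust
use bevy::render::renderer::RenderDevice;

// Hypothetical stand-in for DownlevelFlags::COMPUTE_SHADERS. Respects
// simulated limits because it only looks at the Device.
fn supports_compute_shaders(device: &RenderDevice) -> bool {
    device.limits().max_compute_workgroup_storage_size > 0
}

// Possible stand-in for DownlevelFlags::FRAGMENT_WRITABLE_STORAGE - a guess,
// since this limit covers all shader stages, not just fragment.
fn supports_fragment_writable_storage(device: &RenderDevice) -> bool {
    device.limits().max_storage_buffers_per_shader_stage > 0
}
```

The last two flags don't have any obvious `Limits`/`Features` equivalent, which is why they're marked `???` above.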