[DRAFT] DrawableTextures #105701
Conversation
@@ -1370,6 +1594,15 @@ Vector<Ref<Image>> TextureStorage::texture_3d_get(RID p_texture) const {
	return ret;
}

void TextureStorage::texture_drawable_generate_mipmaps(RID p_texture) {
	// TODO: Generate mipmaps code
Just call glGenerateMipmap ;)
Isn't that pretty slow? We typically need a dedicated approach to generate mipmaps in real time with reasonable performance. This is the reason ViewportTexture doesn't feature mipmap generation, but screen_texture does. This also applies to modern low-level APIs, which to my knowledge have no direct equivalent of glGenerateMipmap().
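For concreteness, the driver-call approach being suggested would look roughly like this in the GLES3 TextureStorage. This is only a sketch under assumptions: the texture_owner lookup and tex_id field follow the conventions of the existing GLES3 texture code, and it assumes the drawable texture is a plain GL_TEXTURE_2D with its mip chain already allocated.

void TextureStorage::texture_drawable_generate_mipmaps(RID p_texture) {
	// Sketch only: relies on the usual GLES3 Texture bookkeeping (texture_owner, tex_id).
	Texture *texture = texture_owner.get_or_null(p_texture);
	ERR_FAIL_NULL(texture);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, texture->tex_id);
	// Ask the driver to rebuild every mip level from level 0.
	glGenerateMipmap(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, 0);
}

If the performance concern above turns out to matter in practice, this call site could later be swapped for a dedicated downsample pass without changing the public API.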
@@ -1414,6 +1586,11 @@ void TextureStorage::texture_proxy_update(RID p_texture, RID p_proxy_to) {
	}
}

// Output textures in p_textures must ALL BE THE SAME SIZE
void TextureStorage::texture_drawable_blit_rect(const TypedArray<RID> &p_textures, const Rect2i &p_rect, RID p_material, const Color &p_modulate, const TypedArray<RID> &p_source_textures, int p_to_mipmap) {
	// TODO: Write this code for Renderer_RD
So for RD this is a little more involved than with OpenGL, but the functionality already exists. If I recall correctly, it's a matter of adding some extra logic when initialising the texture to create texture views for each mip layer.
Then here you can loop through the layers and use the copy effects to scale down each level. There might even be a function that already handles this loop for you. I'll see if I can find an example of this; I'm pretty sure there are a few examples around the radiance map code.
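To make that concrete, here is a rough, unverified sketch of what the RD-side loop might look like. It assumes the Texture struct exposes rd_texture, width, height and mipmaps, that RD::texture_create_shared_from_slice() is used to obtain per-mip views (the comment above suggests creating and caching these when the texture is initialised instead), and that RendererRD::CopyEffects provides a make_mipmap() helper like the one used around the radiance map code; every name and signature here should be double-checked against the actual RD renderer.

void TextureStorage::texture_drawable_generate_mipmaps(RID p_texture) {
	// Unverified sketch: per-mip views are created on the fly for brevity;
	// caching them at texture initialisation (as suggested above) would be better.
	Texture *texture = texture_owner.get_or_null(p_texture);
	ERR_FAIL_NULL(texture);

	RendererRD::CopyEffects *copy_effects = RendererRD::CopyEffects::get_singleton();

	Size2i size(texture->width, texture->height);
	RID prev_mip = RD::get_singleton()->texture_create_shared_from_slice(RD::TextureView(), texture->rd_texture, 0, 0);

	for (uint32_t mip = 1; mip < texture->mipmaps; mip++) {
		size = Size2i(MAX(size.x >> 1, 1), MAX(size.y >> 1, 1));
		RID this_mip = RD::get_singleton()->texture_create_shared_from_slice(RD::TextureView(), texture->rd_texture, 0, mip);
		// Downscale the previous level into this one.
		copy_effects->make_mipmap(prev_mip, this_mip, size);
		RD::get_singleton()->free(prev_mip);
		prev_mip = this_mip;
	}
	RD::get_singleton()->free(prev_mip);
}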
@@ -390,7 +394,10 @@ const ShaderLanguage::KeyWord ShaderLanguage::keyword_list[] = {
	{ TK_HINT_SCREEN_TEXTURE, "hint_screen_texture", CF_UNSPECIFIED, {}, {} },
	{ TK_HINT_NORMAL_ROUGHNESS_TEXTURE, "hint_normal_roughness_texture", CF_UNSPECIFIED, {}, {} },
	{ TK_HINT_DEPTH_TEXTURE, "hint_depth_texture", CF_UNSPECIFIED, {}, {} },

	{ TK_HINT_BLIT_SOURCE, "hint_blit_source", CF_UNSPECIFIED, {}, {} },
This is just thinking out loud; there might simply be too much of an issue with the shader compiler not being able to handle this. But could we just have one hint (e.g. hint_blit_source) and have it work on a first come, first served basis?
E.g. the first uniform with hint_blit_source becomes our first source, the second such uniform becomes our second source, etc., with a compiler error if more than 4 exist.
If I understand correctly, the relevant code to try that approach is in shader_compiler.cpp, _dump_node_code, lines 931->939, which at a cursory glance only operates on a single token or line at a time. Those lines would need to read/write some context variable that says how many 'sources' have already been set, and I'm unfamiliar with whether that function is supposed to access any outside context.
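As a standalone illustration of the first come, first served idea (this is not actual shader_compiler.cpp code; every name below is made up for the example), the bookkeeping boils down to a counter carried across uniform declarations:

#include <cstdio>
#include <string>
#include <vector>

struct UniformDecl {
	std::string name;
	bool has_blit_source_hint = false;
	int blit_source_slot = -1; // Assigned in declaration order below.
};

// Assign blit source slots 0..3 in declaration order; error on the fifth hinted uniform.
bool assign_blit_source_slots(std::vector<UniformDecl> &p_uniforms, std::string &r_error) {
	int next_slot = 0;
	for (UniformDecl &u : p_uniforms) {
		if (!u.has_blit_source_hint) {
			continue;
		}
		if (next_slot >= 4) {
			r_error = "At most 4 uniforms may use hint_blit_source (offending uniform: '" + u.name + "').";
			return false;
		}
		u.blit_source_slot = next_slot++;
	}
	return true;
}

int main() {
	std::vector<UniformDecl> uniforms = {
		{ "albedo_src", true }, { "normal_src", true }, { "unrelated", false }, { "orm_src", true }
	};
	std::string error;
	if (!assign_blit_source_slots(uniforms, error)) {
		printf("Shader error: %s\n", error.c_str());
		return 1;
	}
	for (const UniformDecl &u : uniforms) {
		printf("%s -> slot %d\n", u.name.c_str(), u.blit_source_slot);
	}
	return 0;
}

In the real compiler, the counter would have to live in some per-shader context (e.g. state threaded through _dump_node_code, or a uniform pass before it) rather than in a free function.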
I had a first cursory glance at the implementation, having already watched this being built as I was helping @ColinSORourke out; in broad strokes this looks really good. I need time to dive into the details, do some proper stress testing with it, and see if I can find any issues. @RodZill4, you might want to take a look at this as well; I think this would be very valuable for Material Maker. That would be a good stress test for writing to the 4 textures, as I can imagine that generating texture ORMs this way could be nice. @ColinSORourke, one thing I'm wondering about as well (though this may be worth a follow-up PR): instead of supplying 4 drawable textures to write to, we should also support supplying a texture array of up to 4 layers.
@BastiaanOlij Just making sure I understand correctly, do you mean
Edit: I'll also note that the user-exposed abstraction is currently always called from one of the output textures in the first place, and accepts up to 3 'other' output textures. So we would either need to perform the operation that merges the 'original' output and the 3 others into the TextureLayered, or design a new abstraction that can be called independently from any target.
This one, though it's really an edge case we might want to reserve until we have a proper use case for it. I can see there might be some benefits to doing so when manipulating stereo images.
After some testing on the example project: firstly, awesome work! Secondly, are uniforms meant to be unsupported? I tested around with sending custom data to COLOR instead of the blitted texture, and one thing I tried was a uniform of source color, and that threw an error.

shader_type texture_blit;
render_mode blend_mix;

uniform vec4 ColorMod : source_color;
uniform sampler2D source_texture : hint_blit_source;
uniform sampler2D source_texture2 : hint_blit_source2;
uniform sampler2D source_texture3 : hint_blit_source3;
uniform sampler2D source_texture4 : hint_blit_source4;

void blit() {
	// Copies from each whole source texture to a rect on each output texture.
	//COLOR = texture(source_texture, UV) * MODULATE;
	//COLOR = vec4(UV.x, UV.y, 0.0, 1.0);
	COLOR = ColorMod;
	COLOR2 = texture(source_texture2, UV);
	COLOR3 = texture(source_texture3, UV);
	COLOR4 = texture(source_texture4, UV);
}

Errors:
As an aside, would it be possible to sample the current texture / the destination UV we are copying into, inside of the blit shader? Or is that a no-go? I assume it's not possible, but I just wanted to be sure.
EDIT:
EDIT2:
Hmm, I moved all Material Maker rendering to compute shaders, and already write to several textures at once (when painting).
Yes, they are meant to be supported, great catch. I do see the bits of GLSL I'm missing and am going to get to work on that.
I can definitely see how that would be helpful. I don't think it would be too difficult; I'll just investigate the best way to surface those values to the user.
Funnily enough, it's currently Compatibility renderer only! Learning how to correctly compile shaders in Forward+ tripped me up a scooch, so I focused on implementing in the Compatibility renderer to get a working implementation sooner. By the end, this feature should be equally available in both render pipelines.
Force-pushed from a822c8a to 45b21e2.
Just pushed (4/26)
Looks good, uniforms are working just fine, as well as blend modes. I did notice one problem: I was unable to build until I added a return value on line 1727 in
I just added this:
Force-pushed from 45b21e2 to 5f162a0.
Just pushed (5/16) the Forward+ Renderer implementation! The demo project is unchanged; you should be able to run the same demo project on either renderer by swapping the renderer in Project Settings. Note:
Still to-do for 'completion':
A few more talking points (I'll bring these up at the next Rendering Meeting as well):
Force-pushed from 5f162a0 to b331c88.
I tried this PR and the demo project. It works normally. I find it a bit hard to understand the usage of
Also, why does DrawableTexture2D only accept 4 sources instead of using them as material uniforms? Custom uniform textures don't seem to work and are always black.
Performance reasons: it's faster to make a single call to the GPU with 8 textures than 4 calls to the GPU with 2 textures each. You can still call 4x Blit_Rects individually if you so choose.
Extra material uniforms with textures should work; I'll investigate and fix.
Force-pushed from b331c88 to cff480f.
Just pushed (5/17) fixes for binding custom Texture Uniforms from User Materials.
Force-pushed from cff480f to 1faef3f.
Would it be possible to add a way to erase from the textures? Erasing from textures is nearly impossible in the whole of Godot, short of using blend_mode subtract, which is sadly unusable in my case. I would have a use case for that right now: I am building a custom light system for my game because the built-in one is ugly, lacks features and is buggy, and I am constantly running into the same issue of needing to remove alpha from a texture. It's really hard; Godot offers almost no way of achieving it.

Godot_v4.4.1-stable_win64_0FnPjEcd77.mp4

If I could erase alpha from the drawable texture, I could build lights instead of shadows, which would stop the shadow leaking.
You may be interested in #102366, which exposes many more blend factors for use in custom shaders.
I think we could expose an argument like RDPipelineColorBlendState (or a new data class for this), see godotengine/godot-proposals#12443 (comment). That way you can do a custom blend by recreating the pipeline instead of creating a new shader.
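For illustration, this is the kind of blend description such an argument could carry, e.g. an "erase alpha" configuration that leaves the destination color untouched and only subtracts coverage from the destination alpha. Field and enum names follow RenderingDevice's PipelineColorBlendState as I recall them and should be treated as assumptions:

// Unverified sketch of a custom blend attachment for a drawable-texture blit.
RD::PipelineColorBlendState::Attachment erase_alpha;
erase_alpha.enable_blend = true;
// Color result: dst_color * 1 + src_color * 0 (keep what is already there).
erase_alpha.src_color_blend_factor = RD::BLEND_FACTOR_ZERO;
erase_alpha.dst_color_blend_factor = RD::BLEND_FACTOR_ONE;
erase_alpha.color_blend_op = RD::BLEND_OP_ADD;
// Alpha result: dst_alpha * (1 - src_alpha), carving out wherever the blit shader writes alpha.
erase_alpha.src_alpha_blend_factor = RD::BLEND_FACTOR_ZERO;
erase_alpha.dst_alpha_blend_factor = RD::BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
erase_alpha.alpha_blend_op = RD::BLEND_OP_ADD;

RD::PipelineColorBlendState blend_state;
blend_state.attachments.push_back(erase_alpha);

Exposing something like this per blit call would allow erase-style blends without requiring a new shader for each blend configuration.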
Actually, as far as I understand shaders, blend modes are completely useless for me in this regard, as they affect the whole texture. I need to partially erase a texture, which is currently possible only with viewports or canvas groups, and having a viewport per light is not good. I need to be able to carve polygons out of the texture's alpha; using blend_mode would affect other textures, which is not desired. Notice in the following demonstration that I can change the background color.

Photoshop_Tr98OoMy7R.mp4

In this image, the mask in Photoshop is applied AFTER the texture, erasing the alpha. This is not replicable with clip_children. Here is a video demonstration of the desired and undesired behavior. Godot still does not have an answer for it.

Photoshop_ernf5Y6nzY.mp4

Here are the lights.
Implementing DrawableTextures based on: godotengine/godot-proposals#7379
Force-pushed from 1faef3f to b111406.
godotengine/godot-proposals#7379
This PR is an implementation of @reduz's proposal for Drawable Textures in the Godot Engine. It was authored primarily by me, with some much-appreciated assistance from @clayjohn and @BastiaanOlij. This is still a WORK IN PROGRESS; I'm very open to feedback and suggestions.
See Demo Vid:
DrawableTexturesDemoVid.mov
Download Demo Project: DrawableTextureDemoProj.zip
The implementation follows Reduz's specification pretty strictly, with a few departures based on feedback from Rendering Contributor Meetings. Most importantly, we decided to leave the implementation of Blit_Polygon functions for a future pull request due to complexity. We also added a p_material parameter to Blit_Rect & Blit_Rect_Multi on the user-exposed resource to allow users to pass their own materials into the resource (if left blank, it still defaults to the default material).
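For reference, a hedged usage sketch of the server-side blit, based on the TextureStorage signature shown in the review snippet above; the RID variables are placeholders, and passing an empty material RID to fall back to the default material is an assumption:

// All output textures must be the same size (see texture_drawable_blit_rect above).
TypedArray<RID> outputs;
outputs.push_back(albedo_out_rid);
outputs.push_back(orm_out_rid);

TypedArray<RID> sources;
sources.push_back(albedo_src_rid);
sources.push_back(orm_src_rid);

texture_storage->texture_drawable_blit_rect(
		outputs,                // p_textures: drawable outputs, all the same size.
		Rect2i(0, 0, 256, 256), // p_rect: destination rect on each output.
		RID(),                  // p_material: empty RID = default blit material (assumption).
		Color(1, 1, 1, 1),      // p_modulate.
		sources,                // p_source_textures, one per output.
		0);                     // p_to_mipmap: write into mip level 0.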
Currently this PR implements:
The Rendering Server API for DrawableTextures
The new ShaderType, TextureBlit Shaders
And several user-exposed Resource classes to interact with DrawableTextures, most importantly of course, DrawableTexture2D
However, this is all in the Compatibility Renderer (Drivers/GLES3) only. All of the code for the Forward+ Renderer (Servers/Rendering/Renderer_RD) is non-functional in several ways at the moment; do not expect it to work or look correct.
The files with the most relevant new code are:
But I made a detailed listing of the currently modified files in a spreadsheet here
TODO Before I think this can be considered complete:
Bugsquad edit: Implements and fixes godotengine/godot-proposals#7379