Add Renderer.render_geometry
#3431
Labels: enhancement, New API, Performance, render/_sdl2
This is the second New API idea for the new `_render.Renderer` module. Please check out the notes on the other one for context (#3430).
Even if we add a way to render filled polygons, we still won't be wrapping everything SDL can do with a Renderer. `SDL_RenderGeometry` is extremely powerful: it can draw ANY 2D mesh in a single draw call, from a single object to an entire 2D Minecraft chunk, because each vertex carries its own color and its own texture coordinates, letting tiles and objects look different within the same batch. That's because it is very close to an actual GPU render call. Rendering with it is not as intuitive as the rest of pygame: you need to create the vertices, set up the texture coordinates and (if you want it to be faster) also set up the indices, which isn't straightforward. But pygame isn't only used by beginners, and someone who knows what they are doing might want all of SDL's capabilities, just like with the possible future pygame.gpu module. This is why I think `Renderer.render_geometry` should exist. If you have objections, I can listen.

The point of this method is to be FASTER: you could otherwise achieve the same result by stacking a very large number of Python calls to other renderer methods. The problem is that your vertices and indices live in Python structures, while SDL only accepts C structures. If we looped over the vertices every frame (there could be 500,000 of them!), the whole point of doing everything in one draw call would be defeated and it would still be very slow. Since this feature is usually used with static meshes, the Python array -> C array conversion should happen only once, or whenever the mesh changes, and `render_geometry` should take already-existing C arrays and feed them straight to SDL for the best possible performance. And what's the only way to achieve this? That's right: an intermediate object that does the conversion and keeps the resulting C arrays on the heap until they are used. I propose calling this object `GeometryMesh`. I added the "geometry" prefix to tie it to `render_geometry` and to avoid confusion with the commonly known 3D kind of mesh. The mesh would take a sequence of vertices and an optional sequence of indices; for now each vertex is represented by a sequence of position, color and texture coordinate. Tell me if you have objections. These are the stubs for it, so you understand my design:
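(A simplified sketch in stub form; the `Vertex` alias, the annotations and the exact `render_geometry` parameters are illustrative and may differ from the final API.)

```python
# Simplified stub sketch of the proposed API (illustrative only; the Vertex
# alias, parameter names and the exact render_geometry signature may differ
# from the real implementation).
from collections.abc import Sequence
from typing import Optional

# A vertex is a sequence of (position, color, texture coordinate),
# e.g. ((x, y), (r, g, b, a), (u, v)).
Vertex = tuple[Sequence[float], Sequence[int], Sequence[float]]

class GeometryMesh:
    def __init__(
        self,
        vertices: Sequence[Vertex],
        indices: Optional[Sequence[int]] = None,  # optional index buffer
    ) -> None:
        """Convert the Python vertex/index data to C arrays once and keep
        them on the heap until the mesh is drawn or rebuilt."""

class Renderer:
    def render_geometry(
        self,
        texture,             # Texture sampled through the per-vertex texture coordinates
        mesh: GeometryMesh,  # pre-converted vertices/indices, passed straight to SDL
    ) -> None:
        """Draw the whole mesh with a single SDL_RenderGeometry call."""
```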
What do you think? With my mediocre CPU I could draw 300k vertices at 300 FPS, which is impossible to achieve with any existing solution! Open for feedback. Keep in mind that this code already exists and works, but I can modify it.
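For reference, here is roughly how I imagine it being used (again a sketch; `GeometryMesh` and `render_geometry` are the proposed names from this issue and don't exist yet, while the Window/Renderer/Texture setup follows the current pygame._sdl2.video API):

```python
# Usage sketch for the proposed API. GeometryMesh and render_geometry do not
# exist yet; Window, Renderer and Texture come from the current pygame._sdl2.video.
import pygame
from pygame._sdl2.video import Window, Renderer, Texture

pygame.init()
window = Window("render_geometry demo", size=(800, 600))
renderer = Renderer(window)
atlas = Texture.from_surface(renderer, pygame.image.load("atlas.png"))

# Two indexed triangles forming a textured quad.
# Each vertex is (position, color, texture coordinate).
vertices = [
    ((100, 100), (255, 255, 255, 255), (0.0, 0.0)),
    ((300, 100), (255, 255, 255, 255), (1.0, 0.0)),
    ((300, 300), (255, 255, 255, 255), (1.0, 1.0)),
    ((100, 300), (255, 255, 255, 255), (0.0, 1.0)),
]
indices = [0, 1, 2, 0, 2, 3]  # reuse the 4 vertices for 2 triangles

# The Python -> C conversion happens once, here, not every frame.
mesh = GeometryMesh(vertices, indices)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    renderer.draw_color = (0, 0, 0, 255)
    renderer.clear()
    renderer.render_geometry(atlas, mesh)  # one draw call for the whole mesh
    renderer.present()
```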