OpenCL: add conv2d kernel #14403
base: master
Conversation
It seems that your kernel is the OpenCL vectorized version of the Vulkan kernel I proposed, but I do not see this kind of performance improvement over the indirect implementation on Vulkan. You might want to disable vectorized access to see what causes the improvement.
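For reference, a minimal sketch of that A/B experiment (kernel and macro names here are hypothetical, not from this PR): the same FP16 dot-product loop compiled either with scalar loads or with float4 access via vload_half4, toggled through a build option such as passing -DCONV2D_SCALAR to clBuildProgram.

```c
// Hypothetical toggle: rebuild with "-DCONV2D_SCALAR" passed to
// clBuildProgram to time the scalar path against the float4 path.
__kernel void dot_fp16(__global const half *a,   // rows x n, row-major
                       __global const half *b,
                       __global float      *out,
                       const int            n) { // n % 4 == 0 assumed
    const int gid = get_global_id(0);

#ifdef CONV2D_SCALAR
    // Scalar loads: one half per vload_half, accumulated in FP32.
    float acc = 0.0f;
    for (int k = 0; k < n; ++k) {
        acc = mad(vload_half(gid * n + k, a),
                  vload_half(gid * n + k, b), acc);
    }
#else
    // Vectorized loads: four halfs per vload_half4, float4 accumulator.
    float4 acc4 = (float4)(0.0f);
    for (int k = 0; k < n / 4; ++k) {
        acc4 = mad(vload_half4(gid * (n / 4) + k, a),
                   vload_half4(gid * (n / 4) + k, b), acc4);
    }
    const float acc = acc4.x + acc4.y + acc4.z + acc4.w;
#endif

    out[gid] = acc;
}
```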
I took inspiration from your CUDA implementation (thanks for it!), so it is a very similar approach. After disabling vectorization, the scalar kernel achieves 182.18 GFLOPS on the Adreno 830. I think the significant speedup over the indirect implementation is mainly due to the current OpenCL backend being unoptimized, rather than any specific feature of the new kernel. The …
It's good to know -- that could be the reason. I also observed this in Vulkan: the direct kernel is faster because the mul_mat kernel is not optimized well enough (at least not for my device), while the direct kernel is more or less tuned to my device. I also ported the direct kernel to CUDA and found that the indirect im2col + cuBLAS based mul_mat is ~33% faster than my direct kernel on Turing (the cuBLAS matmul is very highly optimized). I find this promising because there are lots of opportunities for optimization in the direct kernel (eliminating bank conflicts, warp-tiling, double buffering, faster computation of the offsets), so the direct kernel could become on par with the highly optimized indirect kernel in performance while not wasting lots of memory as im2col does.
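Of the optimizations listed, bank-conflict elimination is the cheapest to sketch. A generic illustration (not code from this PR, and all names are placeholders): pad each row of the local tile by one element so that work-items stepping down a column no longer map to the same bank.

```c
#pragma OPENCL EXTENSION cl_khr_fp16 : enable

#define TILE 16

// Illustrative only: assumes a single TILE x TILE work-group transposing
// one TILE x TILE block. The +1 padding staggers column accesses across
// local-memory banks (bank counts are powers of two), removing conflicts.
__kernel void transpose_padded(__global const half *src,
                               __global half       *dst) {
    __local half tile[TILE][TILE + 1];   // an unpadded TILE x TILE would conflict

    const int col = get_local_id(0);
    const int row = get_local_id(1);

    tile[row][col] = src[row * TILE + col];
    barrier(CLK_LOCAL_MEM_FENCE);

    // Column-wise read: with the padded row stride of TILE + 1, the TILE
    // addresses touched by one row of work-items fall in distinct banks.
    dst[row * TILE + col] = tile[col][row];
}
```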
Following up on #14316 and #14388, this PR adds a direct conv2d kernel for OpenCL. To maximize performance, the kernel uses a mixed-precision approach: data is staged in local memory as FP16 to save bandwidth, and the core arithmetic is vectorized with float4 for higher throughput.
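To make the two techniques concrete, here is a rough sketch of the pattern on a simpler tiled matrix-vector product (illustrative placeholders throughout, not the actual conv2d kernel in this PR): the shared operand is down-converted into an FP16 local tile with vstore_half4, then read back through vload_half4 so the multiply-accumulate runs as float4 in FP32.

```c
#pragma OPENCL EXTENSION cl_khr_fp16 : enable

#define TILE 64   // staged elements per pass; multiple of 4, n % TILE == 0 assumed

__kernel void matvec_fp16_tiled(__global const float *mat,  // rows x n, row-major
                                __global const float *vec,  // n
                                __global float       *out,  // rows
                                const int             n) {
    __local half vec_tile[TILE];   // shared operand, staged as FP16

    const int row = get_global_id(0);
    const int lid = get_local_id(0);
    const int lsz = get_local_size(0);

    float4 acc = (float4)(0.0f);

    for (int t = 0; t < n; t += TILE) {
        // Cooperatively down-convert the shared FP32 operand to FP16:
        // halves the local-memory footprint and the subsequent read traffic.
        for (int i = lid; i < TILE / 4; i += lsz) {
            vstore_half4(vload4(0, vec + t + 4 * i), i, vec_tile);
        }
        barrier(CLK_LOCAL_MEM_FENCE);

        // float4 multiply-accumulate; products and sums stay in FP32.
        for (int k = 0; k < TILE / 4; ++k) {
            const float4 m = vload4(0, mat + row * n + t + 4 * k);
            acc = mad(m, vload_half4(k, vec_tile), acc);
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    out[row] = acc.x + acc.y + acc.z + acc.w;
}
```

The accumulator stays in FP32 throughout; only the staged operands are rounded to FP16, which is where the small accuracy loss discussed below comes from.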
Because of this, a comparison against an indirect conv2d implementation does not run at identical precision, so it is not an apples-to-apples comparison. Since this kernel is mainly designed for Adreno GPUs, where performance is a significant bottleneck, I chose to sacrifice some accuracy for the benefit of maximum performance. As a result, some tests fail by a small margin due to the precision differences; I hope that's still okay!
I am opening this PR to gather feedback and to see whether this performance/accuracy trade-off is acceptable.
Performance:
@lhez @max-krasnyansky