rpc : add rpc_msg_set_tensor_hash_req #13353
Conversation
Use a dedicated struct for the RPC_CMD_SET_TENSOR_HASH request, which makes the code cleaner.
Pull Request Overview
This PR introduces a dedicated struct (rpc_msg_set_tensor_hash_req) to improve clarity for handling the RPC_CMD_SET_TENSOR_HASH request.
- Replaces manual serialization of tensor, offset, and hash with a structured request.
- Updates both the client-side sending and server-side handling functions to use the new struct.
Comments suppressed due to low confidence (1)
ggml/src/ggml-rpc/ggml-rpc.cpp:1128
- The variable 'size' is used in the debug message but is not defined in the updated function context. Either include the size in the rpc_msg_set_tensor_hash_req struct or compute it appropriately before using it.
GGML_PRINT_DEBUG("[%s] buffer: %p, data: %p, offset: %" PRIu64 ", size: %zu, hash: %" PRIx64 "\n", __func__, (void*)tensor->buffer, tensor->data, request.offset, size, request.hash);
        return false;
    }
}
- ggml_backend_tensor_set(tensor, cached_file.data(), offset, size);
+ ggml_backend_tensor_set(tensor, cached_file.data(), request.offset, size);
The function call passes 'size' as a parameter, but that variable is undeclared in this context after switching to the structured request. Consider adding a size field to the new request struct, or determining the correct size value before the call.
* origin/master: (39 commits)
  server : vision support via libmtmd (ggml-org#12898)
  sycl : implementation of reordered Q4_0 MMVQ for Intel GPUs (ggml-org#12858)
  metal : optimize MoE for large batches (ggml-org#13388)
  CUDA: FA support for Deepseek (Ampere or newer) (ggml-org#13306)
  llama : do not crash if there is no CPU backend (ggml-org#13395)
  CUDA: fix crash on large batch size for MoE models (ggml-org#13384)
  imatrix : Add --parse-special for enabling parsing of special tokens in imatrix calculation (ggml-org#13389)
  llama-run: add support for downloading models from ModelScope (ggml-org#13370)
  mtmd : fix batch_view for m-rope (ggml-org#13397)
  llama : one-off chat template fix for Mistral-Small-2503 (ggml-org#13398)
  rpc : add rpc_msg_set_tensor_hash_req (ggml-org#13353)
  vulkan: Allow up to 4096 elements for mul_mat_id row_ids (ggml-org#13326)
  server : (webui) rename has_multimodal --> modalities (ggml-org#13393)
  ci : limit write permission to only the release step + fixes (ggml-org#13392)
  mtmd : Expose helper_decode_image_chunk (ggml-org#13366)
  server : (webui) fix a very small misalignment (ggml-org#13387)
  server : (webui) revamp the input area, plus many small UI improvements (ggml-org#13365)
  convert : support rope_scaling type and rope_type (ggml-org#13349)
  mtmd : fix the calculation of n_tokens for smolvlm (ggml-org#13381)
  context : allow cache-less context for embeddings (ggml-org#13108)
  ...