
PR: Refine ggml-qnn backend (QNN, Qualcomm Neural Network, aka Qualcomm AI Engine Direct) for latest ggml, whisper.cpp, llama.cpp #12049


Closed · wants to merge 37 commits

Changes shown from 1 commit.

Commits (37)
74029f3
ggml-qnn: add Qualcomm QNN backend for GGML
zhouwg Feb 14, 2025
986a37d
ggml-qnn: santiy check
zhouwg Feb 15, 2025
af604d5
ggml-qnn: update script build-run-android.sh to compare peformance of…
zhouwg Feb 16, 2025
816ebb9
ggml-qnn: fix minor issue in test-backend-ops.cpp
zhouwg Feb 17, 2025
2a8020b
ggml-qnn: merge QNN RPC feature from https://github.com/zhouwg/kantv/…
zhouwg Feb 18, 2025
da4d007
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 18, 2025
7cb1a86
ggml-qnn: a concise approach to offload mulmat to QNN backend(sync fr…
zhouwg Feb 19, 2025
c8cf291
ggml-qnn: remove redundant codes
zhouwg Feb 20, 2025
84317c7
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 20, 2025
c6a04c6
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 20, 2025
59a2fbe
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 21, 2025
1e6f4a7
ggml-qnn: add Qualcomm QNN backend for GGML
zhouwg Feb 14, 2025
6974079
ggml-qnn: santiy check
zhouwg Feb 15, 2025
ea970f9
ggml-qnn: update script build-run-android.sh to compare peformance of…
zhouwg Feb 16, 2025
d0c01c0
ggml-qnn: fix minor issue in test-backend-ops.cpp
zhouwg Feb 17, 2025
b48ad85
ggml-qnn: merge QNN RPC feature from https://github.com/zhouwg/kantv/…
zhouwg Feb 18, 2025
5ac113b
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 18, 2025
31152be
ggml-qnn: a concise approach to offload mulmat to QNN backend(sync fr…
zhouwg Feb 19, 2025
e16dd3c
ggml-qnn: remove redundant codes
zhouwg Feb 20, 2025
1d56350
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 20, 2025
12f4911
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 20, 2025
37985f9
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 21, 2025
9fa0765
rebase to the latest upstream
zhouwg Feb 21, 2025
60ca941
ggml-qnn: fix a minior typo in internal doc
zhouwg Feb 23, 2025
d5d110d
ggml-qnn: refine function ggml_qnn_create_general_tensor() to avoid c…
zhouwg Feb 23, 2025
c687f26
ggml-qnn: fix a minor typo in source code
zhouwg Feb 24, 2025
d1b9d1b
build: avoid ggml-qnn backend breaking other backend's builds
zhouwg Feb 24, 2025
35a289a
ggml-qnn: remove redundant codes to make PR reviewers happy
zhouwg Feb 25, 2025
71dae47
ggml-qnn: refine code format
zhouwg Feb 25, 2025
d80b289
ggml-qnn: offload quantized type mulmat to QNN backend
zhouwg Feb 26, 2025
eb47de0
ggml-qnn: benchmark of real LLM inference on a Snapdragon 8 Gen3 phone
zhouwg Feb 26, 2025
36b58e3
ggml-qnn: refine source code structure to make code more clearly
zhouwg Feb 27, 2025
302e014
ggml-qnn: refine code
zhouwg Feb 27, 2025
a134884
ggml-qnn: enable release build with necessary logs to make reviewers …
zhouwg Feb 27, 2025
137b347
ggml-qnn: enable all quantize type with 2d mulmat
zhouwg Feb 27, 2025
653bc33
ggml-qnn: enable log output of GGMLQNN_LOG_INFO in command line mode …
zhouwg Feb 28, 2025
9d10e4f
ggml-qnn: Windows port --- step2
zhouwg Feb 28, 2025
Commit 37985f9ee34b79226155c4da8086b5ae27ff4b44: ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg committed Feb 21, 2025
13 changes: 6 additions & 7 deletions ggml/src/ggml-qnn/ggml-qnn.cpp

@@ -1132,8 +1132,8 @@ struct ggml_backend_qnn_context {
     struct qcom_socinfo socinfo;
 };

-//TODO: the following global vars and three helper funcs should be removed in the future
-static int32_t g_ggmltensor_idx = 0;
+//the following helper funcs are used to ensure every QNN tensor name is unique
+static std::atomic<int32_t> g_ggmltensor_idx(0);
 static void reset_idx() {
     g_ggmltensor_idx = 0;
 }
@@ -1143,7 +1143,7 @@ static void inc_idx() {
 }

 static int32_t get_idx() {
-    return g_ggmltensor_idx;
+    return g_ggmltensor_idx.load();
 }

 // file:///opt/qcom/aistack/qairt/2.31.0.250130/docs/QNN/general/quantization.html
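The two hunks above replace a plain int32_t counter with std::atomic<int32_t> so that concurrent tensor creation cannot race on the index. A minimal standalone sketch of these helpers (same names as in the diff, compiled outside the real backend):

```cpp
// Standalone sketch of the counter helpers from this PR. With a plain
// int32_t, two threads creating tensors at once could read the same index
// and produce duplicate tensor names; std::atomic makes the increment a
// single atomic read-modify-write.
#include <atomic>
#include <cstdint>

static std::atomic<int32_t> g_ggmltensor_idx(0);

static void reset_idx() { g_ggmltensor_idx = 0; }

static void inc_idx() { g_ggmltensor_idx++; }  // atomic fetch-add

static int32_t get_idx() { return g_ggmltensor_idx.load(); }
```

Note that a caller doing get_idx() followed by inc_idx() as two separate steps is still not atomic as a pair; a single fetch_add that returns the previous value would close that window.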
@@ -1474,7 +1474,7 @@ static Qnn_Tensor_t * ggml_qnn_create_general_tensor(const ggml_tensor * tensor,
     Qnn_ErrorHandle_t error = QNN_SUCCESS;
     char tensor_name[GGML_MAX_NAME] = {0};

-    //TODO:remove get_idx() and inc_idx() in the future but ensure the tensor name is unique
+    //ensure the tensor name is unique
     if (nullptr != name) {
         snprintf(tensor_name, GGML_MAX_NAME, "tensor_%-8d", get_idx());
     } else {
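The snprintf format in the hunk above, "tensor_%-8d", left-justifies the counter in an 8-character field, giving fixed-width names for indices up to 8 digits. A small sketch (assuming GGML_MAX_NAME is 64, its value in upstream ggml.h) of the names this produces:

```cpp
// Sketch of the tensor-name scheme from the hunk above: a bounded buffer
// plus a left-justified, 8-wide decimal index.
#include <cstdio>
#include <string>

#ifndef GGML_MAX_NAME
#define GGML_MAX_NAME 64  // value assumed from upstream ggml.h
#endif

static std::string make_tensor_name(int idx) {
    char tensor_name[GGML_MAX_NAME] = {0};
    snprintf(tensor_name, GGML_MAX_NAME, "tensor_%-8d", idx);
    return tensor_name;
}
```

For example, make_tensor_name(7) yields "tensor_7" padded with trailing spaces to 15 characters, so short and long indices line up in logs.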
@@ -2762,7 +2762,6 @@ int qnn_instance::qnn_finalize() {
     Qnn_ErrorHandle_t error = QNN_SUCCESS;

     GGMLQNN_LOG_DEBUG("enter %s\n", __func__);
-    //TODO:should be removed in the future
     reset_idx();

     free_rpcmem();
@@ -3451,7 +3450,7 @@ static void ggml_qnn_mul_mat(ggml_backend_t backend, ggml_tensor * op) {
         }
     };

-    Qnn_Tensor_t out_0_inputs[] = {*p_tensor0,*p_tensor1};
+    Qnn_Tensor_t out_0_inputs[] = {*p_tensor0, *p_tensor1};
     Qnn_Tensor_t out_0_outputs[] = {*p_tensor2_transpose};
     Qnn_OpConfig_t out_0 = {
         QNN_OPCONFIG_VERSION_1, .v1 =
@@ -3488,7 +3487,7 @@ static void ggml_qnn_mul_mat(ggml_backend_t backend, ggml_tensor * op) {

     //step-6: finalize qnn graph and execute qnn graph
     CHECK_QNN_API(error, qnn_raw_interface.graphFinalize(graph_handle, NULL, NULL));
-    Qnn_Tensor_t input_tensors_0[] = {*p_tensor0,*p_tensor1};
+    Qnn_Tensor_t input_tensors_0[] = {*p_tensor0, *p_tensor1};
     Qnn_Tensor_t output_tensors_0[] = {*p_tensor2};
     CHECK_QNN_API(error, qnn_raw_interface.graphExecute(graph_handle,
                                                         input_tensors_0, 2,
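The Qnn_OpConfig_t initializer in the hunks above ({ QNN_OPCONFIG_VERSION_1, .v1 = ... }) follows QNN's versioned-struct convention: a version tag selects which union member is valid. An illustrative stand-in using hypothetical types (not the real QNN headers) to show how that pattern reads and is checked:

```cpp
// Hypothetical mirror of the tag-plus-union layout that QNN's
// Qnn_OpConfig_t uses: callers must check `version` before touching the
// matching union member, which lets the ABI add a v2 later without
// breaking v1 consumers. All names here are invented for illustration.
#include <cstdint>

enum MyOpConfigVersion : uint32_t { MY_OPCONFIG_VERSION_1 = 1 };

struct MyOpConfigV1 {
    const char * name;        // op name, e.g. a mulmat node
    uint32_t     num_inputs;  // how many input tensors the op consumes
};

struct MyOpConfig {
    MyOpConfigVersion version;  // tag: which union member is live
    union {
        MyOpConfigV1 v1;
    };
};

static uint32_t op_num_inputs(const MyOpConfig & cfg) {
    // Only v1 exists in this sketch; unknown versions report zero inputs.
    return cfg.version == MY_OPCONFIG_VERSION_1 ? cfg.v1.num_inputs : 0;
}
```

This mirrors why the diff passes two input tensors ({*p_tensor0, *p_tensor1}) alongside the op config: the config describes the node, while the tensor arrays bind its inputs and outputs at graph-build and graph-execute time.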