
Wq/fix tensor dispatchkey #35


Closed

Conversation

wanfengcxz
Collaborator

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it receive feedback more easily. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.

Motivation

On the Ascend dev branch, Dynamo captures multiple graphs instead of the two expected graphs (prefill and decode). With export TORCH_LOGS="dynamo,guards,bytecode" I can see multiple guards; the diff between two of the guards is shown in the following figure:
[figure: diff between the two guards]
By debugging, I found two places where tensors are created without the "@torch.inference_mode()" context.
Here is a minimal reproduction:

import torch
import torch_npu
import numpy as np

# Forbid in-place ops, avoid keeping memory references for intermediate
# results, and fully disable gradient computation
@torch.inference_mode()
def test_dispatchkey1():
    print("----- infermode tensor dispatchkey:")

    x = torch.tensor([1, 2])
    print(torch._C._dispatch_keys(x))
    x = torch.randn(2, 3)
    print(torch._C._dispatch_keys(x))
    a = np.array([1, 2, 3])
    b = torch.from_numpy(a)
    print(torch._C._dispatch_keys(b))

def test_dispatchkey2():
    print("----- tensor dispatchkey:")

    x = torch.tensor([1, 2])
    print(torch._C._dispatch_keys(x))
    x = torch.randn(2, 3)
    print(torch._C._dispatch_keys(x))
    a = np.array([1, 2, 3])
    b = torch.from_numpy(a)
    print(torch._C._dispatch_keys(b))

if __name__ == "__main__":
    test_dispatchkey1()
    test_dispatchkey2()

"""
----- infermode tensor dispatchkey:
DispatchKeySet(CPU, AutocastCPU)
DispatchKeySet(CPU, AutocastCPU)
DispatchKeySet(CPU, AutocastCPU)
----- tensor dispatchkey:
DispatchKeySet(CPU, ADInplaceOrView, AutogradCPU, AutocastCPU)
DispatchKeySet(CPU, ADInplaceOrView, AutogradCPU, AutocastCPU)
DispatchKeySet(CPU, ADInplaceOrView, AutogradCPU, AutocastCPU)
"""

Modification

Please briefly describe what modification is made in this PR.

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  3. If the modification has a dependency on downstream projects of a newer version, this PR should be tested with all supported versions of downstream projects.
  4. The documentation has been modified accordingly, like docstring or example tutorials.

yao-fengchen and others added 2 commits April 10, 2025 11:39
* add update_weights for ascend moe

* remove useless cast

* add some params for mla attention

* modify kv_cache layout for ascend graph_mode

* fix dlinfer flash_attention kernel

* remove head_size in attention

* unified kv_cache layout

* update code
@wanfengcxz wanfengcxz closed this Apr 10, 2025