
self.attn_probs in ResidualAttentionBlock() causes problems - how to make explainability work with mlfoundations / open_clip model #32


Description

@tahirmiri

Hi,

I am running experiments with the OpenCLIP model from mlfoundations. It turns out that OpenCLIP's architecture differs from the CLIP architecture you defined. I ran into the same error as mentioned here, then tried to add the missing parts, but I still end up with attn_probs = None at the end instead of a tensor populated during the forward pass.

Could you please guide me on how to make your code work with OpenCLIP? In particular, what would I need to modify in the corresponding OpenCLIP modules (transformer.py is the equivalent of your model.py)?
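For concreteness, here is the kind of patch I have in mind. This is only a minimal sketch, not a working solution: `patch_block` and the `attn_probs` / `attn_grad` attributes are my own names mirroring your model.py, and I am assuming that `block.attn` is a stock `torch.nn.MultiheadAttention` with a fused `in_proj_weight` and that the towers run in (seq, batch, dim) order, which is how the open_clip versions I have looked at are set up. The attention math is redone by hand because the weights tensor that `nn.MultiheadAttention` returns with `need_weights=True` is a side branch of the autograd graph, so gradient hooks registered on it never fire during a normal backward pass:

```python
import math
import torch.nn.functional as F
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)

def patch_block(block):
    """Store per-head attention probabilities (and their gradients) on a
    ResidualAttentionBlock at every forward pass."""
    block.attn_probs = None
    block.attn_grad = None
    attn = block.attn  # the stock torch.nn.MultiheadAttention

    def attention(q_x, k_x=None, v_x=None, attn_mask=None):
        # open_clip's blocks only use self-attention (k_x/v_x default to q_x).
        L, N, D = q_x.shape                 # (seq, batch, dim)
        H, hd = attn.num_heads, D // attn.num_heads
        # Fused q/k/v projection, as nn.MultiheadAttention does internally.
        q, k, v = F.linear(q_x, attn.in_proj_weight, attn.in_proj_bias).chunk(3, dim=-1)
        q = q.reshape(L, N * H, hd).transpose(0, 1)   # (N*H, L, hd)
        k = k.reshape(L, N * H, hd).transpose(0, 1)
        v = v.reshape(L, N * H, hd).transpose(0, 1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(hd)
        if attn_mask is not None:           # additive float mask (text tower)
            scores = scores + attn_mask.to(scores.dtype)
        probs = scores.softmax(dim=-1)
        # Save the live graph node, so a later backward actually reaches it.
        block.attn_probs = probs
        if probs.requires_grad:
            probs.register_hook(lambda g: setattr(block, "attn_grad", g))
        out = (probs @ v).transpose(0, 1).reshape(L, N, D)
        return attn.out_proj(out)           # attention dropout omitted (CLIP uses 0)

    block.attention = attention             # instance attribute shadows the method

for blk in model.visual.transformer.resblocks:   # image tower
    patch_block(blk)
for blk in model.transformer.resblocks:          # text tower
    patch_block(blk)
```

With both towers patched, a backward pass from the image-text similarity should leave the per-head attention maps in blk.attn_probs and their gradients in blk.attn_grad, which as far as I understand is what your relevance rules need. Does that match what your code expects, or is more required?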

P.S. If you are interested in working on this for hourly compensation, I would gladly discuss it. Please let me know.
