Stars
Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, B…
A fast, bump-allocated virtual DOM library for Rust and WebAssembly.
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-…
A Collection of Variational Autoencoders (VAE) in PyTorch.
DVIB is an information bottleneck method that tries to disentangle multi-view data into shared and private representations.
Implementation (with some experimentation) of the paper "Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow" (arXiv -> …
Code for image generation of Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow
(WWW'20) Official code for the paper "Multimodal Deep Variational Information Bottleneck for Micro-video Popularity Prediction".
A Python implementation of various versions of the Information Bottleneck, including automated parameter searching.
PyTorch implementation of Deep Variational Information Bottleneck
Implementation of Multi-View Information Bottleneck
Code for the ICML 2020 paper "CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information"
An implementation of Deep Variational Information Bottleneck in PyTorch (https://arxiv.org/pdf/1612.00410.pdf)
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis.
Code for "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion"
A Video-based Cross-modal Auxiliary Network (VCAN) proposed for multimodal sentiment analysis
Code for the paper "Interactive Fusion Network with Recurrent Attention for Multimodal Aspect-based Sentiment Analysis"
[COLING 2022] Learning from Adjective-Noun Pairs: A Knowledge-enhanced Framework for Target-Oriented Multimodal Sentiment Classification
Code for MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis https://arxiv.org/abs/2201.09828 (to be presented at ICASSP 2022)
Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations (MIB for multimodal sentiment analysis)
This repository contains the implementation of the paper "Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis"
Group Gated Fusion on Attention-based Bidirectional Alignment for Multimodal Emotion Recognition
Unimodal/multimodal sentiment analysis and emotion recognition
A lightweight, scalable, and general framework for visual question answering research
Code and Splits for the paper "A Fair and Comprehensive Comparison of Multimodal Tweet Sentiment Analysis Methods", In Proceedings of the 2021 Workshop on Multi-Modal Pre-Training for Multimedia Un…
This project is out of date; I don't remember the details inside...
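Several entries above implement the Deep Variational Information Bottleneck of Alemi et al. (arXiv 1612.00410), whose objective is a prediction term plus a β-weighted KL penalty that compresses the latent code toward a standard normal prior. A minimal NumPy sketch of that loss, assuming a diagonal-Gaussian encoder and hypothetical array shapes (batch, latent_dim):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over the latent dimension."""
    return 0.5 * np.sum(mu ** 2 + np.exp(logvar) - logvar - 1.0, axis=-1)

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    """Beta-weighted VIB objective: cross-entropy + beta * KL(p(z|x) || N(0, I))."""
    shifted = logits - logits.max(axis=-1, keepdims=True)        # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(labels.shape[0]), labels]         # per-example cross-entropy
    return float(np.mean(nll + beta * gaussian_kl(mu, logvar)))
```

In the papers the prediction term is an expectation over reparameterized samples z = mu + exp(logvar / 2) * eps; here only the closed-form KL term is shown, and the classifier head is assumed to produce `logits` from such a sample.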
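The CLUB repository above bounds mutual information from above as I(x; y) ≤ E_{p(x,y)}[log q(y|x)] − E_{p(x)}E_{p(y)}[log q(y|x)]. A minimal NumPy sketch of the sampled estimator, assuming a diagonal-Gaussian variational net q(y|x) whose outputs (mu, logvar) for each paired x_i are already given; constants that cancel between the two expectations are dropped:

```python
import numpy as np

def club_upper_bound(mu, logvar, y):
    """Sampled CLUB estimate of I(x; y), assuming q(y|x) = N(mu(x), diag(exp(logvar(x)))).

    mu, logvar: variational-net outputs at each paired sample x_i, shape (n, d).
    y: the paired samples y_i, shape (n, d).
    """
    # positive term: log q(y_i | x_i) on matched pairs
    pos = -0.5 * np.sum((y - mu) ** 2 / np.exp(logvar) + logvar, axis=-1)
    # negative term: log q(y_j | x_i) over all pairs, approximating the marginal
    diff = y[None, :, :] - mu[:, None, :]                          # (n, n, d)
    neg = -0.5 * np.sum(diff ** 2 / np.exp(logvar)[:, None, :]
                        + logvar[:, None, :], axis=-1)             # (n, n)
    return float(np.mean(pos) - np.mean(neg))
```

When mu ignores x (e.g. a constant), matched and shuffled pairs score the same log-likelihood and the estimate collapses to zero; when mu predicts y well, matched pairs outscore shuffled ones and the estimate is positive.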