PyTorch implementation of the paper "AI-accelerated Discovery of Altermagnetic Materials"
PyTorch implementation of the paper: Small Pre-trained Language Models Can be Fine-tuned as Large Models via Over-Parameterization.
This is a PyTorch implementation of the paper: Self-Supervised Graph Transformer on Large-Scale Molecular Data
[KDD'22] Official PyTorch implementation for "Towards Universal Sequence Representation Learning for Recommender Systems".
CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex.
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
Data and software for building the ACL Anthology.
A concise but complete full-attention transformer with a set of promising experimental features from various papers
Dataset Condensation (ICLR'21 and ICML'21)
Repository containing code for the paper "How to Train BERT with an Academic Budget"
Keras implementation of transformers for humans
From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation
ShadowWalker1995 / hosts
Forked from googlehosts/hosts. Mirror: https://coding.net/u/scaffrey/p/hosts/git
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Scripts for the ImageNet-32 dataset
Repo about neural networks for image handling
Replace the fully-connected layers of FC2, LeNet-5, VGG, ResNet, and DenseNet with MPO (matrix product operators)
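
The last entry refers to compressing fully-connected layers with matrix product operators. As a rough illustration only, not that repository's code, a two-core MPO replacement for a dense PyTorch layer could look like the sketch below; the class name `MPOLinear` and all dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class MPOLinear(nn.Module):
    """Illustrative two-core MPO factorization of a fully-connected layer.

    The (d_in x d_out) dense weight, with d_in = i1 * i2 and d_out = o1 * o2,
    is approximated by two smaller cores joined through a bond dimension r,
    so the parameter count drops from i1*i2*o1*o2 to i1*o1*r + r*i2*o2.
    """

    def __init__(self, i1, i2, o1, o2, bond_dim, bias=True):
        super().__init__()
        self.i1, self.i2, self.o1, self.o2 = i1, i2, o1, o2
        # The two MPO cores that replace the dense weight matrix.
        self.core1 = nn.Parameter(torch.randn(i1, o1, bond_dim) * 0.02)
        self.core2 = nn.Parameter(torch.randn(bond_dim, i2, o2) * 0.02)
        self.bias = nn.Parameter(torch.zeros(o1 * o2)) if bias else None

    def forward(self, x):
        # x: (batch, i1 * i2) -> reshape so each input index meets its core.
        x = x.view(-1, self.i1, self.i2)
        # Contract the input with both cores: (b, i1, i2) -> (b, o1, o2).
        y = torch.einsum("bij,iar,rjc->bac", x, self.core1, self.core2)
        y = y.reshape(-1, self.o1 * self.o2)
        return y + self.bias if self.bias is not None else y

# Hypothetical drop-in for a 784 -> 256 layer (e.g. a LeNet-5-style head):
# 784 = 28 * 28, 256 = 16 * 16, bond dimension 8.
fc = MPOLinear(i1=28, i2=28, o1=16, o2=16, bond_dim=8)
print(fc(torch.randn(4, 784)).shape)  # torch.Size([4, 256])
```

With these example sizes the two cores hold 28·16·8 + 8·28·16 = 7,168 parameters versus 200,704 for the dense 784×256 weight, which is the kind of compression the MPO replacement is after.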