<li><strong>[2024-11-08]</strong> We are training a scaled-up version with ten times the training data. Upcoming updates: EVA ViT-E, InternVL-300M, SigLIP-SO-400M, and more VLLM results. Stay tuned for the most powerful CLIP models. Thanks for your star!</li>
<li><strong>[2024-11-06]</strong> OpenAI's CLIP and EVA02's ViT models are now available on <a href="https://huggingface.co/collections/microsoft/llm2clip-672323a266173cfa40b32d4c" target="_blank">HuggingFace</a>.</li>
<li><strong>[2024-11-01]</strong> Our paper was accepted at the NeurIPS 2024 SSL Workshop!</li>
</ul>
</div>
</div>