
Commit 71adea8 ("update name"), 1 parent: ee23852

File tree: 1 file changed (+19 −10 lines)


docs/index.html

Lines changed: 19 additions & 10 deletions
@@ -71,12 +71,12 @@ <h2 class="subtitle is-4">Weiquan Huang<sup>1*</sup>, Aoqi Wu<sup>1*</sup>, Yifa
 <!-- News Section -->
 <section class="section">
 <div class="container">
-<h2 class="title is-3 has-text-centered">News <span class="icon"><i class="fas fa-rocket"></i></span></h2>
-<div class="content has-text-centered">
+<h2 class="title is-3 has-text-centered">News <span class="icon"><i class="fas fa-rocket"></i></span></h2>
+<div class="content has-text-left">
 <ul>
-<li><strong>[2024-11-08]</strong> We are training a scaled-up version with ten times the dataset. Updates: EVA ViT-E, InternVL-300M, SigCLIP-SO-400M, and more VLLM results. Stay tuned for the most powerful CLIP models. Thanks for your star!</li>
-<li><strong>[2024-11-06]</strong> OpenAI's CLIP and EVA02's ViT models are now available on <a href="https://huggingface.co/collections/microsoft/llm2clip-672323a266173cfa40b32d4c" target="_blank">HuggingFace</a>.</li>
-<li><strong>[2024-11-01]</strong> Our paper was accepted at the NeurIPS 2024 SSL Workshop!</li>
+<li><strong>[2024-11-08]</strong> We are training a scaled-up version with ten times the dataset. Updates: EVA ViT-E, InternVL-300M, SigCLIP-SO-400M, and more VLLM results. Stay tuned for the most powerful CLIP models. Thanks for your star!</li>
+<li><strong>[2024-11-06]</strong> OpenAI's CLIP and EVA02's ViT models are now available on <a href="https://huggingface.co/collections/microsoft/llm2clip-672323a266173cfa40b32d4c" target="_blank">HuggingFace</a>.</li>
+<li><strong>[2024-11-01]</strong> Our paper was accepted at the NeurIPS 2024 SSL Workshop!</li>
 </ul>
 </div>
 </div>
@@ -404,8 +404,8 @@ <h2 class="title is-3 has-text-centered"><span class="dvima">LLM2CLIP can make S
 <td>3.7</td>
 <td>7.3</td>
 </tr>
-<tr>
-<td>&#8195;<strong>+ ourmethod</strong></td>
+<tr style="background-color: #E8F4FF;">
+<td>&#8195;<strong> &emsp; + LLM2CLIP</strong></td>
 <td><strong>86.9</strong></td>
 <td><strong>98.1</strong></td>
 <td><strong>99.3</strong></td>
@@ -506,7 +506,7 @@ <h2 class="title is-3 has-text-centered"><span class="dvima">LLM2CLIP can make m
 <td><strong>39.71</strong></td>
 </tr>
 <tr style="background-color: #E8F4FF;">
-<td>&#8195;<strong>+ ourmethod</strong></td>
+<td>&#8195;<strong> &emsp; + LLM2CLIP</strong></td>
 <td><strong>79.80</strong></td>
 <td><strong>63.15</strong></td>
 <td><strong>52.37</strong></td>
@@ -537,7 +537,7 @@ <h2 class="title is-3 has-text-centered"><span class="dvima">LLM2CLIP can make m
 </section>
 
 <section class="section" id="BibTeX">
-<div class="container is-max-desktop content has-text-centered">
+<div class="container is-max-desktop content">
 <h2 class="title">BibTeX</h2>
 <pre><code>@misc{huang2024llm2clippowerfullanguagemodel,
 title={LLM2CLIP: Powerful Language Model Unlock Richer Visual Representation},
@@ -557,4 +557,13 @@ <h2 class="title">BibTeX</h2>
 <div class="column">
 <div class="content has-text-centered">
 <p>
-© 2024 Microsoft <
+© 2024 Microsoft <br/>
+</p>
+</div>
+</div>
+</div>
+</div>
+</footer>
+
+</body>
+</html>
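Beyond the row renames, the final hunk is worth a note: the original file ended mid-tag (`© 2024 Microsoft <`), so the page shipped with an unterminated footer and no closing `</body>`/`</html>`. A sketch of the footer as it stands after this commit, reconstructed from the added lines (nesting and indentation are illustrative; the intermediate column/container wrappers are assumed from the surrounding context, not shown in the diff):

```html
<footer>
  <div class="container">
    <div class="columns is-centered">
      <div class="column">
        <div class="content has-text-centered">
          <p>
            © 2024 Microsoft <br/>
          </p>
        </div>
      </div>
    </div>
  </div>
</footer>

</body>
</html>
```

With the four closing `</div>` tags restored, every container opened above the copyright line is now matched before `</footer>`.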
