
Commit 4f5125a

update readme
1 parent: 5eec912

File tree: 1 file changed, +2 -1 lines


README.md

Lines changed: 2 additions & 1 deletion
@@ -33,12 +33,13 @@ Note the number in Table 1 in the main paper is after task-specific finetuning.

## :fire: News

+* **[2023.07.19]** :roller_coaster: We are providing a new demo command/code for inference ([DEMO.md](asset/DEMO.md))!
* **[2023.07.19]** :roller_coaster: We are excited to release the X-Decoder training code ([INSTALL.md](asset/INSTALL.md), [DATASET.md](asset/DATASET.md), [TRAIN.md](asset/TRAIN.md), [EVALUATION.md](asset/EVALUATION.md))!
* **[2023.07.10]** We release [Semantic-SAM](https://github.com/UX-Decoder/Semantic-SAM), a universal image segmentation model that can segment and recognize anything at any desired granularity. Code and checkpoint are available!
* **[2023.04.14]** We are releasing [SEEM](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once), a new universal interactive interface for image segmentation! You can use it for any segmentation task, way beyond what X-Decoder can do!

<p align="center">
-<img src="inference_demo/images/teaser_new.png" width="90%" height="90%">
+<img src="inference/images/teaser_new.png" width="90%" height="90%">
</p>

* **[2023.03.20]** As an aspiration of our X-Decoder, we developed OpenSeeD ([[Paper](https://arxiv.org/pdf/2303.08131.pdf)][[Code](https://github.com/IDEA-Research/OpenSeeD)]) to enable open-vocabulary segmentation and detection with a single model. Check it out!
