
Commit 611fbab

fix typo
1 parent 1f3aaf9 commit 611fbab


README.md

Lines changed: 4 additions & 4 deletions
@@ -8,7 +8,7 @@ by Yibo Yang, Zhisheng Zhong, Tiancheng Shen, and [Zhouchen Lin](http://www.cis.
 ### citation
 If you find CliqueNet useful in your research, please consider citing:
 
-    @inproceedings{yang18,
+    @article{yang18,
       author={Yibo Yang and Zhisheng Zhong and Tiancheng Shen and Zhouchen Lin},
       title={Convolutional Neural Networks with Alternately Updated Clique},
       journal={arXiv preprint arXiv:1802.10419},
@@ -50,7 +50,7 @@ python train.py --gpu [gpu id] --dataset [cifar-10 or cifar-100 or SVHN] --k [fi
 
 ## Ablation experiments
 
-With the feedback connections, CliqueNet alternately re-updates previous layers with updated layers to produce refined features. The weights among layers are re-used multiple times, so that a deeper representation space can be attained with a fixed number of parameters. To test the effectiveness of CliqueNet's feature refinement, we analyze the features generated in different stages by conducting experiments with different versions of CliqueNet. As illustrated by Fig2, CliqueNet(I+I) only uses the Stage-I feature. CliqueNet(I+II) uses the Stage-I feature concatenated with the input layer as the block feature, but transits the Stage-II feature into the next block. CliqueNet(II+II) only uses refined features.
+With the feedback connections, CliqueNet alternately re-updates previous layers with updated layers to produce refined features. The weights among layers are re-used multiple times, so that a deeper representation space can be attained with a fixed number of parameters. To test the effectiveness of CliqueNet's feature refinement, we analyze the features generated in different stages by conducting experiments with different versions of CliqueNet. As illustrated by Fig 2, CliqueNet(I+I) only uses the Stage-I feature. CliqueNet(I+II) uses the Stage-I feature concatenated with the input layer as the block feature, but transits the Stage-II feature into the next block. CliqueNet(II+II) only uses refined features.
 
 <div align=left><img src="https://raw.githubusercontent.com/iboing/CliqueNet/master/img/fig3.JPG" width="55%" height="55%">
 
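(Editor's note: a minimal numpy sketch of how the three variants above could assemble their features, following the description in this hunk; `block_features` and its handling of the block input `x0` are illustrative assumptions, not code from this repo.)

```python
import numpy as np

def block_features(x0, stage1, stage2, variant):
    """Assemble a block's output feature and the feature transited to the
    next block. Features are concatenated along the channel axis (NCHW).

    x0      -- input layer of the block
    stage1  -- list of Stage-I feature maps (first, feed-forward pass)
    stage2  -- list of Stage-II feature maps (refined via feedback)
    """
    if variant == "I+I":
        # Only Stage-I features are used, both as the block feature
        # and as the feature transited into the next block.
        block = np.concatenate([x0] + stage1, axis=1)
        transit = np.concatenate(stage1, axis=1)
    elif variant == "I+II":
        # Stage-I (with the input layer) forms the block feature,
        # but the refined Stage-II feature transits to the next block.
        block = np.concatenate([x0] + stage1, axis=1)
        transit = np.concatenate(stage2, axis=1)
    elif variant == "II+II":
        # Only refined Stage-II features are used.
        block = np.concatenate([x0] + stage2, axis=1)
        transit = np.concatenate(stage2, axis=1)
    else:
        raise ValueError(variant)
    return block, transit
```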

@@ -74,7 +74,7 @@ from models.cliquenet_I_II import build_model
 ```
 for CliqueNet(I+II).
 
-We further consider a situation where the feedback is not processed entirely. Concretely, when k=64 and T=15, we use the Stage-II feature, but only for the first `X` steps; see Tab1. Then `X=0` is just the case of CliqueNet(I+I), and `X=5` corresponds to CliqueNet(II+II).
+We further consider a situation where the feedback is not processed entirely. Concretely, when k=64 and T=15, we use the Stage-II feature, but only for the first `X` steps; see Tab 1. Then `X=0` is just the case of CliqueNet(I+I), and `X=5` corresponds to CliqueNet(II+II).
 
 
 |Model|CIFAR-10 | CIFAR-100|
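(Editor's note: a hedged sketch of the partial-feedback selection described in the changed line; `mixed_stage_features` is an illustrative helper, not a function exported by `models.cliquenet_I_II`.)

```python
def mixed_stage_features(stage1, stage2, X):
    """Use refined Stage-II features for the first X steps of a block and
    the corresponding Stage-I features for the remaining steps.

    X=0 reduces to CliqueNet(I+I); X=5 reduces to CliqueNet(II+II),
    since T=15 here presumably means three blocks of 5 layers each.
    """
    assert len(stage1) == len(stage2)
    assert 0 <= X <= len(stage1)
    return stage2[:X] + stage1[X:]
```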
@@ -108,7 +108,7 @@ The results listed below demonstrate the superiority of CliqueNet over DenseNet
 
 Tab 2. Main results on CIFAR and SVHN without data augmentation.
 
-Because a larger T would lead to higher computation cost and slightly more parameters, we prefer using a larger k in our experiments. To make the comparison fairer, we also consider the situation where the k and T of DenseNets and CliqueNets are exactly the same; see Tab3.
+Because a larger T would lead to higher computation cost and slightly more parameters, we prefer using a larger k in our experiments. To make the comparison fairer, we also consider the situation where the k and T of DenseNets and CliqueNets are exactly the same; see Tab 3.
 
 |Model|Params|CIFAR-10 | CIFAR-100|
 |---|---|---|---|
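(Editor's note: a back-of-envelope sketch of why k and T trade off differently, under the assumption that each ordered pair of a block's layers shares one 3x3, k-to-k convolution re-used across both stages; real counts also include input-layer and transition weights, so treat this as illustrative only.)

```python
def approx_pairwise_params(k, layers_per_block, ksize=3):
    # Assumed rough count of inter-layer conv weights in one clique block:
    # one ksize x ksize, k -> k convolution per ordered pair of layers,
    # re-used across the Stage-I and Stage-II updates.
    # T=15 in the tables presumably means three blocks of 5 layers each.
    return layers_per_block * (layers_per_block - 1) * ksize * ksize * k * k

# Growing the layer count adds parameters *and* extra update steps
# (compute); growing k adds parameters without adding steps.
for k, layers in [(64, 5), (64, 6), (80, 5)]:
    print(f"k={k}, layers={layers}: ~{approx_pairwise_params(k, layers):,}")
```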
