𝗗𝗮𝘆-𝟮𝟵𝟴 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻
𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗕𝗶𝗮𝘀 𝗟𝗼𝘀𝘀 for Mobile Neural Networks by Vrije Universiteit Brussel, Brussels, Belgium
Follow me for similar posts: 🇮🇳 Ashish Patel

𝗜𝗻𝘁𝗲𝗿𝗲𝘀𝘁𝗶𝗻𝗴 𝗙𝗮𝗰𝘁𝘀:
🔸 This paper was published at ICCV 2021 (1 citation so far).
🔸 It introduces a new loss together with the proposed SkipNet mobile architecture.
🔸 Paper and code are in the comments.
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Compact convolutional neural networks (CNNs) have seen exceptional performance improvements in recent years, yet they still fall short of the predictive power of CNNs with a large number of parameters. The diverse, even abundant, features captured by their layers are an important characteristic of these successful large CNNs.
🔸 However, differences in this characteristic between large CNNs and their compact counterparts have rarely been investigated. In compact CNNs, the limited number of parameters makes abundant features unlikely to be obtained, so feature diversity becomes an essential characteristic.
🔸 Diverse features in the activation maps derived from a data point during inference may indicate the presence of a set of unique descriptors necessary to distinguish between objects of different classes. In contrast, data points with low feature diversity may not provide enough unique descriptors to make a valid prediction; we refer to these as random predictions.
🔸 Random predictions can negatively impact the optimization process and harm the final performance. This paper proposes addressing the problem raised by random predictions by reshaping the standard cross-entropy to make it biased toward data points with a limited number of unique descriptive features.
🔸 Our novel Bias Loss focuses training on a set of valuable data points and prevents the vast number of samples with poor learning features from misleading the optimization process.
Furthermore, to show the importance of diversity, we present a family of SkipNet models whose architectures are designed to boost the number of unique descriptors in the last layers. Our SkipNet-M achieves 1% higher classification accuracy than MobileNetV3 Large. #computervision #artificialintelligence #deeplearning
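The reshaping described above can be sketched as a per-sample weighted cross-entropy, where each sample's weight depends on the variance of its activation maps (a proxy for feature diversity), so that samples with diverse features steer the optimization. This is a minimal NumPy sketch only: the weighting function `z(v) = exp(alpha * v) - beta`, the variance normalization, and the parameter values are illustrative assumptions; see the linked paper and code for the exact formulation.

```python
import numpy as np

def bias_weighted_ce(logits, labels, activations, alpha=0.3, beta=0.3):
    """Sketch of a diversity-weighted cross-entropy (illustrative, not the official impl).

    logits:      (N, C) class scores
    labels:      (N,)   integer class labels
    activations: (N, D) flattened feature maps from an intermediate layer
    """
    # Per-sample cross-entropy from log-softmax (max-shift for numerical stability)
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels]

    # Diversity proxy: per-sample variance of activations, normalized to [0, 1]
    v = activations.var(axis=1)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)

    # Assumed weighting function: monotonically increasing in the diversity
    # proxy, so low-diversity samples ("random predictions") are down-weighted
    z = np.exp(alpha * v) - beta

    return (z * ce).mean()
```

With `alpha = beta = 0.3`, the weight ranges from about 0.7 (lowest-diversity sample in the batch) to about 1.05 (highest), which keeps every sample contributing while biasing training toward the more informative ones.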
Great post and thanks for the information
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://arxiv.org/abs/2107.11170 code: https://github.com/lusinlu/biasloss_skipblocknet
Thanks for sharing this ✨👌🏻👏🏻