5 files changed: +31 additions, −11 deletions

@@ -146,6 +146,10 @@
 
 - [ ] TODO
 
+## What is the learning target of value-based methods?
+
+- [ ] TODO
+
 ## Differences among DQN, DDQN, AC, and DDPG in reinforcement learning
 
 - [ ] TODO
@@ -103,7 +103,7 @@ $$J(\phi(z),y;w)=-y\ln(\phi(z))-(1-y)\ln(1-\phi(z))$$
 - [Logistic regression interview questions](https://zhuanlan.zhihu.com/p/46591702)
 - [An introduction to the Logistic Regression model](https://tech.meituan.com/2015/05/08/intro-to-logistic-regression.html)
 
-### Why does the logistic regression model use the sigmoid function?
+### Why does LR use the sigmoid function?
 
 1. It follows from the derivation of the generalized linear model
 2. It satisfies the statistical maximum-entropy model
@@ -113,22 +113,26 @@ $$J(\phi(z),y;w)=-y\ln(\phi(z))-(1-y)\ln(1-\phi(z))$$
 
 - [Why the logistic regression model uses the sigmoid function](https://blog.csdn.net/weixin_39881922/article/details/80366324)
 
-### Can LR use kernel functions?
+### Can LR use kernel functions?
 
 - [ ] TODO
 
-### Why does logistic regression use cross-entropy loss instead of squared loss?
+### Why does LR use cross-entropy loss instead of squared loss?
 
 - [ ] TODO
 
-### Can logistic regression solve nonlinear classification problems?
+### Can LR solve nonlinear classification problems?
 
 - [ ] TODO
 
 **References**
 
 - [Can logistic regression solve nonlinear classification problems?](https://www.zhihu.com/question/29385169)
 
+### Why does LR discretize features?
+
+- [ ] TODO
+
 ## Linear Regression
 
 ### Basic principles
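The logistic-regression questions above revolve around the loss $$J(\phi(z),y;w)=-y\ln(\phi(z))-(1-y)\ln(1-\phi(z))$$ shown in the hunk context. As an illustration (not part of the diff; names and values are made up), here is a minimal NumPy sketch of sigmoid plus cross-entropy. It also hints at the answer to the cross-entropy question: with this loss the sigmoid derivative cancels and the weight gradient is simply (φ(z) − y)·x, whereas squared loss keeps a φ′(z) factor that vanishes when the unit saturates.

```python
import numpy as np

def sigmoid(z):
    # phi(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(phi, y):
    # J = -y*ln(phi) - (1-y)*ln(1-phi), matching the formula above
    return -(y * np.log(phi) + (1 - y) * np.log(1 - phi))

def gradient(w, x, y):
    # With cross-entropy, dJ/dw = (phi(z) - y) * x: the sigmoid
    # derivative cancels, so the update does not saturate the way
    # it would under squared loss.
    phi = sigmoid(np.dot(w, x))
    return (phi - y) * x

w = np.zeros(2)
x = np.array([1.0, 2.0])          # toy single positive example, y = 1
for _ in range(200):              # plain gradient descent, lr = 0.5
    w -= 0.5 * gradient(w, x, 1.0)
print(cross_entropy(sigmoid(np.dot(w, x)), 1.0))  # loss shrinks toward 0
```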
@@ -430,7 +434,7 @@ AdaBoost uses an iterative approach: each round trains only one weak classifier, and the trained
 - [Ten classic data-mining algorithms: AdaBoost (very detailed, with code)](https://blog.csdn.net/fuqiuai/article/details/79482487)
 - [On Adaboost: from concept to hardcore derivation](https://zhuanlan.zhihu.com/p/62037189)
 
-### Differences between GBDT and AdaBoost
+### Differences between GBDT and AdaBoost
 
 - [ ] TODO
 
@@ -312,6 +312,10 @@
 
 - [ ] TODO
 
+### Adagrad
+
+- [ ] TODO
+
 ### Adam
 
 Adam combines Momentum and RMSprop gradient descent. It is an extremely common learning algorithm, shown to work effectively across different neural networks and applicable to a wide range of architectures.
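The context line above describes Adam as Momentum plus RMSprop. As an illustration (not from the diff; the function name and toy objective are made up), a minimal single-parameter sketch of the standard Adam update with the usual default hyperparameters:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: Momentum-style first moment + RMSprop-style second moment."""
    m = beta1 * m + (1 - beta1) * grad       # momentum: moving average of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2  # RMSprop: moving average of squared gradients
    m_hat = m / (1 - beta1 ** t)             # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = w^2 (gradient 2w), starting from w = 5.0
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t)
print(round(w, 3))  # converges toward the minimum at 0
```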
@@ -678,7 +682,11 @@ identity mapping, as the name suggests, means the input itself, i.e. the x in the formula, while the res
 
 - [ ] TODO
 
-### Why is DenseNet better than ResNet?
+### Why is DenseNet better than ResNet?
+
+- [ ] TODO
+
+### Why does DenseNet use more GPU memory than ResNet?
 
 - [ ] TODO
 
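The hunk context above defines the identity mapping as the x in the ResNet formula y = F(x) + x. A toy sketch (illustrative only; the single-layer residual branch is a made-up stand-in for a real conv block) showing that when the residual branch F is near zero the block reduces to the identity, which is what lets gradients flow through skip connections:

```python
import numpy as np

def residual_block(x, weight, activation=np.tanh):
    """y = F(x) + x: F is the residual mapping, x the identity mapping."""
    fx = activation(weight @ x)  # residual branch F(x) (toy single layer)
    return fx + x                # skip connection adds the identity

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w = rng.standard_normal((4, 4)) * 0.01  # near-zero weights, so F(x) ~ 0
y = residual_block(x, w)
# With F ~ 0 the block passes x through almost unchanged.
print(np.allclose(y, x, atol=0.2))  # True
```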
@@ -87,7 +87,7 @@
 
 - [ ] TODO
 
-### How are word vectors trained? (word2vector)
+### How is word2vec trained?
 
 - [ ] TODO
 
@@ -119,6 +119,10 @@
 
 - [ ] TODO
 
+## Sentence Embedding
+
+- [ ] TODO
+
 ## Seq2Seq
 
 - [ ] TODO
@@ -122,10 +122,10 @@
 - [ ] [A 2020-class NLP newbie's internship and autumn-recruitment interview log](https://zhuanlan.zhihu.com/p/62902811)
 - [ ] [ByteDance algorithm internship interview experience](https://www.nowcoder.com/discuss/174565)
 - [ ] [Class-of-2020 vision algorithm summer internship: a rookie's bitter journey](https://www.nowcoder.com/discuss/173292)
-- [ ] [Tencent algorithm internship interview summary: one hundred ways the interviewer tormented me](https://www.nowcoder.com/discuss/163996)
-- [ ] [Megvii (Face++) algorithm intern interview experience](https://zhuanlan.zhihu.com/p/61221469)
-- [ ] [My spring internship recruiting is over: detailed algorithm interview notes](https://www.nowcoder.com/discuss/163388)
-- [ ] [ByteDance NLP interview experience: half-rejected, waiting for an offer](https://www.nowcoder.com/discuss/170907)
+- [x] [Tencent algorithm internship interview summary: one hundred ways the interviewer tormented me](https://www.nowcoder.com/discuss/163996)
+- [x] [Megvii (Face++) algorithm intern interview experience](https://zhuanlan.zhihu.com/p/61221469)
+- [x] [My spring internship recruiting is over: detailed algorithm interview notes](https://www.nowcoder.com/discuss/163388)
+- [x] [ByteDance NLP interview experience: half-rejected, waiting for an offer](https://www.nowcoder.com/discuss/170907)
 - [x] ♥ [Landed an algorithm offer in spring recruiting: my journey (including some companies' interview notes)](https://www.nowcoder.com/discuss/170673)
 - [x] [360's 2018 autumn-recruitment algorithm interview experience](https://www.nowcoder.com/discuss/154590)
 - [x] [Tencent algorithm internship interview experience](https://www.nowcoder.com/discuss/169896)