Code source: https://github.com/eriklindernoren/ML-From-Scratch
Concrete implementation of the Conv2D convolutional layer (with stride and padding) in a convolutional neural network: https://cloud.tencent.com/developer/article/1686529
Implementation of activation functions (sigmoid, softmax, tanh, ReLU, LeakyReLU, ELU, SELU, Softplus): https://cloud.tencent.com/developer/article/1686496
Loss function definitions (mean squared error, cross-entropy loss): https://cloud.tencent.com/developer/article/1686498
Optimizer implementations (SGD, Nesterov, Adagrad, Adadelta, RMSprop, Adam): https://cloud.tencent.com/developer/article/1686499
Backward pass of the convolutional layer: https://cloud.tencent.com/developer/article/1686503
Fully connected layer implementation: https://cloud.tencent.com/developer/article/1686504
Batch normalization layer implementation: https://cloud.tencent.com/developer/article/1686506
Pooling layer implementation: https://cloud.tencent.com/developer/article/1686507
padding2D implementation: https://cloud.tencent.com/developer/article/1686509
Flatten layer implementation: https://cloud.tencent.com/developer/article/1686511
Implementation of the upsampling layer UpSampling2D: https://cloud.tencent.com/developer/article/1686515
import numpy as np


class Dropout(Layer):
    """A layer that randomly sets a fraction p of the output units of the
    previous layer to zero.

    Parameters:
    -----------
    p: float
        The probability that unit x is set to zero.
    """
    def __init__(self, p=0.2):
        self.p = p
        self._mask = None
        self.input_shape = None
        self.n_units = None
        self.pass_through = True
        self.trainable = True

    def forward_pass(self, X, training=True):
        c = (1 - self.p)
        if training:
            # Draw a fresh boolean mask; each unit survives with probability 1 - p.
            self._mask = np.random.uniform(size=X.shape) > self.p
            c = self._mask
        # Training: zero out the masked units. Inference: scale the output by
        # (1 - p) so the expected activation matches the training-time output.
        return X * c

    def backward_pass(self, accum_grad):
        # Gradients flow only through the units that survived the forward pass.
        return accum_grad * self._mask

    def output_shape(self):
        return self.input_shape
The core of the layer is generating a random mask that deactivates neurons during training; at inference time no mask is applied and the output is simply scaled by (1 - p), so the expected activation stays consistent between the two modes.
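Below is a minimal usage sketch of the class above (a hypothetical example: it assumes the Layer base class from the repository is available, and the input shape, seed, and p value are illustrative only):

import numpy as np

np.random.seed(0)        # fix the seed so the random mask is reproducible
X = np.ones((2, 4))      # toy batch: 2 samples, 4 units each

layer = Dropout(p=0.5)

# Training mode: the random mask zeroes out roughly half of the units.
out_train = layer.forward_pass(X, training=True)   # entries are 1.0 or 0.0

# Inference mode: no masking, just scaling by (1 - p).
out_test = layer.forward_pass(X, training=False)   # every entry equals 0.5

# The backward pass propagates gradients only through the surviving units.
grad = layer.backward_pass(np.ones_like(X))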