Deep dive into images and convolutional models
Share Parameters across space
Stacking one conv layer on top of another lets each layer combine the local information extracted by the layer below.
Feed the resulting deep and narrow feature layer as input to a regular fully-connected network.
Combining convolutions with different strides shrinks the feature maps step by step, reducing the spatial cost of the convolutional layers.
LeNet-5, AlexNet
Adding a 1x1 convolution on top of a convolutional layer's output effectively runs a small neural network over every position of the feature map, as the sketch below shows.
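A minimal sketch of the idea in TF 1.x style; the input size, channel counts, and initializer settings are illustrative assumptions, not values from these notes:

```python
import tensorflow as tf

# Assumed toy shapes: 28x28 grayscale input, 3x3 conv to 16 channels,
# then a 1x1 conv mixing those 16 channels down to 8. The 1x1 conv is
# a tiny fully-connected layer applied at every spatial position.
x = tf.placeholder(tf.float32, [None, 28, 28, 1])             # NHWC
w3 = tf.Variable(tf.truncated_normal([3, 3, 1, 16], stddev=0.1))
w1 = tf.Variable(tf.truncated_normal([1, 1, 16, 8], stddev=0.1))

h = tf.nn.relu(tf.nn.conv2d(x, w3, strides=[1, 1, 1, 1], padding='SAME'))
y = tf.nn.relu(tf.nn.conv2d(h, w1, strides=[1, 1, 1, 1], padding='SAME'))
```

Because the 1x1 filter never looks at neighboring pixels, it adds depth and non-linearity almost for free.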
tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

Computes a 2-D convolution given 4-D input and filter tensors.

- input: A Tensor. Must be one of the following types: half, float32, float64.
- filter: A Tensor. Must have the same type as input.
- strides: A list of ints, 1-D of length 4. The sliding-window stride in each dimension of input; the order of the dimensions must match the one given by data_format.
- padding: A string from: "SAME", "VALID". The type of padding algorithm to use.
- use_cudnn_on_gpu: An optional bool. Defaults to True.
- data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specifies the layout of the input and output data:
  - "NHWC": stored in the order [batch, in_height, in_width, in_channels]
  - "NCHW": stored in the order [batch, in_channels, in_height, in_width]
- name: A name for the operation (optional).

Returns a Tensor with the same type as input.
Given an input tensor of shape [batch, in_height, in_width, in_channels]
and a filter / kernel tensor of shape
[filter_height, filter_width, in_channels, out_channels],
conv2d performs the following steps:

1. Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels].
2. Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels].
3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, when data_format is NHWC:
output[b, i, j, k] =
    sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q]
                    * filter[di, dj, q, k]

Every patch of the input is matched against the same filter, so each patch benefits from the training that every other patch contributes to that shared filter.
strides must satisfy strides[0] = strides[3] = 1. In the most common case, where the horizontal and vertical strides are equal, strides = [1, stride, stride, 1].
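To see how the two padding modes affect the output size, here is a small sketch with assumed toy shapes (an 8x8 one-channel image and a 3x3 filter with stride 2):

```python
import tensorflow as tf

# Toy shapes (assumptions): batch of 1, 8x8 input, one channel,
# 3x3 filter producing 4 output channels, stride 2 in both directions.
x = tf.placeholder(tf.float32, [1, 8, 8, 1])
w = tf.Variable(tf.truncated_normal([3, 3, 1, 4], stddev=0.1))

same = tf.nn.conv2d(x, w, strides=[1, 2, 2, 1], padding='SAME')
valid = tf.nn.conv2d(x, w, strides=[1, 2, 2, 1], padding='VALID')

print(same.get_shape())   # (1, 4, 4, 4):  SAME  -> ceil(8 / 2) = 4
print(valid.get_shape())  # (1, 3, 3, 4):  VALID -> ceil((8 - 3 + 1) / 2) = 3
```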
- - -
Appending tf.nn.max_pool after tf.nn.conv2d shrinks the convolution output, which in turn reduces the number of parameters the following layers need.
tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)

Performs max pooling on the input.

- value: A 4-D Tensor with shape [batch, height, width, channels] and type tf.float32.
- ksize: A list of ints that has length >= 4. The size, in each dimension, of the window over which the maximum is taken.
- strides: A list of ints that has length >= 4. The stride of the sliding window in each dimension.
- padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
- data_format: A string. 'NHWC' and 'NCHW' are supported.
- name: A name for the operation (optional).

Returns a Tensor with type tf.float32: the max pooled output tensor.
As in lesson2, adding learning rate decay and dropout raises the accuracy to 90.6%.
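A hedged sketch of those two additions in TF 1.x style; every shape and hyper-parameter value below is an illustrative assumption, not the exact setting behind the 90.6% figure:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)        # e.g. 0.5 to train, 1.0 to eval

w1 = tf.Variable(tf.truncated_normal([784, 128], stddev=0.1))
b1 = tf.Variable(tf.zeros([128]))
hidden = tf.nn.relu(tf.matmul(x, w1) + b1)
hidden = tf.nn.dropout(hidden, keep_prob)     # dropout: randomly zero units

w2 = tf.Variable(tf.truncated_normal([128, 10], stddev=0.1))
b2 = tf.Variable(tf.zeros([10]))
logits = tf.matmul(hidden, w2) + b2
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))

# Learning rate decay: shrink the rate by 10% every 1000 steps (assumed values).
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    0.05, global_step, decay_steps=1000, decay_rate=0.9, staircase=True)
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)
```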
If these notes helped you, how about giving the repo a star?