
TensorFlow Programming: Layers (contrib)

Author: JNingWei · Published 2018-09-28

Higher level ops for building neural network layers

tf.contrib.layers.batch_norm

  Adds a Batch Normalization layer.

tf.contrib.layers.batch_norm (inputs, decay=0.999, updates_collections=tf.GraphKeys.UPDATE_OPS, is_training=True, data_format=DATA_FORMAT_NHWC)

  Can be used as the normalizer function for conv2d and fully_connected.
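
A minimal training-time sketch, assuming TF 1.x (the names images, is_training and the toy loss are illustrative): batch_norm is typically wired in as the normalizer_fn of a layer, and since updates_collections defaults to tf.GraphKeys.UPDATE_OPS, the ops that maintain the moving mean/variance must be attached to the train op explicitly:

import tensorflow as tf

is_training = tf.placeholder(dtype=tf.bool, shape=[])
images = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28, 3])

# batch_norm as the normalizer of a conv layer
net = tf.contrib.layers.conv2d(
    inputs=images, num_outputs=16, kernel_size=3,
    normalizer_fn=tf.contrib.layers.batch_norm,
    normalizer_params={'is_training': is_training, 'decay': 0.9})

loss = tf.reduce_mean(net)  # toy loss, for illustration only

# these ops update the moving mean/variance used at test time
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)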

tf.nn.conv2d_transpose

   The transpose of conv2d. With padding='VALID' and stride s, the output spatial size is (input - 1) * s + kernel_size; in the example below, (3 - 1) * 2 + 2 = 6.

tf.nn.conv2d_transpose (value, filter, output_shape, strides, padding='SAME', data_format='NHWC', name=None)

# -*- coding: utf-8 -*-

import numpy as np
import tensorflow as tf


def func(in_put, in_channel, out_channel):
    # 2x2 kernel, shared by the convolution and the transposed convolution
    weights = tf.get_variable(name="weights", shape=[2, 2, in_channel, out_channel],
                              initializer=tf.contrib.layers.xavier_initializer_conv2d())
    convolution = tf.nn.conv2d(input=in_put, filter=weights, strides=[1, 1, 1, 1], padding='VALID')
    conv_shape = convolution.get_shape().as_list()
    # with padding='VALID' and stride 2: (3 - 1) * 2 + 2 = 6, doubling the spatial size
    deconv_shape = [conv_shape[0], conv_shape[1]*2, conv_shape[2]*2, conv_shape[3]]
    deconvolution = tf.nn.conv2d_transpose(value=convolution, filter=weights, output_shape=deconv_shape, strides=[1, 2, 2, 1], padding='VALID')
    return in_put, convolution, deconvolution


def main():

    with tf.Graph().as_default():
        input_x = tf.placeholder(dtype=tf.float32, shape=[1, 4, 4, 1])
        in_put, convolution, deconvolution = func(input_x, 1, 1)

        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            _in_put, _convolution, _deconvolution = sess.run(
                [in_put, convolution, deconvolution],
                feed_dict={input_x: np.random.uniform(low=0, high=255, size=[1, 4, 4, 1])})
            print('\nin_put:')
            print(in_put)
            # print(_in_put)
            print('\nconvolution:')
            print(convolution)
            # print(_convolution)
            print('\ndeconvolution:')
            print(deconvolution)
            # print(_deconvolution)


if __name__ == "__main__":
    main()
2017-09-29 09:51:41.472842: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

in_put:
Tensor("Placeholder:0", shape=(1, 4, 4, 1), dtype=float32)

convolution:
Tensor("Conv2D:0", shape=(1, 3, 3, 1), dtype=float32)

deconvolution:
Tensor("conv2d_transpose:0", shape=(1, 6, 6, 1), dtype=float32)

Process finished with exit code 0

tf.nn.dropout

  With probability keep_prob, each element of x is kept and scaled up by 1/keep_prob so that the expected sum is preserved; otherwise it is set to 0. This scaling is why the surviving entries in the output below are larger than their inputs.

tf.nn.dropout (x, keep_prob, noise_shape=None, seed=None, name=None)

# coding=utf-8

import numpy as np
import tensorflow as tf


def main():

    with tf.Graph().as_default():
        input_x = np.random.uniform(0, 255, [3, 3])
        print(input_x)

        # build one dropout op per keep probability
        drop = [tf.nn.dropout(x=input_x, keep_prob=keep_prob)
                for keep_prob in [0.1, 0.5, 1.0]]
        with tf.Session() as sess:
            for drop_i in drop:
                _drop_i = sess.run(drop_i)
                print('\n----------\n')
                print(_drop_i)


if __name__ == "__main__":
    main()
# initial input
[[  16.46278229  253.27597997  246.33614039]
 [ 130.45261984  227.85971767  142.72621045]
 [ 173.23025953  165.99906514  180.13238617]]

2017-09-29 11:02:29.146976: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

----------
# keep_prob = 0.1
[[    0.            0.            0.       ]
 [    0.            0.            0.       ]
 [    0.            0.         1801.3238617]]

----------
# keep_prob = 0.5
[[  32.92556457  506.55195994    0.        ]
 [ 260.90523969  455.71943533    0.        ]
 [ 346.46051906    0.          360.26477234]]

----------
# keep_prob = 1.0
[[  16.46278229  253.27597997  246.33614039]
 [ 130.45261984  227.85971767  142.72621045]
 [ 173.23025953  165.99906514  180.13238617]]

tf.contrib.layers.fully_connected

tf.contrib.layers.fully_connected (inputs, num_outputs, activation_fn=tf.nn.relu)

  • By default it performs both the linear transformation and the activation (activation_fn, tf.nn.relu unless overridden).
  • The transformation takes a weighted sum over only the last dimension of the input (behaving like a 1x1 convolution), and the rank is preserved; that is, 'weights:0'.shape = [inputs.shape[-1], num_outputs].
  • 'weights:0'.shape is always two-dimensional.
  • num_outputs is the size of the second (i.e. last, index -1) dimension of 'weights:0'; after the layer is applied, it also becomes the size of the last dimension of the output tensor.
  • If activation_fn=None, the output bypasses the activation and may therefore still contain negative values.
# coding=utf-8

import numpy as np
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected


def main():

    with tf.Graph().as_default():
        input_x = np.random.uniform(0, 10, [3, 3])
        print(input_x)
        fn = fully_connected(inputs=input_x, num_outputs=1)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            _fn = sess.run(fn)
            print(_fn)
            print('\n----------\n')
            # print each variable alongside its current value
            for (x, y) in zip(tf.global_variables(), sess.run(tf.global_variables())):
                print('\n', x, '\n', y)


if __name__ == "__main__":
    main()
# original input matrix
[[ 7.73305319  0.2780667   7.27101124]
 [ 0.84666041  0.92980727  6.83676724]
 [ 1.02844109  5.51824496  1.78840816]]

2017-09-29 11:33:02.500942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

# result tensor output by fn (the third row's weighted sum is negative, so the default relu clips it to 0)
[[ 2.32549239]
 [ 1.69284669]
 [ 0.        ]]

----------

<tf.Variable 'fully_connected/weights:0' shape=(3, 1) dtype=float64_ref> 
[[-0.01048241]
 [-0.83954232]
 [ 0.36308597]]

<tf.Variable 'fully_connected/biases:0' shape=(1,) dtype=float64_ref> 
[ 0.]

Process finished with exit code 0
# coding=utf-8

import numpy as np
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected


def main():

    with tf.Graph().as_default():
        input_x = np.random.uniform(0, 10, [2, 4, 4, 3])
        print(np.shape(input_x))

        fn = fully_connected(inputs=input_x, num_outputs=1)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            _fn = sess.run(fn)
            print(np.shape(_fn))
            print('\n----------\n')
            for i in tf.global_variables():
                print('\n', i)


if __name__ == "__main__":
    main()
(2, 4, 4, 3)

2017-09-29 11:46:17.248114: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
# only the last dimension of the input is transformed and the rank is preserved, i.e. 'weights:0'.shape = [inputs.shape[-1], num_outputs]
(2, 4, 4, 1)

----------

<tf.Variable 'fully_connected/weights:0' shape=(3, 1) dtype=float64_ref>

<tf.Variable 'fully_connected/biases:0' shape=(1,) dtype=float64_ref>

tf.nn.relu

max(features, 0)

tf.nn.relu(features, name=None)

tf.nn.relu6

min(max(features, 0), 6). A variant of tf.nn.relu that also caps the output, so extreme values cannot remain greater than 6 after the relu.

tf.nn.relu6(features, name=None)
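
A tiny sketch contrasting the two ops (the input values are chosen arbitrarily for illustration); relu6 additionally clips at 6:

import tensorflow as tf

features = tf.constant([-3.0, 0.5, 5.9, 6.0, 42.0])
with tf.Session() as sess:
    print(sess.run(tf.nn.relu(features)))   # [ 0.   0.5  5.9  6.  42.]
    print(sess.run(tf.nn.relu6(features)))  # [ 0.   0.5  5.9  6.   6.]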

tf.nn.softmax

Formula: softmax = exp(logits) / reduce_sum(exp(logits), dim)

tf.nn.softmax (logits, dim=-1, name=None)

# coding=utf-8

import numpy as np
import tensorflow as tf

input_x = tf.constant(np.random.uniform(0, 5, [2, 3]))
softmax = tf.nn.softmax(logits=input_x)


# hand-rolled softmax, for comparison with the built-in op
def my_softmax_fn(logits, dim=-1):
    return tf.div(tf.exp(logits), tf.reduce_sum(tf.exp(logits), dim, keep_dims=True))


my_softmax = my_softmax_fn(logits=input_x)

with tf.Session() as sess:
    _input_x, _softmax, _my_softmax = sess.run([input_x, softmax, my_softmax])
    print('input:', np.shape(input_x), input_x, '\n', _input_x)
    print('\n----------\n')
    print('softmax:', np.shape(_softmax), softmax, '\n', _softmax)
    print('\n----------\n')
    print('my_softmax:', np.shape(_my_softmax), my_softmax, '\n', _my_softmax)
    print('\n----------\n')
    # no variables were created, so this prints nothing
    for i in tf.global_variables():
        print('\n', i)
# the softmax input is a tensor
input: (2, 3) Tensor("Const:0", shape=(2, 3), dtype=float64) 
[[ 3.88517858  3.69402461  3.07837121]
 [ 1.27162028  2.12622856  4.34646188]]

----------
# after softmax, the shape is unchanged and a tensor is returned
softmax: (2, 3) Tensor("Softmax:0", shape=(2, 3), dtype=float64) 
[[ 0.44008545  0.36351295  0.1964016 ]
 [ 0.04000495  0.09402978  0.86596527]]

----------
# the formula softmax = exp(logits) / reduce_sum(exp(logits), dim) reproduces the built-in op's output
my_softmax: (2, 3) Tensor("div:0", shape=(2, 3), dtype=float64) 
[[ 0.44008545  0.36351295  0.1964016 ]
 [ 0.04000495  0.09402978  0.86596527]]

----------
# no variables are stored in the graph, so nothing is printed

Process finished with exit code 0

Regularizers

tf.contrib.layers.l1_regularizer

tf.contrib.layers.l1_regularizer (scale, scope=None)

tf.contrib.layers.l2_regularizer

tf.contrib.layers.l2_regularizer (scale, scope=None)
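
Both return a function rather than a value: calling the returned function on a weight tensor yields a scalar penalty. A minimal sketch of this, assuming TF 1.x (the variable name w is illustrative):

import tensorflow as tf

w = tf.get_variable(name="w", shape=[3, 2],
                    initializer=tf.contrib.layers.xavier_initializer())

# each regularizer is a function: tensor -> scalar penalty
l1 = tf.contrib.layers.l1_regularizer(scale=0.1)(w)  # 0.1 * sum(|w|)
l2 = tf.contrib.layers.l2_regularizer(scale=0.1)(w)  # 0.1 * sum(w**2) / 2

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([l1, l2]))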

Initializers

tf.contrib.layers.xavier_initializer

An initializer that performs "Xavier" initialization.

tf.contrib.layers.xavier_initializer (uniform=True, seed=None, dtype=tf.float32)

# coding=utf-8

import tensorflow as tf

xavier = tf.get_variable(name="weights",
                         shape=[2, 2],
                         initializer=tf.contrib.layers.xavier_initializer())
constant = tf.get_variable(name='biases',
                           shape=[2],
                           initializer=tf.constant_initializer())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _xavier, _constant = sess.run([xavier, constant])
    print('\n\nxavier:')
    print(xavier)
    print(_xavier)
    print('\n\nconstant:')
    print(constant)
    print(_constant)
xavier:
<tf.Variable 'weights:0' shape=(2, 2) dtype=float32_ref>
[[ 1.20015538  0.34742999]
 [ 0.39075291  0.60076308]]


constant:
<tf.Variable 'biases:0' shape=(2,) dtype=float32_ref>
[ 0.  0.]
import tensorflow as tf

print('\n\ntf.contrib.layers.xavier_initializer_conv2d() :\n', tf.contrib.layers.xavier_initializer_conv2d())
print('\n\ntf.constant_initializer() :\n', tf.constant_initializer())
print('\n\ntf.global_variables_initializer() :\n', tf.global_variables_initializer())
tf.contrib.layers.xavier_initializer_conv2d() :
<function _initializer at 0x7fe5133da578>


tf.constant_initializer() :
<tensorflow.python.ops.init_ops.Constant object at 0x7fe528bbdfd0>


tf.global_variables_initializer() :
name: "init"
op: "NoOp"

Optimization

Summaries

Feature columns


