I recently started with TensorFlow, but I ran into problems when scaling from a simple single-layer neural network to a multi-layer one. I've pasted the code from my attempt below; any help with why it isn't working would be greatly appreciated!
import tensorflow as tf
from tqdm import trange
from tensorflow.examples.tutorials.mnist import input_data
# Import data
mnist = input_data.read_data_sets("datasets/MNIST_data/", one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
W0 = tf.Variable(tf.zeros([784, 500]))
b0 = tf.Variable(tf.zeros([500]))
y0 = tf.matmul(x, W0) + b0
relu0 = tf.nn.relu(y0)
W1 = tf.Variable(tf.zeros([500, 100]))
b1 = tf.Variable(tf.zeros([100]))
y1 = tf.matmul(relu0, W1) + b1
relu1 = tf.nn.relu(y1)
W2 = tf.Variable(tf.zeros([100, 10]))
b2 = tf.Variable(tf.zeros([10]))
y2 = tf.matmul(relu1, W2) + b2
y = y2
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# Create a Session object, initialize all variables
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Train
for _ in trange(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))
sess.close()
PS: I know this code would be much easier to write with Keras or even TensorFlow's prebuilt layers, but I'm trying to get a more fundamental understanding of the math behind the library. Thanks!
Posted on 2019-04-01 00:02:07
There are two things to consider.

1) tf.Variable(tf.zeros([784, 500]))

Change this to tf.Variable(tf.random_normal([784, 500])): randomly initializing the weights works better than defining them all as 0 from the start. When every weight starts at 0 (meaning everything has the same value), all units in a layer follow the same gradient path, and the model cannot learn. To start, replace every zeros with random_normal. There are better ways to initialize variables, but this will give you a good start.
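The symmetry argument above can be demonstrated without TensorFlow. The following is a minimal NumPy sketch (a hypothetical two-unit hidden network, not the MNIST model itself) showing that zero-initialized hidden units receive identical gradients on every step and therefore never differentiate, while random initialization breaks the symmetry:

```python
import numpy as np

# Tiny network: 2 inputs -> 2 sigmoid hidden units -> 1 linear output,
# trained with squared error on a single example.
rng = np.random.default_rng(0)

def step(W1, W2, x, t, lr=0.1):
    """One gradient-descent update; returns the new (W1, W2)."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))          # hidden activations
    y = h @ W2                                    # scalar output
    err = y - t                                   # d(0.5*err^2)/dy
    gW2 = h * err                                 # gradient w.r.t. W2
    gW1 = np.outer(x, (err * W2) * h * (1 - h))   # gradient w.r.t. W1
    return W1 - lr * gW1, W2 - lr * gW2

x, t = np.array([1.0, -2.0]), 1.0

# Zero init: the two hidden-unit weight columns stay identical forever.
W1, W2 = np.zeros((2, 2)), np.zeros(2)
for _ in range(100):
    W1, W2 = step(W1, W2, x, t)
print(np.allclose(W1[:, 0], W1[:, 1]))  # True: the units never diverge

# Random init: the columns differ, so the units can learn different features.
W1, W2 = rng.normal(size=(2, 2)), rng.normal(size=2)
for _ in range(100):
    W1, W2 = step(W1, W2, x, t)
print(np.allclose(W1[:, 0], W1[:, 1]))
```

With zero initialization, both hidden units compute the same activation and get the same update, so the network behaves like a one-unit network no matter how many units you add.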
2) Your learning rate is too high.

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

Change this line to

train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
https://stackoverflow.com/questions/55446412