Last updated: 2020-10-06
# import lib
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPool2D, Flatten, Dense, Dropout
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
# prepare the data
fashion_mnist = keras.datasets.fashion_mnist
(train_imgs, train_labels), (test_imgs, test_labels) = fashion_mnist.load_data()
print(train_imgs.shape)
# simple normalization: scale pixel values to [0, 1]
train_imgs, test_imgs = train_imgs / 255.0, test_imgs / 255.0
# add a channel dimension: (28, 28) -> (28, 28, 1)
train_imgs = train_imgs[..., tf.newaxis]
test_imgs = test_imgs[..., tf.newaxis]
# build the model
# Keras Sequential model
model = keras.Sequential([
Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
BatchNormalization(),
Conv2D(64, (3, 3), activation='relu'),
BatchNormalization(),
MaxPool2D((2, 2)),
Conv2D(128, (3, 3), activation='relu'),
BatchNormalization(),
Flatten(),
Dense(1000, activation='relu'),
Dropout(0.2),
Dense(100, activation='relu'),
Dense(10, activation='softmax')
])
# compile the model
# optimizer: adam
# loss: sparse categorical cross-entropy
# metric: accuracy
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# train the model
# batch_size: 32
# epochs: 10
model.fit(train_imgs, train_labels, epochs=10, batch_size=32)
# evaluate the model on the test set
test_loss, test_acc = model.evaluate(test_imgs, test_labels, verbose=0)
print(test_acc)
This example shows how to build a model with the Keras Sequential API; it includes the commonly used convolution, batch normalization, max-pooling, and fully connected layers.
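For comparison, the same architecture can also be assembled layer by layer with the Sequential API's add() method. The sketch below mirrors the model defined above and reuses the imports from the top of the script; the variable name model_alt is only illustrative.
# a minimal sketch: the same network built incrementally with Sequential.add()
model_alt = keras.Sequential()
model_alt.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model_alt.add(BatchNormalization())
model_alt.add(Conv2D(64, (3, 3), activation='relu'))
model_alt.add(BatchNormalization())
model_alt.add(MaxPool2D((2, 2)))
model_alt.add(Conv2D(128, (3, 3), activation='relu'))
model_alt.add(BatchNormalization())
model_alt.add(Flatten())
model_alt.add(Dense(1000, activation='relu'))
model_alt.add(Dropout(0.2))
model_alt.add(Dense(100, activation='relu'))
model_alt.add(Dense(10, activation='softmax'))
model_alt.summary()  # inspect layer output shapes and parameter counts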