The MNIST dataset comes from the United States National Institute of Standards and Technology (NIST). The training set consists of digits handwritten by 250 different people: 50% are high-school students and 50% are employees of the Census Bureau. The test set contains handwritten digits collected in the same proportions.
[Figure: samples of the handwritten digit styles]
[Figure: screenshot of the official website]
In the dataset, each image is 28×28 pixels and is stored as a 1×784 vector. Print the data to inspect the raw contents:
import numpy
import gzip

# Params for MNIST
IMAGE_SIZE = 28
NUM_CHANNELS = 1
PIXEL_DEPTH = 255
NUM_LABELS = 10

# Extract the images
def extract_data(filename, num_images):
    """Extract the images into a 4D tensor [image index, y, x, channels].

    Values are rescaled from [0, 255] down to [-0.5, 0.5].
    """
    print('Extracting', filename)
    with gzip.open(filename) as bytestream:
        bytestream.read(16)  # skip the 16-byte header of the image file
        buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images * NUM_CHANNELS)
        data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
        data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
        data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS)
        data = numpy.reshape(data, [num_images, -1])  # flatten to [num_images, 784]
    return data

def extract_labels(filename, num_images):
    """Extract the labels into a vector of int64 label IDs."""
    print('Extracting', filename)
    with gzip.open(filename) as bytestream:
        bytestream.read(8)  # skip the 8-byte header of the label file
        buf = bytestream.read(1 * num_images)
        labels = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.int64)
        # num_labels_data = len(labels)
        # one_hot_encoding = numpy.zeros((num_labels_data, NUM_LABELS))
        # one_hot_encoding[numpy.arange(num_labels_data), labels] = 1
        # one_hot_encoding = numpy.reshape(one_hot_encoding, [-1, NUM_LABELS])
        # sklearn's LogisticRegression takes integer labels; no one-hot encoding needed
    return labels
path = '/Users/wangsen/ai/03/4day_tensorflow/data'
train_data = extract_data(path + '/train-images-idx3-ubyte.gz', 60000)
train_labels = extract_labels(path + '/train-labels-idx1-ubyte.gz', 60000)
test_data = extract_data(path + '/t10k-images-idx3-ubyte.gz', 10000)
test_labels = extract_labels(path + '/t10k-labels-idx1-ubyte.gz', 10000)
print(train_data.shape)
print(train_labels.shape)

from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()  # newer sklearn versions may warn about convergence here; raising max_iter helps
lr.fit(train_data, train_labels)
score = lr.score(test_data, test_labels)
print("sklearn logisticRegression score:", score)
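As an aside, the 16 and 8 bytes skipped by the two readers above are the IDX file headers defined by the MNIST format: a magic number and an item count, plus the row and column sizes for image files, each stored as a big-endian int32. A minimal sketch with a synthetic in-memory file, assuming that layout (the helper name is illustrative, not part of the tutorial code):

```python
import gzip
import io
import struct

import numpy as np

def make_idx_images(images):
    """Pack a uint8 image stack into a gzipped IDX byte string."""
    n, rows, cols = images.shape
    # 16-byte header: magic number 0x00000803, count, rows, cols (big-endian)
    header = struct.pack('>IIII', 0x00000803, n, rows, cols)
    return gzip.compress(header + images.astype(np.uint8).tobytes())

imgs = (np.arange(2 * 28 * 28) % 256).astype(np.uint8).reshape(2, 28, 28)
raw = make_idx_images(imgs)

with gzip.open(io.BytesIO(raw)) as f:
    f.read(16)  # skip the 16-byte header, exactly as extract_data does
    buf = f.read(28 * 28 * 2)
data = np.frombuffer(buf, dtype=np.uint8).reshape(2, 28, 28)
assert (data == imgs).all()  # the pixels round-trip unchanged
```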
Result:
(60000, 784)
(60000,)
sklearn logisticRegression score: 0.9194
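The score printed above is sklearn's mean accuracy, i.e. the fraction of test samples predicted correctly. A quick sketch on synthetic data (the arrays below are stand-ins, not the MNIST matrices) showing that `score` matches a manual accuracy computation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny synthetic classification problem: label depends on the first two features.
rng = np.random.RandomState(0)
X = rng.rand(200, 4)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

clf = LogisticRegression().fit(X[:150], y[:150])

# score() is just the fraction of correct predictions on the held-out rows.
acc_score = clf.score(X[150:], y[150:])
acc_manual = (clf.predict(X[150:]) == y[150:]).mean()
assert abs(acc_score - acc_manual) < 1e-12
```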
In the example above, the 784 pixel values are used as features and the digit as the label for logistic regression. `score` is the accuracy, and the final result reaches over 90% accuracy on the test set. The program takes a long time to run, more than 10 minutes.
Because the original images have a high dimensionality (784), training is slow. A dimensionality-reduction algorithm can compress the data into fewer dimensions before training, which speeds training up.
from sklearn.decomposition import PCA

pca = PCA(60)  # keep the first 60 principal components
train_data = pca.fit_transform(train_data)
print('Shape of the training set after PCA:', train_data.shape)

from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(train_data, train_labels)
test_data = pca.transform(test_data)  # project the test set with the same transform
score = lr.score(test_data, test_labels)
print("sklearn logisticRegression score:", score)
Result:
Shape of the training set after PCA: (60000, 60)
sklearn logisticRegression score: 0.9088
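How much of the original variance the 60 components retain can be inspected through `explained_variance_ratio_`. A sketch on synthetic data (a stand-in for the 784-dimensional MNIST matrix, not the real data):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic low-rank data: 500 samples in 100 dims, rank at most 50.
rng = np.random.RandomState(0)
X = rng.rand(500, 50) @ rng.rand(50, 100)

pca = PCA(n_components=30).fit(X)
# Cumulative fraction of variance kept by the first 30 components; a common
# heuristic is to pick the smallest k that retains around 95% of the variance.
retained = pca.explained_variance_ratio_.cumsum()[-1]
print('variance retained by 30 components:', round(retained, 3))
```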
The MNIST handwritten-digit dataset makes a good practice dataset for getting started with machine learning. The official site lists the results of many machine-learning and deep-learning methods; the lowest reported error rate is 0.35%. This article used the simplest model, logistic regression; judging from the official results, the best-scoring models are convolutional neural networks.