
Keras_Tutorial_v2a

列夫托尔斯昊 · Published 2020-08-25

Keras tutorial - Emotion Detection in Images of Faces

Welcome to the first assignment of week 2. In this assignment, you will:

  1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
  2. See how you can, in just a couple of hours, build a deep learning algorithm.
Why are we using Keras?
  • Keras was developed to enable deep learning engineers to build and experiment with different models very quickly.
  • Just as TensorFlow is a higher-level framework than plain Python, Keras is an even higher-level framework that provides additional abstractions.
  • Being able to go from idea to result with the least possible delay is key to finding good models.
  • However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you would still implement in TensorFlow rather than in Keras.
  • That being said, Keras will work fine for many common models.

Updates

If you were working on the notebook before this update...
  • The current notebook is version "v2a".
  • You can find your original work saved in the notebook with the previous version name ("v2").
  • To view the file directory, go to the menu "File->Open"; this will open a new tab showing the file directory.
List of updates
  • Changed back-story of model to "emotion detection" from "happy house."
  • Cleaned/organized wording of instructions and commentary.
  • Added instructions on how to set input_shape.
  • Added explanation of "objects as functions" syntax.
  • Clarified explanation of variable naming convention.
  • Added hints for steps 1, 2, 3, and 4.

Load packages

  • In this exercise, you'll work on the "Emotion detection" model, which we'll explain below.
  • Let's load the required packages.
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *

import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow

%matplotlib inline
Using TensorFlow backend.

Note: As you can see, we've imported a lot of functions from Keras. You can use them by calling them directly in your code. Ex: X = Input(...) or X = ZeroPadding2D(...).

In other words, unlike TensorFlow, you don't have to create the graph and then make a separate sess.run() call to evaluate those variables.
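
For contrast, here is a minimal sketch comparing the TensorFlow 1.x graph-and-session style used in earlier assignments with the Keras style; the variables are illustrative only.

# TensorFlow 1.x style: build a graph, then evaluate it inside a session
import tensorflow as tf
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a + b
with tf.Session() as sess:   # an explicit session is needed to get a value
    print(sess.run(c))       # 5.0

# Keras style: layers are objects you call directly; no sess.run() is needed
from keras.layers import Input, Dense
from keras.models import Model
X_input = Input(shape=(4,))
X = Dense(1)(X_input)        # calling the layer object wires it into the model
model = Model(inputs=X_input, outputs=X)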

1 - Emotion Tracking

  • A nearby community health clinic is helping the local residents monitor their mental health.
  • As part of their study, they are asking volunteers to record their emotions throughout the day.
  • To help the participants more easily track their emotions, you are asked to create an app that will classify their emotions based on some pictures that the volunteers will take of their facial expressions.
  • As a proof-of-concept, you first train your model to detect if someone's emotion is classified as "happy" or "not happy."

To build and train this model, you have gathered pictures of some volunteers in a nearby neighborhood. The dataset is labeled.

style="width:550px;height:250px;">

Run the following code to normalize the dataset and learn about its shapes.

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.

# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T

print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)

Details of the "Face" dataset:

  • Images are of shape (64,64,3)
  • Training: 600 pictures
  • Test: 150 pictures
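
To eyeball one example and its label, you can plot an image from the normalized training set (a quick sketch; the index is arbitrary):

index = 0                                # arbitrary training example to inspect
plt.imshow(X_train[index])               # values are already scaled to [0, 1]
print("y = " + str(Y_train[index, 0]))   # 1 = "happy", 0 = "not happy"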

2 - Building a model in Keras

Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.

Here is an example of a model in Keras:

def model(input_shape):
    """
    input_shape: The height, width and channels as a tuple.  
        Note that this does not include the 'batch' as a dimension.
        If you have a batch like 'X_train', 
        then you can provide the input_shape using
        X_train.shape[1:]
    """
    
    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    X_input = Input(input_shape)

    # Zero-Padding: pads the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)

    # CONV -> BN -> RELU Block applied to X
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)

    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')
    
    return model
Variable naming convention
  • Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow.
  • Instead of creating unique variable names for each step and each layer, such as X = ... Z1 = ... A1 = ...
  • Keras re-uses and overwrites the same variable at each step: X = ... X = ... X = ...
  • The exception is X_input, which we kept separate since it's needed later.
Objects as functions
  • Notice how there are two pairs of parentheses in each statement. For example: X = ZeroPadding2D((3, 3))(X_input)
  • The first is a constructor call which creates an object (ZeroPadding2D).
  • In Python, objects can be called as functions. Search for 'Python objects as functions', or see the blog post Python Pandemonium, in particular the section titled "Objects as functions."
  • The single line is equivalent to this:
    ZP = ZeroPadding2D((3, 3))  # ZP is an object that can be called as a function
    X = ZP(X_input)

Exercise: Implement a HappyModel().

  • This assignment is more open-ended than most.
  • Start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model.
  • Later, come back and try out other model architectures.
  • For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish.
  • You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), Dropout().

Note: Be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying them to.
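
As a quick sanity check on shapes, a small helper like the one below (a sketch; the function name is illustrative) reproduces the spatial sizes that the example model above produces for 64x64 inputs:

def conv_output_size(n, f, stride=1):
    """Spatial output size after a conv/pool layer with filter size f: floor((n - f)/stride) + 1."""
    return (n - f) // stride + 1

n = 64 + 2 * 3                           # ZeroPadding2D((3, 3)): 64 -> 70
n = conv_output_size(n, f=7)             # Conv2D(32, (7, 7), strides=(1, 1)): 70 -> 64
n = conv_output_size(n, f=2, stride=2)   # MaxPooling2D((2, 2)): 64 -> 32
print(n, n * n * 32)                     # 32, and 32768 units after Flatten()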

# GRADED FUNCTION: HappyModel

def HappyModel(input_shape):
    """
    Implementation of the HappyModel.
    
    Arguments:
    input_shape -- shape of the images of the dataset
        (height, width, channels) as a tuple.  
        Note that this does not include the 'batch' as a dimension.
        If you have a batch like 'X_train', 
        then you can provide the input_shape using
        X_train.shape[1:]


    Returns:
    model -- a Model() instance in Keras
    
    """
    
    ### START CODE HERE ###
    # Feel free to use the suggested outline in the text above to get started, and run through the whole
    # exercise (including the later portions of this notebook) once. Then come back and try out other
    # network architectures as well.
    X_input = Input(input_shape)
    X = ZeroPadding2D((1,1))(X_input)
    X = Conv2D(32,(3,3),strides=(1,1),name = 'conv0')(X)
    X = BatchNormalization(axis = 3,name = 'bn0')(X)
    X = Activation('relu')(X)
    
    X = MaxPooling2D((2,2),name = 'max_pool')(X)
    X = Flatten()(X)
    Y = Dense(1,activation='sigmoid',name = 'fc')(X)
    
    model = Model(inputs= X_input, outputs = Y, name = 'HappyModel')
    
    
    
    ### END CODE HERE ###
    
    return model

You have now built a function to describe your model. To train and test this model, there are four steps in Keras:

  1. Create the model by calling the function above
  2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
  3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
  4. Test the model on test data by calling model.evaluate(x = ..., y = ...)

If you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.
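
Put together, the four steps look like the compact sketch below (the hyperparameter values here are placeholders); the notebook walks through each step individually next.

happyModel = HappyModel(X_train.shape[1:])                                               # 1. Create
happyModel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])   # 2. Compile
happyModel.fit(x=X_train, y=Y_train, epochs=10, batch_size=16)                           # 3. Train (placeholder values)
preds = happyModel.evaluate(x=X_test, y=Y_test)                                          # 4. Evaluate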

Step 1: create the model.

Hint: The input_shape parameter is a tuple (height, width, channels). It excludes the batch dimension. Try X_train.shape[1:] as the input_shape.

### START CODE HERE ### (1 line)
happyModel = HappyModel(X_train[1,:].shape)
### END CODE HERE ###
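
Note that X_train[1,:].shape and X_train.shape[1:] both evaluate to (64, 64, 3) here, so either works as the input_shape; a quick check (just for illustration):

print(X_train.shape[1:])      # (64, 64, 3): the per-example (height, width, channels)
print(X_train[1, :].shape)    # (64, 64, 3): shape of a single example, equivalent here
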
Step 2: compile the model

Hint: Optimizers you can try include 'adam', 'sgd', or others; see the documentation for optimizers. The "happiness detection" task is a binary classification problem, so the loss function you can use is 'binary_crossentropy'. Note that 'categorical_crossentropy' won't work with your dataset as it's formatted, because the labels are an array of 0s and 1s rather than one column per category. See the documentation for losses.

### START CODE HERE ### (1 line)
happyModel.compile(optimizer="Adam",loss="binary_crossentropy",metrics=["accuracy"])
### END CODE HERE ###
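
If you want finer control over the learning rate, you can pass an optimizer object instead of a string. This is a minimal sketch assuming the standalone Keras 2.x API used in this notebook; the learning rate value is only a placeholder.

from keras.optimizers import Adam

happyModel.compile(optimizer=Adam(lr=0.001),   # placeholder learning rate, not a tuned value
                   loss='binary_crossentropy',
                   metrics=['accuracy'])
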
Step 3: train the model

Hint: Use the 'X_train' and 'Y_train' variables. Use integers for epochs and batch_size.

Note: If you run fit() again, the model will continue to train with the parameters it has already learned instead of reinitializing them.
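
If you instead want to restart training from scratch, one option (a sketch, not required by the assignment) is to rebuild and recompile the model so that its weights are reinitialized:

happyModel = HappyModel(X_train.shape[1:])   # fresh model with randomly initialized weights
happyModel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])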

### START CODE HERE ### (1 line)
happyModel.fit(X_train,Y_train,epochs=40, batch_size=32)
### END CODE HERE ###
Epoch 1/40
600/600 [==============================] - 9s - loss: 0.2501 - acc: 0.9017     
Epoch 2/40
600/600 [==============================] - 8s - loss: 0.1819 - acc: 0.9367     
Epoch 3/40
600/600 [==============================] - 9s - loss: 0.1683 - acc: 0.9300     
Epoch 4/40
600/600 [==============================] - 8s - loss: 0.1163 - acc: 0.9567     
Epoch 5/40
600/600 [==============================] - 9s - loss: 0.0801 - acc: 0.9800     
Epoch 6/40
600/600 [==============================] - 9s - loss: 0.0918 - acc: 0.9550     
Epoch 7/40
600/600 [==============================] - 8s - loss: 0.0586 - acc: 0.9800     
Epoch 8/40
600/600 [==============================] - 8s - loss: 0.0366 - acc: 0.9917     
Epoch 9/40
600/600 [==============================] - 9s - loss: 0.0286 - acc: 0.9900     
Epoch 10/40
600/600 [==============================] - 9s - loss: 0.0254 - acc: 0.9933     
Epoch 11/40
600/600 [==============================] - 9s - loss: 0.0360 - acc: 0.9900     
Epoch 12/40
600/600 [==============================] - 8s - loss: 0.0215 - acc: 0.9950     
Epoch 13/40
600/600 [==============================] - 9s - loss: 0.0196 - acc: 0.9967     
Epoch 14/40
600/600 [==============================] - 9s - loss: 0.0385 - acc: 0.9833     
Epoch 15/40
600/600 [==============================] - 9s - loss: 0.0170 - acc: 0.9967     
Epoch 16/40
600/600 [==============================] - 9s - loss: 0.0162 - acc: 0.9933     
Epoch 17/40
600/600 [==============================] - 9s - loss: 0.0103 - acc: 0.9983     
Epoch 18/40
600/600 [==============================] - 9s - loss: 0.0111 - acc: 0.9983     
Epoch 19/40
600/600 [==============================] - 9s - loss: 0.0100 - acc: 0.9967     
Epoch 20/40
600/600 [==============================] - 9s - loss: 0.0118 - acc: 0.9983     
Epoch 21/40
600/600 [==============================] - 9s - loss: 0.0086 - acc: 0.9983     
Epoch 22/40
600/600 [==============================] - 9s - loss: 0.0099 - acc: 0.9983     
Epoch 23/40
600/600 [==============================] - 9s - loss: 0.0200 - acc: 0.9917     
Epoch 24/40
600/600 [==============================] - 9s - loss: 0.0099 - acc: 1.0000     
Epoch 25/40
600/600 [==============================] - 9s - loss: 0.0087 - acc: 0.9983     
Epoch 26/40
600/600 [==============================] - 9s - loss: 0.0095 - acc: 0.9967     
Epoch 27/40
600/600 [==============================] - 9s - loss: 0.0075 - acc: 1.0000     
Epoch 28/40
600/600 [==============================] - 9s - loss: 0.0072 - acc: 1.0000     
Epoch 29/40
600/600 [==============================] - 9s - loss: 0.0076 - acc: 0.9983     
Epoch 30/40
600/600 [==============================] - 9s - loss: 0.0052 - acc: 0.9983     
Epoch 31/40
600/600 [==============================] - 9s - loss: 0.0058 - acc: 0.9983     
Epoch 32/40
600/600 [==============================] - 8s - loss: 0.0074 - acc: 0.9967     
Epoch 33/40
600/600 [==============================] - 9s - loss: 0.0044 - acc: 1.0000     
Epoch 34/40
600/600 [==============================] - 8s - loss: 0.0054 - acc: 0.9983     
Epoch 35/40
600/600 [==============================] - 8s - loss: 0.0159 - acc: 0.9933     
Epoch 36/40
600/600 [==============================] - 8s - loss: 0.0149 - acc: 0.9950     
Epoch 37/40
600/600 [==============================] - 8s - loss: 0.0116 - acc: 0.9967     
Epoch 38/40
600/600 [==============================] - 8s - loss: 0.0045 - acc: 1.0000     
Epoch 39/40
600/600 [==============================] - 8s - loss: 0.0064 - acc: 0.9967     
Epoch 40/40
600/600 [==============================] - 8s - loss: 0.0034 - acc: 1.0000     





<keras.callbacks.History at 0x7f43b4792f28>
Step 4: evaluate the model

Hint: Use the 'X_test' and 'Y_test' variables to evaluate the model's performance.

### START CODE HERE ### (1 line)
preds = happyModel.evaluate(X_test,Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
150/150 [==============================] - 1s     

Loss = 0.0785783381263
Test Accuracy = 0.953333337307
Expected performance

If your HappyModel() function worked, its accuracy should be better than random guessing (50% accuracy).

To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini-batch size of 16 and the "adam" optimizer.

Tips for improving your model

If you have not yet achieved a very good accuracy (>= 80%), here are some tips:

  • Use blocks of CONV->BATCHNORM->RELU such as:
    X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)
    until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You can then flatten the volume and use a fully-connected layer.
  • Use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
  • Change your optimizer. We find 'adam' works well.
  • If you get memory issues, lower your batch_size (e.g. 12).
  • Run more epochs until you see the train accuracy no longer improves.

Note: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. Normally, you'll want separate dev and test sets. The dev set is used for parameter tuning, and the test set is used once to estimate the model's performance in production.
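
One lightweight way to carve out a dev set in Keras (a sketch; the 0.2 split is just an illustrative value) is to let fit() hold back part of the training data for validation:

happyModel.fit(x=X_train, y=Y_train,
               epochs=40, batch_size=16,
               validation_split=0.2)   # holds out the last 20% of the training data as a dev set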

3 - Conclusion

Congratulations, you have created a proof of concept for "happiness detection"!

Key Points to remember

  • Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures.
  • Remember the four steps in Keras:
  1. Create
  2. Compile
  3. Fit/Train
  4. Evaluate/Test

4 - Test with your own image (Optional)

Congratulations on finishing this assignment. You can now take a picture of your face and see if it can classify whether your expression is "happy" or "not happy". To do that:

  1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
  2. Add your image to this Jupyter Notebook's directory, in the "images" folder
  3. Write your image's name in the following code
  4. Run the code and check if the algorithm is right (0 is not happy, 1 is happy)!

The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!

### START CODE HERE ###
img_path = 'images/my_image.jpg'
img_path = 'images/2020-7-9 下午12.50拍摄的照片.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happyModel.predict(x))
[[ 1.]]
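
The model outputs a sigmoid probability, so to turn the prediction into a class label you can threshold it at 0.5 (a sketch; the cutoff is the usual convention rather than something fixed by the assignment):

prob = happyModel.predict(x)[0, 0]   # sigmoid output in [0, 1]
label = 1 if prob >= 0.5 else 0      # 1 = "happy", 0 = "not happy"
print(prob, label)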

5 - Other useful functions in Keras (Optional)

Two other basic features of Keras that you'll find useful are:

  • model.summary(): prints the details of your layers in a table with the sizes of their inputs/outputs.
  • plot_model(): plots your graph in a nice layout. You can save it as a ".png" file (via the to_file argument) if you'd like to share it; the saved file shows up under "File" then "Open..." in the upper bar of the notebook.

Run the following code.

happyModel.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_4 (InputLayer)         (None, 64, 64, 3)         0         
_________________________________________________________________
zero_padding2d_4 (ZeroPaddin (None, 66, 66, 3)         0         
_________________________________________________________________
conv0 (Conv2D)               (None, 64, 64, 32)        896       
_________________________________________________________________
bn0 (BatchNormalization)     (None, 64, 64, 32)        128       
_________________________________________________________________
activation_4 (Activation)    (None, 64, 64, 32)        0         
_________________________________________________________________
max_pool (MaxPooling2D)      (None, 32, 32, 32)        0         
_________________________________________________________________
flatten_4 (Flatten)          (None, 32768)             0         
_________________________________________________________________
fc (Dense)                   (None, 1)                 32769     
=================================================================
Total params: 33,793
Trainable params: 33,729
Non-trainable params: 64
_________________________________________________________________
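
The parameter counts in the table can be reproduced by hand; this quick check (not part of the assignment) assumes the 3x3-filter model implemented in HappyModel() above:

conv0 = (3 * 3 * 3 + 1) * 32              # (kernel h * w * in_channels + bias) * filters = 896
bn0 = 4 * 32                              # gamma, beta, moving mean, moving variance per channel = 128
fc = 32 * 32 * 32 * 1 + 1                 # 32768 flattened units * 1 output + bias = 32769
print(conv0, bn0, fc, conv0 + bn0 + fc)   # 896 128 32769 33793; the 64 non-trainable params are BN's moving statistics
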
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))