I built a simple multilayer neural network with Keras and the precipitation data in Australia dataset. The code takes four input columns: ['MinTemp', 'MaxTemp', 'Rainfall', 'WindGustSpeed'] and trains against the RainTomorrow output.
I have partitioned the data into training/test buckets and converted all values to 0 <= n <= 1. When I try to run model.fit, my loss plateaus around -13.2 but my accuracy is always 0.0. A sample of the logged fit intervals looks like this:
...
Epoch 37/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1274 - acc: 0.0000e+00 - val_loss: -16.1168 - val_acc: 0.0000e+00
Epoch 38/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1457 - acc: 0.0000e+00 - val_loss: -16.1168 - val_acc: 0.0000e+00
Epoch 39/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1315 - acc: 0.0000e+00 - val_loss: -16.1168 - val_acc: 0.0000e+00
Epoch 40/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1797 - acc: 0.0000e+00 - val_loss: -16.1168 - val_acc: 0.0000e+00
Epoch 41/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.1844 - acc: 0.0000e+00 - val_loss: -16.1169 - val_acc: 0.0000e+00
Epoch 42/200
113754/113754 [==============================] - 0s 2us/step - loss: -13.2205 - acc: 0.0000e+00 - val_loss: -16.1169 - val_acc: 0.0000e+00
Epoch 43/200
...
How can I modify the script below so that my accuracy improves and my prediction output returns a value between 0 and 1 (0: no rain, 1: rain)?
import keras
import sklearn.model_selection
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
labelencoder = LabelEncoder()
# read data, replace NaN with 0.0
csv_data = pd.read_csv('weatherAUS.csv', header=0)
csv_data = csv_data.replace(np.nan, 0.0, regex=True)
# Input/output columns scaled to 0<=n<=1
x = csv_data.loc[:, ['MinTemp', 'MaxTemp', 'Rainfall', 'WindGustSpeed']]
y = labelencoder.fit_transform(csv_data['RainTomorrow'])
scaler_x = MinMaxScaler(feature_range=(-1, 1))
x = scaler_x.fit_transform(x)
scaler_y = MinMaxScaler(feature_range=(-1, 1))
y = scaler_y.fit_transform([y])[0]
# Partitioned data for training/testing
x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(x, y, test_size=0.2)
# model
model = keras.models.Sequential()
model.add(keras.layers.normalization.BatchNormalization(input_shape=(x_train.shape[1],)))
model.add(keras.layers.core.Dense(4, activation='relu'))
model.add(keras.layers.core.Dropout(rate=0.5))
model.add(keras.layers.normalization.BatchNormalization())
model.add(keras.layers.core.Dense(4, activation='relu'))
model.add(keras.layers.core.Dropout(rate=0.5))
model.add(keras.layers.normalization.BatchNormalization())
model.add(keras.layers.core.Dense(4, activation='relu'))
model.add(keras.layers.core.Dropout(rate=0.5))
model.add(keras.layers.core.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=["accuracy"])
callback_early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='auto')
model.fit(x_train, y_train, batch_size=1024, epochs=200, validation_data=(x_test, y_test), verbose=1, callbacks=[callback_early_stopping])
y_test = model.predict(x_test.values)
Posted on 2019-05-26 20:44:03
As you can see, the sigmoid activation function you use in the network output (last layer) has a range from 0 to 1.
Note that your labels (y) have been rescaled to -1 to 1.
I suggest you change the y range back to 0 to 1 and keep the sigmoid output.
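A minimal sketch of this fix: since LabelEncoder already maps the 'No'/'Yes' labels to 0/1, the simplest version is to drop the MinMaxScaler step on y entirely (the sample labels below are made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

labels = np.array(['No', 'Yes', 'No', 'Yes'])
y = LabelEncoder().fit_transform(labels)  # already in {0, 1}
print(y)  # [0 1 0 1]
# No MinMaxScaler(feature_range=(-1, 1)) step afterwards: these targets
# now match the sigmoid output range, so binary_crossentropy and
# accuracy behave as expected.
```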
Posted on 2019-05-27 03:21:20
So, the range of sigmoid is from 0 to 1, while your MinMaxScaler scales the data from -1 to 1.
You can fix it by replacing 'sigmoid' in the output layer with 'tanh', since tanh's output range is -1 to 1.
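As a quick numeric illustration of the two ranges (plain NumPy, independent of the model):

```python
import numpy as np

z = np.array([-5.0, 0.0, 5.0])
sigmoid = 1.0 / (1.0 + np.exp(-z))  # squashes into (0, 1)
tanh = np.tanh(z)                   # squashes into (-1, 1)
print(sigmoid)  # ~[0.0067 0.5    0.9933]
print(tanh)     # ~[-0.9999 0.    0.9999]
```

Swapping the output activation to tanh only matches the (-1, 1) target range; as the next answer notes, the classification loss and metric would still need to change.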
Posted on 2019-05-27 03:44:41
Either of the other two answers can be used to fix the fact that your network output is not in the same range as your y vector values: adjust the final layer to a tanh activation, or change the y-vector range to [0, 1].
However, your network's loss function and metric are defined for classification, whereas you are attempting regression (continuous values between -1 and 1). The most common loss function and accuracy metrics for that are mean squared error and mean absolute error. I therefore suggest you change the following:
model.compile(loss='mse', optimizer='rmsprop', metrics=['mse', 'mae'])
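The negative loss in the question's log is consistent with this mismatch: binary cross-entropy assumes targets in [0, 1], and a -1 target makes the formula go negative. A quick check with a hand-rolled version of the formula (not Keras itself, though Keras applies similar epsilon clipping, with a default epsilon of 1e-7):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Hand-rolled BCE with Keras-style epsilon clipping."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_crossentropy(1.0, 0.9))     # ~0.105: valid target, positive loss
print(binary_crossentropy(-1.0, 0.001))  # ~-6.9: negative loss for a -1 target
# Driving the prediction to the clipping floor gives log(1e-7) ~= -16.118,
# which appears to match the val_loss plateau of -16.1168 in the question's log.
print(binary_crossentropy(-1.0, 1e-9))
```

So the optimizer is "improving" the loss by pushing predictions toward 0 on the -1-labeled samples, while accuracy never counts a match.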
https://stackoverflow.com/questions/56317174