Official example source: https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.classification.MultilayerPerceptronClassifier
>>> from pyspark.ml.linalg import Vectors
>>> df = spark.createDataFrame([
... (0.0, Vectors.dense([0.0, 0.0])),
... (1.0, Vectors.dense([0.0, 1.0])),
... (1.0, Vectors.dense([1.0, 0.0])),
... (0.0, Vectors.dense([1.0, 1.0]))], ["label", "features"])
>>> mlp = MultilayerPerceptronClassifier(maxIter=100, layers=[2, 2, 2], blockSize=1, seed=123)
>>> model = mlp.fit(df)
>>> model.layers
[2, 2, 2]
>>> model.weights.size
12
>>> testDF = spark.createDataFrame([
... (Vectors.dense([1.0, 0.0]),),
... (Vectors.dense([0.0, 0.0]),)], ["features"])
>>> model.transform(testDF).select("features", "prediction").show()
+---------+----------+
| features|prediction|
+---------+----------+
|[1.0,0.0]| 1.0|
|[0.0,0.0]| 0.0|
+---------+----------+
...
>>> mlp_path = temp_path + "/mlp"
>>> mlp.save(mlp_path)
>>> mlp2 = MultilayerPerceptronClassifier.load(mlp_path)
>>> mlp2.getBlockSize()
1
>>> model_path = temp_path + "/mlp_model"
>>> model.save(model_path)
>>> model2 = MultilayerPerceptronClassificationModel.load(model_path)
>>> model.layers == model2.layers
True
>>> model.weights == model2.weights
True
>>> mlp2 = mlp2.setInitialWeights(list(range(0, 12)))
>>> model3 = mlp2.fit(df)
>>> model3.weights != model2.weights
True
>>> model3.layers == model.layers
True
The constructor signature is:
class pyspark.ml.classification.MultilayerPerceptronClassifier(featuresCol='features', labelCol='label', predictionCol='prediction', maxIter=100, tol=1e-06, seed=None, layers=None, blockSize=128, stepSize=0.03, solver='l-bfgs', initialWeights=None, probabilityCol='probability', rawPredictionCol='rawPrediction')
Interpreting the `layers` parameter:
layers=[8, 9, 8, 2]
This specifies the network's layer sizes: an input layer of 8 nodes, matching the number of features; two hidden layers with 9 and 8 nodes respectively; and an output layer of 2 nodes (binary classification). Note that if your training set is fed to the model with the features and the target together in one table, you need to subtract 1 from the total column count when working out the number of input features.
blockSize is the block size for stacking input data in matrices, which speeds up the computation. Data is stacked within partitions.
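The "total columns minus one for the target" rule above can be sketched in plain Python; the column names here are hypothetical, just to show how the layer list is assembled:

```python
# Hypothetical column list: 8 feature columns plus the target column.
columns = ["f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8", "label"]

# The target comes in together with the features, so subtract 1
# to get the number of input-layer nodes.
n_features = len(columns) - 1

# [input, hidden1, hidden2, output]: 8 -> 9 -> 8 -> 2 (binary classification)
layers = [n_features, 9, 8, 2]
print(layers)  # [8, 9, 8, 2]
```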
When I was using GBDT myself, I was confused about the difference between GBTClassificationModel and GBTClassifier, since both support save and load.
The official examples make this clear: GBTClassifier is the untrained (initialized) estimator, while GBTClassificationModel is the model returned by fit. If you want to save and load a trained model, use GBTClassificationModel.
It took me a while to track down the evaluation metrics; they are selected by name like this (f1 | weightedPrecision | weightedRecall | accuracy). Note that labelCol specifies the target column, while predictionCol specifies the column holding the model's predictions.
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

predictionAndLabels = result.select("prediction", "label")
for metric in ["accuracy", "weightedRecall", "weightedPrecision", "f1"]:
    evaluator = MulticlassClassificationEvaluator(labelCol="label",
                                                  predictionCol="prediction",
                                                  metricName=metric)
    print("Test set {} = {}".format(metric, evaluator.evaluate(predictionAndLabels)))