
XGBoostError: Check failed: typestr.size() == 3 (2 vs. 3): `typestr` should be of format <endian><type><size in bytes>

Stack Overflow user
Asked on 2021-04-14 15:55:17 · 4 answers · 2.8K views · Score 3

I have a strange problem with a fresh install of xgboost. Under normal circumstances it works fine. However, when I use the model inside the function below, it throws the error in the title.

The dataset I'm using is borrowed from Kaggle and can be found here: https://www.kaggle.com/kemical/kickstarter-projects

The function I use to fit my model is as follows:

# Imports needed for this function (not shown in the original snippet):
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score

def get_val_scores(model, X, y, return_test_score=False, return_importances=False, random_state=42, randomize=True, cv=5, test_size=0.2, val_size=0.2, use_kfold=False, return_folds=False, stratify=True):
    print("Splitting data into training and test sets")
    if randomize:
        if stratify:
            X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, stratify=y, shuffle=True, random_state=random_state)
        else:
            X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, shuffle=True, random_state=random_state)
    else:
        if stratify:
            X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, stratify=y, shuffle=False)
        else:
            X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, shuffle=False)
    print(f"Shape of training data, X: {X_train.shape}, y: {y_train.shape}.  Test, X: {X_test.shape}, y: {y_test.shape}")
    if use_kfold:
        val_scores = cross_val_score(model, X=X_train, y=y_train, cv=cv)
    else:
        print("Further splitting training data into validation sets")
        if randomize:
            if stratify:
                X_train_, X_val, y_train_, y_val = train_test_split(X_train, y_train, test_size=val_size, stratify=y_train, shuffle=True)
            else:
                X_train_, X_val, y_train_, y_val = train_test_split(X_train, y_train, test_size=val_size, shuffle=True)
        else:
            if stratify:
                print("Warning! You opted to both stratify your training data and to not randomize it.  These settings are incompatible with scikit-learn.  Stratifying the data, but shuffle is being set to True")
                X_train_, X_val, y_train_, y_val = train_test_split(X_train, y_train, test_size=val_size, stratify=y_train,  shuffle=True)
            else:
                X_train_, X_val, y_train_, y_val = train_test_split(X_train, y_train, test_size=val_size, shuffle=False)
        print(f"Shape of training data, X: {X_train_.shape}, y: {y_train_.shape}.  Val, X: {X_val.shape}, y: {y_val.shape}")
        print("Getting ready to fit model.")
        model.fit(X_train_, y_train_)
        val_score = model.score(X_val, y_val)
        
    if return_importances:
        if hasattr(model, 'steps'):
            try:
                feats = pd.DataFrame({
                    'Columns': X.columns,
                    'Importance': model[-2].feature_importances_
                }).sort_values(by='Importance', ascending=False)
            except:
                model.fit(X_train, y_train)
                feats = pd.DataFrame({
                    'Columns': X.columns,
                    'Importance': model[-2].feature_importances_
                }).sort_values(by='Importance', ascending=False)
        else:
            try:
                feats = pd.DataFrame({
                    'Columns': X.columns,
                    'Importance': model.feature_importances_
                }).sort_values(by='Importance', ascending=False)
            except:
                model.fit(X_train, y_train)
                feats = pd.DataFrame({
                    'Columns': X.columns,
                    'Importance': model.feature_importances_
                }).sort_values(by='Importance', ascending=False)
            
    mod_scores = {}
    try:
        mod_scores['validation_score'] = val_scores.mean()
        if return_folds:
            mod_scores['fold_scores'] = val_scores
    except:
        mod_scores['validation_score'] = val_score
        
    if return_test_score:
        mod_scores['test_score'] =  model.score(X_test, y_test)
            
    if return_importances:
        return mod_scores, feats
    else:
        return mod_scores

The strange part I'm running into is that if I create a pipeline in sklearn, it works on the dataset outside of the function, but not inside it. For example:

from sklearn.pipeline import make_pipeline
from category_encoders import OrdinalEncoder
from xgboost import XGBClassifier

pipe = make_pipeline(OrdinalEncoder(), XGBClassifier())

X = df.drop('state', axis=1)
y = df['state']

In this case, pipe.fit(X, y) works just fine, but get_val_scores(pipe, X, y) fails with the error message in the title. Even stranger, get_val_scores(pipe, X, y) seems to work on other datasets, such as Titanic. The error is raised when the model is fit on X_train and y_train.

The loss function in this case is binary:logistic, and the state column takes the values successful and failed.
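To make the failure concrete, here is a minimal reproduction sketch of what is described above; df is assumed to be the Kickstarter projects dataframe from the Kaggle link, restricted to the two state values:

# Sketch only: df is assumed to be the Kickstarter dataframe, with 'state'
# restricted to the string labels 'successful' and 'failed'.
X = df.drop('state', axis=1)
y = df['state']

pipe.fit(X, y)                         # works fine on its own
scores = get_val_scores(pipe, X, y)    # raises the XGBoostError from the title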


4 Answers

Stack Overflow user
Accepted answer
Posted on 2021-04-24 10:51:57

The xgboost library is currently being updated to fix this error, so for now the workaround is to downgrade to an older version. In my case, I solved the problem by downgrading to xgboost v0.90.

Try checking your xgboost version via cmd:

python 

import xgboost

print(xgboost.__version__)

exit()
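Equivalently, the version can be checked without opening an interactive session (assuming python and pip are on your PATH):

python -c "import xgboost; print(xgboost.__version__)"
pip show xgboost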

If the version is not 0.90, uninstall the current version with:

pip uninstall xgboost

Then install xgboost version 0.90:

pip install xgboost==0.90
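The uninstall/install pair can also be collapsed into a roughly equivalent single command:

pip install --force-reinstall xgboost==0.90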

Run your code again!

Score: 2

Stack Overflow user
Posted on 2021-04-28 18:56:47

I was using Python 3.8.6 on macOS Big Sur and just hit this error with xgboost==1.4.0 and 1.4.1. When I downgraded to 1.3.3, the problem went away. Try upgrading or downgrading depending on your current version.
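For reference, the corresponding pip commands (pick the direction that applies to your current version):

pip install xgboost==1.3.3       # downgrade to a version without the bug
pip install --upgrade xgboost    # or upgrade once a fixed release is out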

Score: 1

Stack Overflow user
Posted on 2021-04-30 16:54:19

This error will be fixed in XGBoost 1.4.2.

See: https://github.com/dmlc/xgboost/pull/6927
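Once that release is available, upgrading past the affected 1.4.0/1.4.1 versions should be enough:

pip install --upgrade "xgboost>=1.4.2"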

Score: 1
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/67095097
