I am working on a project that classifies tweets into Health and Politics categories, using the Naive Bayes algorithm for classification.
I am trying to improve the accuracy of the Naive Bayes classification by using POS tags, since I believe adding this linguistic information will improve classification performance.
After preprocessing and applying POS tagging, my dataset looks like this:
ID tweet Category pos_tagged_tweet
1 හාමුදුරුවරු සියලූම පූජකයන්ගේ මානසික සෞඛ්යය Health [(හාමුදුරුවරු, NNC), (සියලූම, NNC), (පූජකයන්ගේ, NNC), (මානසික, JJ), (සෞඛ්යය, NNC), (., FS)]
2 ද්විපාර්ශවික එකඟතා ජන ජීවිත සෞඛ්යය මනාව Politics [(ද්විපාර්ශවික, NNP), (එකඟතා, NNP), (ජන, JJ), (ජීවිත, NNJ), (සෞඛ්යය, NNC), (මනාව, RB), (., FS)]
3 කරැනාකර චින නිෂ්පාදිත එන්නත ලබාගත් Health [(කරැනාකර, NNC), (චින, VP), (නිෂ්පාදිත, VP), (එන්නත, NNC), (ලබාගත්, VP),(., FS)]
...
I need to know how to feed the pos_tagged_tweet column and the Category column into the Naive Bayes algorithm so it can classify a tweet as health-related or political. My implementation uses Python and NLTK.
Posted on 2021-09-20 07:15:55
My idea is to replace the words in each sentence with their corresponding POS tags and form new attributes, like this:
sentence = ["A quick brown fox jumped over the cat",
"An apple fell from a tree",
"I like old western classics"]
tokenized_sents = [nltk.word_tokenize(i) for i in sentence]
print(tokenized_sents)
pos_tags = [nltk.pos_tag(token) for token in tokenized_sents]
print(pos_tags)
[[('A', 'DT'), ('quick', 'JJ'), ('brown', 'NN'), ('fox', 'NN'), ('jumped', 'VBD'), ('over', 'IN'), ('the', 'DT'), ('cat', 'NN')], [('An', 'DT'), ('apple', 'NN'), ('fell', 'VBD'), ('from', 'IN'), ('a', 'DT'), ('tree', 'NN')], [('I', 'PRP'), ('like', 'VBP'), ('old', 'JJ'), ('western', 'JJ'), ('classics', 'NNS')]]
Now create word vectors from the POS tags by replacing the words in each sentence with their tags.
# from gensim.test.utils import common_texts
from gensim.models import Word2Vec

pos_tag_list = [['DT', 'JJ', 'NN', 'NN', 'VBD', 'IN', 'DT', 'NN'],
                ['DT', 'NN', 'VBD', 'IN', 'DT', 'NN'],
                ['PRP', 'VBP', 'JJ', 'JJ', 'NNS']]

w2v_model = Word2Vec(min_count=1,
                     window=2,
                     vector_size=30,   # called `size` in gensim < 4.0
                     sample=1e-5,
                     alpha=0.01,
                     min_alpha=0.0007,
                     negative=5,       # negative=0 with hs=0 would disable training
                     workers=2)
w2v_model.build_vocab(pos_tag_list, progress_per=1)
w2v_model.train(pos_tag_list, total_examples=w2v_model.corpus_count, epochs=3, report_delay=1)

# Get the vectors for the POS tags from w2v_model
# (the vocabulary is `w2v_model.wv.vocab` in gensim < 4.0)
my_dict = {}
for key in w2v_model.wv.key_to_index:
    my_dict[key] = w2v_model.wv[key]

# Sample output vector for a POS tag; we get a 30-dimensional vector
# because we used vector_size=30.
{'DT': array([-0.01487986, 0.00341667, 0.00576919, -0.01203213, 0.01111736,
0.01643543, 0.00583243, 0.00283635, -0.00892249, 0.01334178,
0.01324782, 0.00843606, 0.00965199, 0.00849338, -0.00584444,
-0.00482766, 0.01218408, -0.00959254, -0.00172328, 0.01302824,
-0.00374165, -0.01516393, -0.00604865, 0.00170989, 0.00843781,
-0.01403714, 0.00150807, 0.01511062, 0.00798908, 0.0088043 ],
dtype=float32)}
Now, for each entry in pos_tag_list, replace the POS tags with their vectors to build a training dataset for the Naive Bayes model. You could also use actual word vectors alongside the POS tag vectors to build a more comprehensive dataset. I have not worked on this specifically, but based on the research I have seen, I think it is feasible. Try it out.
Posted on 2021-09-20 04:08:24
First, when asking a question, please follow the guidelines and include a minimal reproducible example, as described here.
Now, I assume you are using NLTK and everything is in English. First you need to tokenize your sentence, then apply POS tagging:
import nltk
sentence = "Can you help me with this?"
tokens = nltk.word_tokenize(sentence)
pos = nltk.pos_tag(tokens)
print(pos)
This will give you the following list:
[('Can', 'MD'), ('you', 'PRP'), ('help', 'VB'), ('me', 'PRP'), ('with', 'IN'), ('this', 'DT'), ('?', '.')]
You can find more information about NLTK in its documentation.
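To connect this back to the original question, here is one hedged sketch of feeding POS-tagged tweets and categories into NLTK's built-in NaiveBayesClassifier: each (word, tag) pair becomes a boolean feature. The tweets, categories, and the `word/tag` feature-naming scheme below are invented for illustration.

```python
# Hypothetical sketch: NLTK Naive Bayes over (word, tag) features.
# Training tweets and categories are placeholders, not real data.
import nltk

def features(tagged_tweet):
    """One boolean feature per (word, tag) pair in the tweet."""
    return {f'{word}/{tag}': True for word, tag in tagged_tweet}

train = [
    ([('mental', 'JJ'), ('health', 'NN'), ('clinic', 'NN')], 'Health'),
    ([('bilateral', 'JJ'), ('election', 'NN'), ('agreement', 'NN')], 'Politics'),
]
train_set = [(features(tweet), category) for tweet, category in train]
classifier = nltk.NaiveBayesClassifier.train(train_set)

test_tweet = [('health', 'NN'), ('vaccine', 'NN')]
print(classifier.classify(features(test_tweet)))
```

Unknown feature names at classification time are simply ignored by NLTK's classifier, so unseen words do not break prediction.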
https://stackoverflow.com/questions/69248442