A Comprehensive Overview of NLP Libraries for Python

1. Introduction

Python enjoys rich support for natural language processing. Starting with text processing (tokenizing text and determining its lemmas), through syntactic analysis (parsing text and assigning syntactic roles), to semantic processing such as named entity recognition, sentiment analysis, and document classification, every task is covered by at least one library. So where do you start?

The goal of this article is to give an overview of relevant Python libraries for each core NLP task. The libraries are introduced with a brief explanation and a concrete code snippet for the NLP task at hand. Continuing my introduction to NLP blog posts, this article covers only libraries for the core NLP tasks of text processing, syntactic and semantic analysis, and document semantics. In addition, in the NLP utilities category, libraries for corpus management and datasets are presented.

The following libraries are covered:

  • NLTK
  • TextBlob
  • spaCy
  • scikit-learn
  • Gensim

This article originally appeared on the blog admantium.com.

2. Core NLP Tasks

2.1 Text Processing

Tasks: tokenization, lemmatization, stemming

The NLTK library provides a complete toolkit for text processing, including tokenization, stemming, and lemmatization.


from nltk.tokenize import sent_tokenize, word_tokenize
# the tokenizer models must be available; if missing, run once:
# import nltk; nltk.download('punkt')

paragraph = '''Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.'''
sentences = []
for sent in sent_tokenize(paragraph):
  sentences.append(word_tokenize(sent))

sentences[0]
# ['Artificial', 'intelligence', 'was', 'founded', 'as', 'an', 'academic', 'discipline'
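
The snippet above shows tokenization only. Stemming and lemmatization follow the same pattern in NLTK; the following is a minimal sketch, assuming the WordNet data has been downloaded:

from nltk.stem import PorterStemmer, WordNetLemmatizer
# requires: nltk.download('wordnet')

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print(stemmer.stem('approaches'))
# approach

print(lemmatizer.lemmatize('approaches'))
# approach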

TextBlob supports the same text-processing tasks. It differs from NLTK in its more advanced semantic results and its easy-to-use data structures: parsing a sentence already yields rich semantic information.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

blob = TextBlob(text)
blob.ngrams()
#[WordList(['Artificial', 'intelligence', 'was']),
# WordList(['intelligence', 'was', 'founded']),
# WordList(['was', 'founded', 'as']),

blob.tokens
# WordList(['Artificial', 'intelligence', 'was', 'founded', 'as', 'an', 'academic', 'discipline', 'in', '1956', ',', 'and', 'in',
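
Lemmatization is exposed on TextBlob's Word objects, which delegate to NLTK's WordNet lemmatizer. A short sketch, assuming the TextBlob corpora have been fetched via python -m textblob.download_corpora:

from textblob import Word

Word('approaches').lemmatize()
# 'approach'

Word('founded').lemmatize('v')  # a POS hint improves verb lemmas
# 'found'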

With the modern NLP library spaCy, text processing is merely the first step in a rich pipeline of mostly semantic tasks. Unlike the other libraries, it requires loading a model for the target language first. Recent models are not heuristic but artificial neural networks, especially transformers, which provide richer abstractions that combine better with other models.

import spacy
# install the model once: python -m spacy download en_core_web_lg
nlp = spacy.load('en_core_web_lg')

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

doc = nlp(text)
tokens = [token for token in doc]

print(tokens)
# [Artificial, intelligence, was, founded, as, an, academic, discipline
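
Lemmas come for free with the same pipeline run; no separate step is needed. A small sketch (the exact lemma strings can vary slightly between model versions):

# doc[0] is the leading newline token, so start at index 1
for token in doc[1:5]:
    print(token.text, token.lemma_)
# Artificial artificial
# intelligence intelligence
# was be
# founded found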

3. Syntactic Analysis

Tasks: parsing, part-of-speech tagging, noun-phrase extraction

Starting with NLTK, all syntactic tasks are supported. Their output is provided as native Python data structures, and it can always be displayed as plain text.

from nltk.tokenize import word_tokenize
from nltk import pos_tag, RegexpParser
# requires: nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

pos_tag(word_tokenize(text))
# [('Artificial', 'JJ'),
#  ('intelligence', 'NN'),
#  ('was', 'VBD'),
#  ('founded', 'VBN'),
#  ('as', 'IN'),
#  ('an', 'DT'),
#  ('academic', 'JJ'),
#  ('discipline', 'NN'), ...]

# noun chunk parser
# source: https://www.nltk.org/book_1ed/ch07.html

grammar = "NP: {<DT>?<JJ>*<NN>}"
parser = RegexpParser(grammar)
parser.parse(pos_tag(word_tokenize(text)))
#(S
#  (NP Artificial/JJ intelligence/NN)
#  was/VBD
#  founded/VBN
#  as/IN
#  (NP an/DT academic/JJ discipline/NN)
#  in/IN
#  1956/CD

TextBlob delivers POS tags as soon as it processes a text. A second method creates a parse tree containing rich syntactic information.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''
blob = TextBlob(text)

blob.tags
#[('Artificial', 'JJ'),
# ('intelligence', 'NN'),
# ('was', 'VBD'),
# ('founded', 'VBN'),

blob.parse()
# Artificial/JJ/B-NP/O
# intelligence/NN/I-NP/O
# was/VBD/B-VP/O
# founded/VBN/I-VP/O
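
Noun-phrase extraction, the third task named above, is a one-liner in TextBlob; note that the extractor returns lowercased phrases:

blob.noun_phrases
# WordList(['artificial intelligence', ...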

The spaCy library uses neural-network models, including transformers, to support its syntactic tasks.

import spacy
nlp = spacy.load('en_core_web_lg')

for token in nlp(text):
    print(f'{token.text:<20}{token.pos_:>5}{token.tag_:>5}')

#Artificial            ADJ   JJ
#intelligence         NOUN   NN
#was                   AUX  VBD
#founded              VERB  VBN
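
spaCy also covers noun-phrase extraction: noun chunks are available directly on the parsed document. A minimal sketch (chunk boundaries can differ between model versions):

doc = nlp(text)
for chunk in list(doc.noun_chunks)[:3]:
    print(chunk.text)
# Artificial intelligence
# an academic discipline
# the years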

4. Semantic Analysis

Tasks: named entity recognition, word sense disambiguation, semantic role labeling

Semantic analysis is where NLP approaches begin to differ. When using NLTK, the generated syntactic information is looked up in dictionaries to identify, for example, named entities. As a consequence, entities may not be recognized when working with newer texts.

from nltk import download as nltk_download
from nltk.tokenize import word_tokenize
from nltk import pos_tag, ne_chunk

nltk_download('punkt')
nltk_download('averaged_perceptron_tagger')
nltk_download('maxent_ne_chunker')
nltk_download('words')
text = '''
As of 2016, only three nations have flown crewed spacecraft: USSR/Russia, USA, and China. The first crewed spacecraft was Vostok 1, which carried Soviet cosmonaut Yuri Gagarin into space in 1961, and completed a full Earth orbit. There were five other crewed missions which used a Vostok spacecraft. The second crewed spacecraft was named Freedom 7, and it performed a sub-orbital spaceflight in 1961 carrying American astronaut Alan Shepard to an altitude of just over 187 kilometers (116 mi). There were five other crewed missions using Mercury spacecraft.
'''

print(ne_chunk(pos_tag(word_tokenize(text))))
# (S
#   As/IN
#   of/IN
#   [...]
#   (ORGANIZATION USA/NNP)
#   [...]
#   which/WDT
#   carried/VBD
#   (GPE Soviet/JJ)
#   cosmonaut/NN
#   (PERSON Yuri/NNP Gagarin/NNP)
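
Word sense disambiguation, the second task listed above, is available in NLTK through an implementation of the Lesk algorithm. A minimal sketch; the returned synset depends on the installed WordNet data:

from nltk.wsd import lesk
from nltk.tokenize import word_tokenize
# requires: nltk.download('wordnet')

sent = 'The spacecraft completed a full Earth orbit'
print(lesk(word_tokenize(sent), 'orbit'))
# e.g. Synset('orbit.v.01')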

The transformer models used by spaCy contain an implicit timestamp: their training date. This determines which texts the model has seen, and therefore which entities the model is able to recognize.

import spacy
nlp = spacy.load('en_core_web_lg')

text = '''
As of 2016, only three nations have flown crewed spacecraft: USSR/Russia, USA, and China. The first crewed spacecraft was Vostok 1, which carried Soviet cosmonaut Yuri Gagarin into space in 1961, and completed a full Earth orbit. There were five other crewed missions which used a Vostok spacecraft. The second crewed spacecraft was named Freedom 7, and it performed a sub-orbital spaceflight in 1961 carrying American astronaut Alan Shepard to an altitude of just over 187 kilometers (116 mi). There were five other crewed missions using Mercury spacecraft.
'''
doc = nlp(text)
for token in doc.ents:
    print(f'{token.text:<25}{token.label_:<15}')
# 2016                   DATE
# only three             CARDINAL
# USSR                   GPE
# Russia                 GPE
# USA                    GPE
# China                  GPE
# first                  ORDINAL
# Vostok 1               PRODUCT
# Soviet                 NORP
# Yuri Gagarin           PERSON

5. Document Semantics

Tasks: text classification, topic modeling, sentiment analysis, toxicity detection

Sentiment analysis is another task where NLP approaches differ: looking up word meanings in a lexicon versus learned word similarities encoded in word or document vectors.

TextBlob has built-in sentiment analysis that returns the polarity (the overall positive or negative connotation) and the subjectivity (the degree of personal opinion) of a text.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

blob = TextBlob(text)
blob.sentiment
#Sentiment(polarity=0.16180290297937355, subjectivity=0.42155589508530683)
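
The same scores are available per sentence, which helps locate the positive and negative passages inside a longer document:

for sentence in blob.sentences:
    print(sentence.sentiment.polarity)
# one polarity value per sentence, in document order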

spaCy does not ship a text-classification feature, but it can be extended as a separate pipeline step. The code below is lengthy and involves several spaCy-internal objects and data structures; a future article will explain it in more detail.

import spacy
from spacy.tokens import DocBin

## train single-label categorization from a multi-label dataset
def convert_single_label(dataset, filename):
    db = DocBin()
    nlp = spacy.load('en_core_web_lg')

    for fileid in dataset:
        # one-hot dictionary with the document's first category set to 1
        cat_dict = {cat: 0 for cat in dataset.categories()}
        cat_dict[dataset.categories(fileid).pop()] = 1
        doc = nlp(get_text(fileid))  # get_text: project helper returning the raw text
        doc.cats = cat_dict
        db.add(doc)
    db.to_disk(filename)

## load trained model and apply to text
nlp = spacy.load('textcat_multilabel_model/model-best')
text = dataset.raw(42)
doc = nlp(text)
estimated_cats = sorted(doc.cats.items(), key=lambda i:float(i[1]), reverse=True)

print(dataset.categories(42))
# ['orange']

print(estimated_cats)
# [('nzdlr', 0.998894989490509), ('money-supply', 0.9969857335090637), ... ('orange', 0.7344251871109009),

scikit-learn is a general-purpose machine learning library offering many clustering and classification algorithms. It works on numeric input only, so the text needs to be vectorized first, for example with Gensim's pretrained word vectors, or with one of the built-in feature vectorizers. As just one example, here is a snippet that converts raw text into word vectors and then applies the KMeans clustering algorithm to them.

from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import KMeans

vectorizer = DictVectorizer(sparse=False)
# dataset['train'] is assumed to be a list of feature dictionaries
x_train = vectorizer.fit_transform(dataset['train'])
kmeans = KMeans(n_clusters=8, random_state=0, n_init="auto").fit(x_train)

print(kmeans.labels_.shape)
# (8551, )

print(kmeans.labels_)
# [4 4 4 ... 6 6 6]
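
The same vectorize-then-fit pattern applies to supervised text classification. The following sketch, with illustrative parameter choices, combines the built-in TfidfVectorizer with a linear classifier on the 20 newsgroups dataset:

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train = fetch_20newsgroups(subset='train')
test = fetch_20newsgroups(subset='test')

# turn raw text into TF-IDF weighted term vectors
vectorizer = TfidfVectorizer(max_features=20000)
x_train = vectorizer.fit_transform(train.data)
x_test = vectorizer.transform(test.data)

clf = LogisticRegression(max_iter=1000).fit(x_train, train.target)
print(clf.score(x_test, test.target))
# accuracy on the held-out test split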

Finally, Gensim is a library specialized in topic classification on large-scale corpora. The following snippet loads a built-in dataset, vectorizes the tokens of each document, and runs the LDA topic-clustering algorithm. When running on CPU only, this can take up to 15 minutes.

# source: https://radimrehurek.com/gensim/auto_examples/tutorials/run_lda.html, https://radimrehurek.com/gensim/auto_examples/howtos/run_downloader_api.html

import logging
import gensim.downloader as api
from gensim.corpora import Dictionary
from gensim.models import LdaModel

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
docs = api.load('text8')
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

_ = dictionary[0]
id2word = dictionary.id2token
# Define and train the model

model = LdaModel(
    corpus=corpus,
    id2word=id2word,
    chunksize=2000,
    alpha='auto',
    eta='auto',
    iterations=400,
    num_topics=10,
    passes=20,
    eval_every=None
)

print(model.num_topics)
# 10

print(model.top_topics(corpus)[6])
#  ([(4.201401e-06, 'done'),
#    (4.1998064e-06, 'zero'),
#    (4.1478743e-06, 'eight'),
#    (4.1257395e-06, 'one'),
#    (4.1166854e-06, 'two'),
#    (4.085097e-06, 'six'),
#    (4.080696e-06, 'language'),
#    (4.050306e-06, 'system'),
#    (4.041121e-06, 'network'),
#    (4.0385708e-06, 'internet'),
#    (4.0379923e-06, 'protocol'),
#    (4.035399e-06, 'open'),
#    (4.033435e-06, 'three'),
#    (4.0334166e-06, 'interface'),
#    (4.030141e-06, 'four'),
#    (4.0283044e-06, 'seven'),
#    (4.0163245e-06, 'no'),
#    (4.0149207e-06, 'i'),
#    (4.0072555e-06, 'object'),
#    (4.007036e-06, 'programming')],

6. Utilities

6.1 Corpus Management

NLTK provides corpus readers for plain text, Markdown, and even Twitter feeds in JSON format. A reader is created by passing a file path, and it then offers basic statistics as well as iterators over all the files it finds.

from nltk.corpus.reader.plaintext import PlaintextCorpusReader

corpus = PlaintextCorpusReader('wikipedia_articles', r'.*\.txt')

print(corpus.fileids())
# ['AI_alignment.txt', 'AI_safety.txt', 'Artificial_intelligence.txt', 'Machine_learning.txt', ...]

print(len(corpus.sents()))
# 47289

print(len(corpus.words()))
# 1146248

Gensim processes text files into a per-document word-vector representation that can then be used for its main use case, topic classification. The documents need to be handled by an iterator that wraps a directory traversal, and the corpus is then built as a collection of word vectors. However, this corpus representation is hard to externalize and reuse with other libraries. The snippet below is an excerpt from the code above: it loads a dataset bundled with Gensim and then creates the word-vector-based representation.

import gensim.downloader as api
from gensim.corpora import Dictionary

docs = api.load('text8')
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

print('Number of unique tokens: %d' % len(dictionary))
# Number of unique tokens: 253854

print('Number of documents: %d' % len(corpus))
# Number of documents: 1701
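
To soften the reuse problem just mentioned, both the dictionary and the bag-of-words corpus can at least be persisted in Gensim's own formats. A short sketch using Gensim's built-in serialization (file names are illustrative):

from gensim.corpora import Dictionary, MmCorpus

dictionary.save('text8.dict')               # token <-> id mapping
MmCorpus.serialize('text8_bow.mm', corpus)  # Matrix Market format

# reload later without re-processing the raw text
dictionary = Dictionary.load('text8.dict')
corpus = MmCorpus('text8_bow.mm')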

7. Datasets

NLTK ships several ready-to-use datasets, such as Reuters news excerpts, European Parliament proceedings, and open books from the Gutenberg collection. See the complete list of datasets and models.

from nltk.corpus import reuters
# requires: nltk.download('reuters')

print(len(reuters.fileids()))
#10788

print(reuters.categories()[:43])
# ['acq', 'alum', 'barley', 'bop', 'carcass', 'castor-oil', 'cocoa', 'coconut', 'coconut-oil', 'coffee', 'copper', 'copra-cake', 'corn', 'cotton', 'cotton-oil', 'cpi', 'cpu', 'crude', 'dfl', 'dlr', 'dmk', 'earn', 'fuel', 'gas', 'gnp', 'gold', 'grain', 'groundnut', 'groundnut-oil', 'heat', 'hog', 'housing', 'income', 'instal-debt', 'interest', 'ipi', 'iron-steel', 'jet', 'jobs', 'l-cattle', 'lead', 'lei', 'lin-oil']
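
The corpus reader also supports filtering by category, for example to fetch only the documents of a single topic:

coffee_files = reuters.fileids('coffee')
print(len(coffee_files))
# number of documents labeled 'coffee'

print(reuters.raw(coffee_files[0])[:80])
# first characters of the first coffee document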

scikit-learn includes datasets from newsgroups, real estate, and even IT intrusion detection; see the complete list. Here is a quick example using the newsgroups dataset.

from sklearn.datasets import fetch_20newsgroups
dataset = fetch_20newsgroups()
dataset.data[1]
# "From: guykuo@carson.u.washington.edu (Guy Kuo)\nSubject: SI Clock Poll - Final Call\nSummary: Final call for SI clock reports\nKeywords: SI,acceleration,clock,upgrade\nArticle-I.D.: shelley.1qvfo9INNc3s\nOrganization: University of Washington\nLines: 11\nNNTP-Posting-Host: carson.u.washington.edu\n\nA fair number of brave souls who upgraded their SI clock oscillator have\nshared their experiences for this poll.

8. Conclusion

For NLP projects in Python, an abundance of libraries is available. To get you started, this article provided an NLP-task-driven overview with compact library explanations and code snippets. Starting with text processing, you saw how to create tokens and lemmas from text. Continuing with syntactic analysis, you learned how to generate part-of-speech tags and the grammatical structure of sentences. Arriving at semantics, recognizing named entities in a text, as well as text sentiment, can also be solved in a few lines of code. For the additional tasks of corpus management and accessing pre-structured datasets, you also saw library examples. In summary, this article should give you a good start into your next NLP project when working on core NLP tasks.

The evolution of NLP approaches toward neural networks, and large language models in particular, has triggered the creation and adaptation of a whole new set of libraries, covering text vectorization, neural-network definition and training, the application of language-generation models, and more. These models address all of the advanced NLP tasks and will be covered in future articles.

Sebastian
