Chinese Text Classification Based on Transformer

Preface

I came across an interesting project on GitHub, Chinese-Text-Classification-Pytorch, which reproduces Transformer-based Chinese text classification in PyTorch.

Chinese dataset

I extracted 200,000 news headlines from THUCNews, each 20 to 30 characters long, covering 10 categories with 20,000 headlines per category. The model takes character-level input and uses pretrained embeddings (Sogou News, Word+Character, 300d).

Categories: finance, real estate, stocks, education, technology, society, politics, sports, games, entertainment.

The dataset is available in the GitHub repository Chinese-Text-Classification-Pytorch.

Dataset split:

- Training set: 180,000
- Validation set: 10,000
- Test set: 10,000

Code implementation

1. Data preprocessing (utils_fasttext.py)

First we need to turn the raw text into tensors the model can consume and split the data into training, validation and test sets. utils_fasttext.py implements the following:

- Vocabulary construction: build_vocab() counts token frequencies in the training data, filters out low-frequency tokens (min_freq=1), caps the vocabulary size (MAX_VOCAB_SIZE=10000), assigns an index to each token, and appends the special tokens UNK (unknown) and PAD (padding).
- Dataset loading: load_dataset() reads a data file, tokenizes each text, maps tokens to vocabulary indices, and pads (pad_size=32) or truncates each sequence.
- N-gram features: for every position it computes bigram and trigram hash features to give the model some sensitivity to local word order (the Transformer model below only consumes the token indices).
- Dataset splitting: build_dataset() loads the training set (config.train_path), validation set (config.dev_path) and test set (config.test_path), and returns the vocabulary together with the three datasets.
- Batch iteration: the DataSetIterate class groups samples into batches (batch_size), converts them to PyTorch tensors (torch.LongTensor), and moves them to the target device (config.device).

```python
# utils_fasttext.py
import os
import time
import pickle as pkl
from datetime import timedelta

import numpy as np
import torch
from tqdm import tqdm

MAX_VOCAB_SIZE = 10000          # vocabulary size limit
UNK, PAD = '<UNK>', '<PAD>'     # unknown token, padding token


def build_vocab(file_path, tokenizer, max_size, min_freq):
    """Count token frequencies in the training file and build a token-to-index vocabulary."""
    vocab_dic = {}
    with open(file_path, 'r', encoding='utf-8') as f:
        for line in tqdm(f):
            line = line.strip()
            if not line:
                continue
            content = line.split('\t')[0]
            for word in tokenizer(content):
                vocab_dic[word] = vocab_dic.get(word, 0) + 1
    vocab_list = sorted([_ for _ in vocab_dic.items() if _[1] >= min_freq],
                        key=lambda x: x[1], reverse=True)[:max_size]
    vocab_dic = {word_count[0]: idx for idx, word_count in enumerate(vocab_list)}
    vocab_dic.update({UNK: len(vocab_dic), PAD: len(vocab_dic) + 1})
    return vocab_dic


def build_dataset(config, ues_word):
    if ues_word:
        tokenizer = lambda x: x.split(' ')    # word level: tokens separated by spaces
    else:
        tokenizer = lambda x: [y for y in x]  # char level
    if os.path.exists(config.vocab_path):
        vocab = pkl.load(open(config.vocab_path, 'rb'))
    else:
        vocab = build_vocab(config.train_path, tokenizer, max_size=MAX_VOCAB_SIZE, min_freq=1)
        pkl.dump(vocab, open(config.vocab_path, 'wb'))
    print(f"Vocab size: {len(vocab)}\n")

    def biGramHash(sequence, t, buckets):
        t1 = sequence[t - 1] if t - 1 >= 0 else 0
        return (t1 * 14918087) % buckets

    def triGramHash(sequence, t, buckets):
        t1 = sequence[t - 1] if t - 1 >= 0 else 0
        t2 = sequence[t - 2] if t - 2 >= 0 else 0
        return (t2 * 14918087 * 18408749 + t1 * 14918087) % buckets

    def load_dataset(path, pad_size=32):
        contents = []
        with open(path, 'r', encoding='UTF-8') as f:
            for line in tqdm(f):
                line = line.strip()
                if not line:
                    continue
                content, label = line.split('\t')
                words_line = []
                token = tokenizer(content)
                seq_len = len(token)
                # pad or truncate to pad_size
                if seq_len < pad_size:
                    token.extend([PAD] * (pad_size - seq_len))
                else:
                    token = token[:pad_size]
                    seq_len = pad_size
                # map tokens to vocabulary indices
                for word in token:
                    words_line.append(vocab.get(word, vocab.get(UNK)))
                # fasttext-style n-gram hash features
                buckets = config.n_gram_vocab   # config.n_vocab is still 0 at this point
                bigram = []
                trigram = []
                for i in range(pad_size):
                    bigram.append(biGramHash(words_line, i, buckets))
                    trigram.append(triGramHash(words_line, i, buckets))
                contents.append((words_line, int(label), seq_len, bigram, trigram))
        return contents

    train = load_dataset(config.train_path, config.pad_size)
    dev = load_dataset(config.dev_path, config.pad_size)
    test = load_dataset(config.test_path, config.pad_size)
    return vocab, train, dev, test


class DataSetIterate(object):
    """Batch iterator that converts samples to tensors on the target device."""
    def __init__(self, batches, batch_size, device):
        self.batches = batches
        self.batch_size = batch_size
        self.device = device
        self.n_batches = len(batches) // batch_size
        self.residue = False   # whether the last batch is smaller than batch_size
        if len(batches) % self.batch_size != 0:
            self.residue = True
        self.index = 0

    def _to_tensor(self, datas):
        x = torch.LongTensor([_[0] for _ in datas]).to(self.device)
        y = torch.LongTensor([_[1] for _ in datas]).to(self.device)
        bigram = torch.LongTensor([_[3] for _ in datas]).to(self.device)
        trigram = torch.LongTensor([_[4] for _ in datas]).to(self.device)
        seq_len = torch.LongTensor([_[2] for _ in datas]).to(self.device)
        return (x, seq_len, bigram, trigram), y

    def __next__(self):
        # handle the final, smaller-than-batch_size batch first
        if self.residue and self.index == self.n_batches:
            batches = self.batches[self.index * self.batch_size: len(self.batches)]
            self.index += 1
            batches = self._to_tensor(batches)
            return batches
        elif self.index >= self.n_batches:
            self.index = 0
            raise StopIteration
        else:
            batches = self.batches[self.index * self.batch_size: (self.index + 1) * self.batch_size]
            self.index += 1
            batches = self._to_tensor(batches)
            return batches

    def __iter__(self):
        return self

    def __len__(self):
        if self.residue:
            return self.n_batches + 1
        else:
            return self.n_batches


def build_iterator(dataset, config):
    return DataSetIterate(dataset, config.batch_size, config.device)


def get_time_dif(start_time):
    """Return elapsed wall-clock time since start_time."""
    end_time = time.time()
    time_dif = end_time - start_time
    return timedelta(seconds=int(round(time_dif)))
```
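The data files are read line by line and split on a tab character, so each line is expected to contain a headline followed by a numeric label. A minimal illustration (the headline below is made up, not taken from the dataset):

```python
# Illustrative sketch of the expected input format.
# Each line of train.txt / dev.txt / test.txt is "<text>\t<label index>".
line = "央行宣布下调存款准备金率\t0"          # hypothetical finance headline, label 0
content, label = line.strip().split('\t')
tokenizer = lambda x: [y for y in x]          # char-level tokenization, as in build_dataset
print(tokenizer(content), int(label))
```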
2. Transformer-based text classification model

The model follows the encoder side of the Transformer: character embeddings (optionally initialized from the pretrained vectors) are summed with a fixed sinusoidal positional encoding, passed through a stack of num_encoder identical encoder blocks (multi-head scaled dot-product self-attention followed by a position-wise feed-forward network, each with a residual connection and layer normalization), and the flattened encoder output is fed to a linear layer that produces the class logits.
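For reference, the positional encoding and the attention computed in the code below are the standard formulations from "Attention Is All You Need", where d_model corresponds to `dim_model` and d_k to the per-head dimension `dim_head`:

$$
PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right), \qquad
PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right)
$$

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
$$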
```python
# Model definition
import copy

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class Config(object):
    """Configuration parameters."""
    def __init__(self, dataset, embedding):
        self.model_name = 'Transformer'
        self.train_path = dataset + '/data/train.txt'    # training set
        self.dev_path = dataset + '/data/dev.txt'        # validation set
        self.test_path = dataset + '/data/test.txt'      # test set
        self.class_list = [x.strip() for x in open(
            dataset + '/data/class.txt', encoding='utf-8').readlines()]   # class labels
        self.vocab_path = dataset + '/data/vocab.pkl'    # vocabulary
        self.save_path = dataset + '/saved_dict/' + self.model_name + '.ckpt'   # trained weights
        self.log_path = dataset + '/log/' + self.model_name
        self.embedding_pretrained = torch.tensor(
            np.load(dataset + '/data/' + embedding)["embeddings"].astype('float32')) \
            if embedding != 'random' else None            # pretrained embeddings
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

        self.dropout = 0.5                  # dropout rate
        self.require_improvement = 2000     # stop early if no validation improvement within this many batches
        self.num_classes = len(self.class_list)
        self.n_vocab = 0                    # vocabulary size, set at runtime
        self.num_epochs = 20
        self.batch_size = 128
        self.pad_size = 32                  # sequence length (pad short texts, truncate long ones)
        self.learning_rate = 5e-4
        self.embed = self.embedding_pretrained.size(1) \
            if self.embedding_pretrained is not None else 300   # embedding dimension
        self.dim_model = 300
        self.hidden = 1024
        self.last_hidden = 512
        self.num_head = 5
        self.num_encoder = 2
        self.n_gram_vocab = 8


class Positional_Encoding(nn.Module):
    """Fixed sinusoidal positional encoding."""
    def __init__(self, embed, pad_size, dropout, device):
        super().__init__()
        self.device = device
        self.pe = torch.tensor([[pos / (10000 ** (i // 2 * 2.0 / embed)) for i in range(embed)]
                                for pos in range(pad_size)])
        self.pe[:, 0::2] = np.sin(self.pe[:, 0::2])
        self.pe[:, 1::2] = np.cos(self.pe[:, 1::2])
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        out = x + self.pe.to(self.device)   # encoding is fixed, not trained
        out = self.dropout(out)
        return out


class Scaled_Dot_Product_Attention(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, Q, K, V, scale=None):
        attention = torch.matmul(Q, K.permute(0, 2, 1))
        if scale:
            attention = attention * scale
        attention = F.softmax(attention, dim=-1)
        context = torch.matmul(attention, V)
        return context


class Multi_head_attention(nn.Module):
    def __init__(self, dim_model, num_head, dropout=0.0):
        super().__init__()
        self.num_head = num_head
        assert dim_model % num_head == 0
        self.dim_head = dim_model // num_head
        self.fc_Q = nn.Linear(dim_model, num_head * self.dim_head)
        self.fc_K = nn.Linear(dim_model, num_head * self.dim_head)
        self.fc_V = nn.Linear(dim_model, num_head * self.dim_head)
        self.attention = Scaled_Dot_Product_Attention()
        self.fc = nn.Linear(num_head * self.dim_head, dim_model)
        self.dropout = nn.Dropout(dropout)
        self.layernorm = nn.LayerNorm(dim_model)

    def forward(self, x):
        batch_size = x.shape[0]
        Q = self.fc_Q(x)
        K = self.fc_K(x)
        V = self.fc_V(x)
        # split into heads: (batch * num_head, seq_len, dim_head)
        Q = Q.view(batch_size * self.num_head, -1, self.dim_head)
        K = K.view(batch_size * self.num_head, -1, self.dim_head)
        V = V.view(batch_size * self.num_head, -1, self.dim_head)
        scale = 1 / K.size(-1) ** 0.5       # 1 / sqrt(d_k)
        context = self.attention(Q, K, V, scale)
        context = context.view(batch_size, -1, self.num_head * self.dim_head)
        out = self.fc(context)
        out = self.dropout(out)
        out = out + x                        # residual connection
        out = self.layernorm(out)
        return out


class Position_wise_Feed_Forward(nn.Module):
    def __init__(self, dim_model, hidden, dropout=0.0):
        super().__init__()
        self.fc1 = nn.Linear(dim_model, hidden)
        self.fc2 = nn.Linear(hidden, dim_model)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(dim_model)

    def forward(self, x):
        out = self.fc1(x)
        out = F.relu(out)
        out = self.fc2(out)
        out = self.dropout(out)
        out = out + x                        # residual connection
        out = self.layer_norm(out)
        return out


class Encoder(nn.Module):
    def __init__(self, dim_model, num_head, hidden, dropout):
        super().__init__()
        self.attention = Multi_head_attention(dim_model, num_head, dropout)
        self.feed_forward = Position_wise_Feed_Forward(dim_model, hidden, dropout)

    def forward(self, x):
        out = self.attention(x)
        out = self.feed_forward(out)
        return out


class Model(nn.Module):
    def __init__(self, config):
        super().__init__()
        if config.embedding_pretrained is not None:
            self.embedding = nn.Embedding.from_pretrained(config.embedding_pretrained, freeze=False)
        else:
            self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        self.positional_embedding = Positional_Encoding(config.embed, config.pad_size,
                                                        config.dropout, config.device)
        self.encoder = Encoder(config.dim_model, config.num_head, config.hidden, config.dropout)
        self.encoders = nn.ModuleList([copy.deepcopy(self.encoder)
                                       for _ in range(config.num_encoder)])
        self.fc1 = nn.Linear(config.pad_size * config.dim_model, config.num_classes)

    def forward(self, x):
        out = self.embedding(x[0])           # x = (token_ids, seq_len, bigram, trigram)
        out = self.positional_embedding(out)
        for encoder in self.encoders:
            out = encoder(out)
        out = out.view(out.size(0), -1)      # flatten all positions
        # out = torch.mean(out, 1)
        out = self.fc1(out)
        return out
```
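To see the tensor shapes involved, here is a small sanity-check sketch. `DummyConfig` is not part of the project; it simply hard-codes the default values from `Config` above so the model can be built without the dataset files:

```python
# Shape sanity check for the Transformer classifier (illustrative sketch,
# assumes the classes above are in scope).
import torch

class DummyConfig:
    embedding_pretrained = None
    n_vocab = 10002          # e.g. MAX_VOCAB_SIZE + <UNK> + <PAD>
    embed = 300
    pad_size = 32
    dropout = 0.5
    device = torch.device('cpu')
    dim_model = 300
    hidden = 1024
    num_head = 5
    num_encoder = 2
    num_classes = 10

config = DummyConfig()
model = Model(config)
tokens = torch.randint(0, config.n_vocab - 1, (4, config.pad_size))   # batch of 4 sequences
seq_len = torch.tensor([config.pad_size] * 4)
logits = model((tokens, seq_len, None, None))   # the model only uses x[0]
print(logits.shape)                              # torch.Size([4, 10])
```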
3. Model training and testing (train_eval.py)

```python
# train_eval.py
import time

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn import metrics
from tensorboardX import SummaryWriter

from utils_fasttext import get_time_dif


def init_network(model, method='xavier', exclude='embedding', seed=123):
    """Initialize all parameters except the embedding layer."""
    for name, w in model.named_parameters():
        if exclude not in name:
            if 'weight' in name:
                if method == 'xavier':
                    nn.init.xavier_normal_(w)
                elif method == 'kaiming':
                    nn.init.kaiming_normal_(w)
                else:
                    nn.init.normal_(w)
            elif 'bias' in name:
                nn.init.constant_(w, 0)
            else:
                pass


def train(config, model, train_iter, dev_iter, test_iter):
    start_time = time.time()
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
    total_batch = 0                      # number of batches seen so far
    best_val_loss = float('inf')
    last_improve = 0                     # last batch at which validation loss improved
    flag = False                         # whether training was stopped early
    writer = SummaryWriter(log_dir=config.log_path + '/' + time.strftime('%m-%d_%H.%M', time.localtime()))
    for epoch in range(config.num_epochs):
        print('Epoch [{}/{}]'.format(epoch + 1, config.num_epochs))
        for i, (trains, labels) in enumerate(train_iter):
            outputs = model(trains)
            model.zero_grad()
            loss = F.cross_entropy(outputs, labels)
            loss.backward()
            optimizer.step()
            if total_batch % 100 == 0:
                # evaluate on the validation set periodically
                true = labels.data.cpu()
                predict = torch.max(outputs.data, 1)[1].cpu()
                train_acc = metrics.accuracy_score(true, predict)
                dev_acc, dev_loss = evaluate(config, model, dev_iter)
                if dev_loss < best_val_loss:
                    best_val_loss = dev_loss
                    last_improve = total_batch
                    torch.save(model.state_dict(), config.save_path)
                    improve = '*'
                else:
                    improve = ''
                time_dif = get_time_dif(start_time)
                msg = 'Iter: {0:>6}, Train Loss: {1:>5.2}, Train Acc: {2:>6.2%}, ' \
                      'Val Loss: {3:>5.2}, Val Acc: {4:>6.2%}, Time: {5} {6}'
                print(msg.format(total_batch, loss.item(), train_acc, dev_loss, dev_acc, time_dif, improve))
                writer.add_scalar('loss/train', loss.item(), total_batch)
                writer.add_scalar('loss/dev', dev_loss, total_batch)
                writer.add_scalar('acc/train', train_acc, total_batch)
                writer.add_scalar('acc/dev', dev_acc, total_batch)
                model.train()
            total_batch += 1
            if total_batch - last_improve > config.require_improvement:
                # stop if validation loss has not improved for require_improvement batches
                print('No optimization for a long time, auto-stopping...')
                flag = True
                break
        if flag:
            break
    writer.close()
    test(config, model, test_iter)


def evaluate(config, model, data_iter, test=False):
    model.eval()
    loss_total = 0
    predict_all = np.array([], dtype=int)
    labels_all = np.array([], dtype=int)
    with torch.no_grad():
        for texts, labels in data_iter:
            outputs = model(texts)
            loss = F.cross_entropy(outputs, labels)
            loss_total += loss
            predict = torch.max(outputs.data, 1)[1].cpu().numpy()
            labels = labels.data.cpu().numpy()
            predict_all = np.append(predict_all, predict)
            labels_all = np.append(labels_all, labels)
    acc = metrics.accuracy_score(labels_all, predict_all)
    if test:
        report = metrics.classification_report(labels_all, predict_all,
                                               target_names=config.class_list, digits=4)
        confusion = metrics.confusion_matrix(labels_all, predict_all)
        return acc, loss_total / len(data_iter), report, confusion
    return acc, loss_total / len(data_iter)


def test(config, model, data_iter):
    model.load_state_dict(torch.load(config.save_path))
    model.eval()
    start_time = time.time()
    test_acc, test_loss, test_report, test_confusion = evaluate(config, model, data_iter, test=True)
    msg = 'Test Loss: {0:>5.2}, Test Acc: {1:>6.2%}'
    print(msg.format(test_loss, test_acc))
    print('Precision, Recall and F1-Score...')
    print(test_report)
    print('Confusion Matrix...')
    print(test_confusion)
    time_dif = get_time_dif(start_time)
    print('Time usage:', time_dif)
```
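Once a checkpoint has been saved, a single headline could be classified along the following lines. This `predict_title` helper is not part of the original project; it is only a sketch that reuses the char-level preprocessing from utils_fasttext.py and assumes `config`, `vocab`, and a trained model are already available:

```python
# Single-headline inference sketch (illustrative only).
import torch
from utils_fasttext import UNK, PAD

def predict_title(title, config, vocab, model):
    tokens = [y for y in title][:config.pad_size]                 # char-level, truncate
    tokens += [PAD] * (config.pad_size - len(tokens))             # pad to pad_size
    ids = torch.LongTensor([[vocab.get(t, vocab.get(UNK)) for t in tokens]]).to(config.device)
    model.eval()
    with torch.no_grad():
        logits = model((ids, None, None, None))
    return config.class_list[int(torch.argmax(logits, dim=1))]

# model = Model(config).to(config.device)
# model.load_state_dict(torch.load(config.save_path))
# print(predict_title("央行宣布下调存款准备金率", config, vocab, model))   # hypothetical headline
```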
4. Running the code

For completeness, the full model file is reproduced below in the form used by the original repository (models/Transformer.py). A minimal driver script that ties the data utilities, the model, and the training loop together is sketched after the listing.

```python
# models/Transformer.py (reference implementation from the original repository)
import copy

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class Config(object):
    """Configuration parameters."""
    def __init__(self, dataset, embedding):
        self.model_name = 'Transformer'
        self.train_path = dataset + '/data/train.txt'    # training set
        self.dev_path = dataset + '/data/dev.txt'        # validation set
        self.test_path = dataset + '/data/test.txt'      # test set
        self.class_list = [x.strip() for x in open(
            dataset + '/data/class.txt', encoding='utf-8').readlines()]   # class names
        self.vocab_path = dataset + '/data/vocab.pkl'    # vocabulary
        self.save_path = dataset + '/saved_dict/' + self.model_name + '.ckpt'   # trained weights
        self.log_path = dataset + '/log/' + self.model_name
        self.embedding_pretrained = torch.tensor(
            np.load(dataset + '/data/' + embedding)["embeddings"].astype('float32')) \
            if embedding != 'random' else None            # pretrained embeddings
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

        self.dropout = 0.5                  # dropout rate
        self.require_improvement = 2000     # stop early if no validation improvement within this many batches
        self.num_classes = len(self.class_list)
        self.n_vocab = 0                    # vocabulary size, set at runtime
        self.num_epochs = 20
        self.batch_size = 128
        self.pad_size = 32                  # sequence length (pad short texts, truncate long ones)
        self.learning_rate = 5e-4
        self.embed = self.embedding_pretrained.size(1) \
            if self.embedding_pretrained is not None else 300   # embedding dimension
        self.dim_model = 300
        self.hidden = 1024
        self.last_hidden = 512
        self.num_head = 5
        self.num_encoder = 2
        self.n_gram_vocab = 8


'''Attention Is All You Need'''


class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        if config.embedding_pretrained is not None:
            self.embedding = nn.Embedding.from_pretrained(config.embedding_pretrained, freeze=False)
        else:
            self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)

        self.postion_embedding = Positional_Encoding(config.embed, config.pad_size,
                                                     config.dropout, config.device)
        self.encoder = Encoder(config.dim_model, config.num_head, config.hidden, config.dropout)
        self.encoders = nn.ModuleList([
            copy.deepcopy(self.encoder)
            # Encoder(config.dim_model, config.num_head, config.hidden, config.dropout)
            for _ in range(config.num_encoder)])

        self.fc1 = nn.Linear(config.pad_size * config.dim_model, config.num_classes)
        # self.fc2 = nn.Linear(config.last_hidden, config.num_classes)
        # self.fc1 = nn.Linear(config.dim_model, config.num_classes)

    def forward(self, x):
        out = self.embedding(x[0])
        out = self.postion_embedding(out)
        for encoder in self.encoders:
            out = encoder(out)
        out = out.view(out.size(0), -1)
        # out = torch.mean(out, 1)
        out = self.fc1(out)
        return out


class Encoder(nn.Module):
    def __init__(self, dim_model, num_head, hidden, dropout):
        super(Encoder, self).__init__()
        self.attention = Multi_Head_Attention(dim_model, num_head, dropout)
        self.feed_forward = Position_wise_Feed_Forward(dim_model, hidden, dropout)

    def forward(self, x):
        out = self.attention(x)
        out = self.feed_forward(out)
        return out


class Positional_Encoding(nn.Module):
    def __init__(self, embed, pad_size, dropout, device):
        super(Positional_Encoding, self).__init__()
        self.device = device
        self.pe = torch.tensor([[pos / (10000.0 ** (i // 2 * 2.0 / embed)) for i in range(embed)]
                                for pos in range(pad_size)])
        self.pe[:, 0::2] = np.sin(self.pe[:, 0::2])
        self.pe[:, 1::2] = np.cos(self.pe[:, 1::2])
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        out = x + nn.Parameter(self.pe, requires_grad=False).to(self.device)
        out = self.dropout(out)
        return out


class Scaled_Dot_Product_Attention(nn.Module):
    '''Scaled Dot-Product Attention'''
    def __init__(self):
        super(Scaled_Dot_Product_Attention, self).__init__()

    def forward(self, Q, K, V, scale=None):
        '''
        Args:
            Q: [batch_size, len_Q, dim_Q]
            K: [batch_size, len_K, dim_K]
            V: [batch_size, len_V, dim_V]
            scale: scaling factor (1 / sqrt(dim_K) in the paper)
        Return:
            the output of self-attention
        '''
        attention = torch.matmul(Q, K.permute(0, 2, 1))
        if scale:
            attention = attention * scale
        # if mask:  # TODO change this
        #     attention = attention.masked_fill_(mask == 0, -1e9)
        attention = F.softmax(attention, dim=-1)
        context = torch.matmul(attention, V)
        return context


class Multi_Head_Attention(nn.Module):
    def __init__(self, dim_model, num_head, dropout=0.0):
        super(Multi_Head_Attention, self).__init__()
        self.num_head = num_head
        assert dim_model % num_head == 0
        self.dim_head = dim_model // self.num_head
        self.fc_Q = nn.Linear(dim_model, num_head * self.dim_head)
        self.fc_K = nn.Linear(dim_model, num_head * self.dim_head)
        self.fc_V = nn.Linear(dim_model, num_head * self.dim_head)
        self.attention = Scaled_Dot_Product_Attention()
        self.fc = nn.Linear(num_head * self.dim_head, dim_model)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(dim_model)

    def forward(self, x):
        batch_size = x.size(0)
        Q = self.fc_Q(x)
        K = self.fc_K(x)
        V = self.fc_V(x)
        Q = Q.view(batch_size * self.num_head, -1, self.dim_head)
        K = K.view(batch_size * self.num_head, -1, self.dim_head)
        V = V.view(batch_size * self.num_head, -1, self.dim_head)
        # if mask:  # TODO
        #     mask = mask.repeat(self.num_head, 1, 1)  # TODO change this
        scale = K.size(-1) ** -0.5          # scaling factor
        context = self.attention(Q, K, V, scale)
        context = context.view(batch_size, -1, self.dim_head * self.num_head)
        out = self.fc(context)
        out = self.dropout(out)
        out = out + x                        # residual connection
        out = self.layer_norm(out)
        return out


class Position_wise_Feed_Forward(nn.Module):
    def __init__(self, dim_model, hidden, dropout=0.0):
        super(Position_wise_Feed_Forward, self).__init__()
        self.fc1 = nn.Linear(dim_model, hidden)
        self.fc2 = nn.Linear(hidden, dim_model)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(dim_model)

    def forward(self, x):
        out = self.fc1(x)
        out = F.relu(out)
        out = self.fc2(out)
        out = self.dropout(out)
        out = out + x                        # residual connection
        out = self.layer_norm(out)
        return out
```
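A driver script is not listed above, so the following is only a sketch of how the pieces could be wired together; the dataset directory name, the embedding file name, and the import paths are assumptions, not part of the original post:

```python
# run.py -- minimal driver sketch (assumed wiring; adapt paths and module names as needed).
import time

import numpy as np
import torch

from utils_fasttext import build_dataset, build_iterator, get_time_dif
from train_eval import train
from models.Transformer import Config, Model   # module path is an assumption

if __name__ == '__main__':
    dataset = 'THUCNews'                        # dataset directory (assumed name)
    embedding = 'embedding_SougouNews.npz'      # pretrained Sogou embeddings (file name is an assumption)
    config = Config(dataset, embedding)

    # fix random seeds for reproducibility
    np.random.seed(1)
    torch.manual_seed(1)
    torch.cuda.manual_seed_all(1)

    start_time = time.time()
    vocab, train_data, dev_data, test_data = build_dataset(config, ues_word=False)   # char-level input
    train_iter = build_iterator(train_data, config)
    dev_iter = build_iterator(dev_data, config)
    test_iter = build_iterator(test_data, config)
    config.n_vocab = len(vocab)                 # vocabulary size is only known at runtime
    print('Data loading time:', get_time_dif(start_time))

    model = Model(config).to(config.device)
    # Note: init_network() from train_eval.py is skipped here, since xavier initialization
    # cannot be applied to the 1-D LayerNorm weights inside the Transformer blocks.
    train(config, model, train_iter, dev_iter, test_iter)
```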
