Text Classification with a Bidirectional LSTM Model


6.4.1 Data Processing

The IMDB movie review dataset is a classic binary classification dataset of movie reviews. IMDB divides reviews into positive and negative ones by rating: a review with a rating ≥ 7 is considered positive, and a review with a rating ≤ 4 is considered negative. The dataset consists of a training set and a test set of 25,000 examples each; every example is a real user review of a movie together with the reviewer's sentiment toward it. The directory structure is as follows:

  ├── train/
      ├── neg                 # negative reviews
      ├── pos                 # positive reviews
      ├── unsup               # unlabeled reviews
  ├── test/
      ├── neg                 # negative reviews
      ├── pos                 # positive reviews
6.4.1.1 Data Loading

The original training and test sets contain 25,000 examples each. In this section, the original test set is split evenly into a validation set and a test set, both stored under the ./dataset directory. The data can be loaded into memory with the following code:

import os
# Load the dataset
def load_imdb_data(path):
    assert os.path.exists(path)
    trainset, devset, testset = [], [], []
    with open(os.path.join(path, "train.txt"), "r", encoding="utf-8") as fr:
        for line in fr:
            sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
            trainset.append((sentence, sentence_label))

    with open(os.path.join(path, "dev.txt"), "r", encoding="utf-8") as fr:
        for line in fr:
            sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
            devset.append((sentence, sentence_label))

    with open(os.path.join(path, "test.txt"), "r", encoding="utf-8") as fr:
        for line in fr:
            sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
            testset.append((sentence, sentence_label))

    return trainset, devset, testset

# Load the IMDB dataset
train_data, dev_data, test_data = load_imdb_data("./dataset/")
# Print one example to check the loaded format
print(train_data[4])
("the premise of an african-american female scrooge in the modern, struggling city was inspired, but nothing else in this film is. here, ms. scrooge is a miserly banker who takes advantage of the employees and customers in the largely poor and black neighborhood it inhabits. there is no doubt about the good intentions of the people involved. part of the problem is that story's roots don't translate well into the urban setting of this film, and the script fails to make the update work. also, the constant message about sharing and giving is repeated so endlessly, the audience becomes tired of it well before the movie reaches its familiar end. this is a message film that doesn't know when to quit. in the title role, the talented cicely tyson gives an overly uptight performance, and at times lines are difficult to understand. the charles dickens novel has been adapted so many times, it's a struggle to adapt it in a way that makes it fresh and relevant, in spite of its very relevant message.", '0')

Final-exam review is keeping me busy, so here is the complete code in one go:

import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset

from functools import partial
import time
import random
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def load_vocab(path):
    assert os.path.exists(path)
    words = []
    with open(path, "r", encoding="utf-8") as f:
        words = f.readlines()
        words = [word.strip() for word in words if word.strip()]
    word2id = dict(zip(words, range(len(words))))
    return word2id

def load_imdb_data(path):
    assert os.path.exists(path)
    trainset, devset, testset = [], [], []
    with open(os.path.join(path, "train.txt"), "r", encoding='utf-8') as fr:
        for line in fr:
            sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
            trainset.append((sentence, sentence_label))

    with open(os.path.join(path, "dev.txt"), "r", encoding='utf-8') as fr:
        for line in fr:
            sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
            devset.append((sentence, sentence_label))

    with open(os.path.join(path, "test.txt"), "r", encoding='utf-8') as fr:
        for line in fr:
            sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
            testset.append((sentence, sentence_label))

    return trainset, devset, testset


# Load the IMDB dataset
train_data, dev_data, test_data = load_imdb_data("./dataset/")
# Print one example to check the loaded format
print(train_data[4])


class IMDBDataset(Dataset):
    def __init__(self, examples, word2id_dict):
        super(IMDBDataset, self).__init__()
        # Vocabulary used to map each word to an integer index
        self.word2id_dict = word2id_dict
        # Dataset after conversion to ids
        self.examples = self.words_to_id(examples)

    def words_to_id(self, examples):
        tmp_examples = []
        for idx, example in enumerate(examples):
            seq, label = example
            # Map each word to its vocabulary id; out-of-vocabulary words fall back to the id of [UNK]
            seq = [self.word2id_dict.get(word, self.word2id_dict['[UNK]']) for word in seq.split(" ")]
            label = int(label)
            tmp_examples.append([seq, label])
        return tmp_examples

    def __getitem__(self, idx):
        seq, label = self.examples[idx]
        return seq, label

    def __len__(self):
        return len(self.examples)


# Load the vocabulary
word2id_dict = load_vocab("./dataset/vocab.txt")

# Instantiate the Datasets
train_set = IMDBDataset(train_data, word2id_dict)
dev_set = IMDBDataset(dev_data, word2id_dict)
test_set = IMDBDataset(test_data, word2id_dict)

print('Number of training examples:', len(train_set))
print('Sample example:', train_set[4])
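
load_vocab assumes that ./dataset/vocab.txt already exists with one token per line, including the special tokens [PAD] and [UNK]. A minimal sketch of how such a file might be built from the training data (the frequency cutoff and helper name are assumptions):

from collections import Counter

# Sketch (assumption: vocab built from the training text with a frequency cutoff).
def build_vocab_file(train_examples, path, min_freq=5):
    counter = Counter()
    for sentence, _ in train_examples:
        counter.update(sentence.split(" "))
    words = [w for w, c in counter.most_common() if c >= min_freq]
    with open(path, "w", encoding="utf-8") as f:
        # [PAD] first so that padding_idx=0 in the embedding layer matches it
        f.write("\n".join(["[PAD]", "[UNK]"] + words))

# build_vocab_file(train_data, "./dataset/vocab.txt")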


def collate_fn(batch_data, pad_val=0, max_seq_len=256):
    seqs, seq_lens, labels = [], [], []
    max_len = 0
    for example in batch_data:
        seq, label = example
        # Truncate the sequence to max_seq_len and store it
        seq = seq[:max_seq_len]
        seqs.append(seq)
        seq_lens.append(len(seq))
        labels.append(label)
        # Track the longest sequence in the batch
        max_len = max(max_len, len(seq))
    # Pad every sequence up to the batch's maximum length
    for i in range(len(seqs)):
        seqs[i] = seqs[i] + [pad_val] * (max_len - len(seqs[i]))

    # Keep seq_lens on the CPU: pack_padded_sequence expects CPU lengths
    return (torch.tensor(seqs).to(device), torch.tensor(seq_lens)), torch.tensor(labels).to(device)


max_seq_len = 5
batch_data = [[[1, 2, 3, 4, 5, 6], 1], [[2, 4, 6], 0]]
(seqs, seq_lens), labels = collate_fn(batch_data, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
print("seqs: ", seqs)
print("seq_lens: ", seq_lens)
print("labels: ", labels)

max_seq_len = 256
batch_size = 128
collate_fn = partial(collate_fn, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
                                           shuffle=True, drop_last=False, collate_fn=collate_fn)
dev_loader = torch.utils.data.DataLoader(dev_set, batch_size=batch_size,
                                         shuffle=False, drop_last=False, collate_fn=collate_fn)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
                                          shuffle=False, drop_last=False, collate_fn=collate_fn)
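
Fetching one batch from the loader is a quick way to confirm the shapes it produces; note that the second dimension of the sequence tensor varies with the longest sequence in each batch:

# Peek at one batch to confirm shapes
(batch_seqs, batch_lens), batch_labels = next(iter(train_loader))
print(batch_seqs.shape, batch_lens.shape, batch_labels.shape)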


class AveragePooling(nn.Module):
    def __init__(self):
        super(AveragePooling, self).__init__()

    def forward(self, sequence_output, sequence_length):
        # sequence_length: [batch_size] tensor of valid lengths
        sequence_length = sequence_length.unsqueeze(-1).to(torch.float32)
        # Build a mask from sequence_length so that padded positions are zeroed out
        max_len = sequence_output.shape[1]
        device = sequence_output.device
        mask = torch.arange(max_len, device=device) < sequence_length.to(device)
        mask = mask.to(torch.float32).unsqueeze(-1)
        # Mask out the padded part of the sequence
        sequence_output = torch.multiply(sequence_output, mask)
        # Average the vectors over the valid time steps
        batch_mean_hidden = torch.divide(torch.sum(sequence_output, dim=1), sequence_length.to(device))
        return batch_mean_hidden
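
A quick sanity check of the masked mean on a toy batch (the shapes and values here are made up for illustration): with lengths [2, 1], the positions beyond each length should not contribute to the average.

# Sanity check (illustrative values): batch of 2 sequences, max length 3, hidden size 2
pool = AveragePooling()
toy_output = torch.tensor([[[1.0, 1.0], [3.0, 3.0], [9.0, 9.0]],
                           [[4.0, 4.0], [9.0, 9.0], [9.0, 9.0]]])
toy_lens = torch.tensor([2, 1])
print(pool(toy_output, toy_lens))
# Expected: [[2., 2.], [4., 4.]] -- the padded 9s are masked out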


class Model_BiLSTM_FC(nn.Module):
    def __init__(self, num_embeddings, input_size, hidden_size, num_classes=2):
        super(Model_BiLSTM_FC, self).__init__()
        # Vocabulary size
        self.num_embeddings = num_embeddings
        # Dimension of the word vectors
        self.input_size = input_size
        # Number of LSTM hidden units
        self.hidden_size = hidden_size
        # Number of sentiment classes
        self.num_classes = num_classes
        # Embedding layer
        self.embedding_layer = nn.Embedding(num_embeddings, input_size, padding_idx=0)
        # Bidirectional LSTM layer
        self.lstm_layer = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
        # Pooling layer
        self.average_layer = AveragePooling()
        # Output layer (the LSTM is bidirectional, hence hidden_size * 2)
        self.output_layer = nn.Linear(hidden_size * 2, num_classes)

    def forward(self, inputs):
        # Unpack the model input into token ids and sequence lengths
        input_ids, sequence_length = inputs
        # Look up the word vectors
        inputs_emb = self.embedding_layer(input_ids)
        # Pack the padded batch so the LSTM skips the padding
        packed_input = nn.utils.rnn.pack_padded_sequence(inputs_emb, sequence_length.cpu(), batch_first=True,
                                                         enforce_sorted=False)
        # Run the LSTM over the packed sequence
        packed_output, _ = self.lstm_layer(packed_input)
        # Unpack the output back to a padded tensor
        sequence_output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
        # Aggregate the per-step outputs into one vector per sequence
        batch_mean_hidden = self.average_layer(sequence_output, sequence_length)
        # Produce the classification logits
        logits = self.output_layer(batch_mean_hidden)
        return logits
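
A forward pass on dummy inputs confirms the expected output shape; the sizes below are small values chosen only for illustration:

# Shape check with dummy data (illustrative sizes): vocab 10, embedding 8, hidden 6
toy_model = Model_BiLSTM_FC(num_embeddings=10, input_size=8, hidden_size=6)
toy_ids = torch.randint(1, 10, (2, 5))      # batch of 2, length 5
toy_lens = torch.tensor([5, 3])
print(toy_model((toy_ids, toy_lens)).shape)  # torch.Size([2, 2])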


import torch
import matplotlib.pyplot as plt


class RunnerV3(object):
    def __init__(self, model, optimizer, loss_fn, metric, **kwargs):
        self.model = model
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.metric = metric  # used only for computing the evaluation metric

        # Evaluation scores recorded during training
        self.dev_scores = []

        # Losses recorded during training
        self.train_epoch_losses = []  # one loss recorded per epoch
        self.train_step_losses = []  # one loss recorded per step
        self.dev_losses = []

        # Best score seen so far
        self.best_score = 0

    def train(self, train_loader, dev_loader=None, **kwargs):
        # Switch the model to training mode
        self.model.train()

        # Number of training epochs; defaults to 0 if not provided
        num_epochs = kwargs.get("num_epochs", 0)
        # Logging frequency; defaults to 100 if not provided
        log_steps = kwargs.get("log_steps", 100)
        # Evaluation frequency
        eval_steps = kwargs.get("eval_steps", 0)

        # Model save path; defaults to "best_model.pdparams" if not provided
        save_path = kwargs.get("save_path", "best_model.pdparams")

        custom_print_log = kwargs.get("custom_print_log", None)

        # Total number of training steps
        num_training_steps = num_epochs * len(train_loader)

        if eval_steps:
            if self.metric is None:
                raise RuntimeError('Error: Metric can not be None!')
            if dev_loader is None:
                raise RuntimeError('Error: dev_loader can not be None!')

        # Number of steps run so far
        global_step = 0
        total_acces = []
        total_losses = []
        Iters = []

        # Train for num_epochs epochs
        for epoch in range(num_epochs):
            # Accumulates the training loss for this epoch
            total_loss = 0

            for step, data in enumerate(train_loader):
                X, y = data
                # Forward pass: compute the logits
                logits = self.model(X)

                # Compute the batch accuracy
                probs = torch.softmax(logits, dim=1)
                pred = torch.argmax(probs, dim=1)
                correct = (pred == y).sum().item()
                total = y.size(0)
                acc = correct / total
                total_acces.append(acc)

                loss = self.loss_fn(logits, y)  # reduces to the mean by default
                total_loss += loss
                total_losses.append(loss.item())
                Iters.append(global_step)

                # Record the loss of every training step
                self.train_step_losses.append((global_step, loss.item()))

                if log_steps and global_step % log_steps == 0:
                    print(
                        f"[Train] epoch: {epoch}/{num_epochs}, step: {global_step}/{num_training_steps}, loss: {loss.item():.5f}")
                # Backpropagate to compute the gradient of every parameter
                loss.backward()

                if custom_print_log:
                    custom_print_log(self)

                # Mini-batch gradient descent update
                self.optimizer.step()
                # Reset the gradients
                self.optimizer.zero_grad()

                # Decide whether to evaluate
                if eval_steps > 0 and global_step != 0 and \
                        (global_step % eval_steps == 0 or global_step == (num_training_steps - 1)):

                    dev_score, dev_loss = self.evaluate(dev_loader, global_step=global_step)
                    print(f"[Evaluate]  dev score: {dev_score:.5f}, dev loss: {dev_loss:.5f}")

                    # Switch the model back to training mode
                    self.model.train()

                    # If the current score is the best so far, save the model
                    if dev_score > self.best_score:
                        self.save_model(save_path)
                        print(
                            f"[Evaluate] best accuracy performance has been updated: {self.best_score:.5f} --> {dev_score:.5f}")
                        self.best_score = dev_score

                global_step += 1

            # Average training loss of this epoch
            trn_loss = (total_loss / len(train_loader)).item()
            # Record the epoch-level training loss
            self.train_epoch_losses.append(trn_loss)

        draw_process("training acc", "green", Iters, total_acces, "training acc")
        print("total_acc:")
        print(total_acces)
        print("total_loss:")
        print(total_losses)

        print("[Train] Training done!")

    # Evaluation: use torch.no_grad() so gradients are neither computed nor stored
    @torch.no_grad()
    def evaluate(self, dev_loader, **kwargs):
        assert self.metric is not None

        # Switch the model to evaluation mode
        self.model.eval()

        global_step = kwargs.get("global_step", -1)

        # Accumulates the loss on the dev set
        total_loss = 0

        # Reset the metric
        self.metric.reset()

        # Iterate over every batch of the dev set
        for batch_id, data in enumerate(dev_loader):
            X, y = data

            # Compute the model output
            logits = self.model(X)

            # Compute the loss
            loss = self.loss_fn(logits, y).item()
            # Accumulate the loss
            total_loss += loss

            # Accumulate the metric
            self.metric.update(logits, y)

        dev_loss = (total_loss / len(dev_loader))
        self.dev_losses.append((global_step, dev_loss))

        dev_score = self.metric.accumulate()
        self.dev_scores.append(dev_score)

        return dev_score, dev_loss

    # Prediction: use torch.no_grad() so gradients are neither computed nor stored
    @torch.no_grad()
    def predict(self, x, **kwargs):
        # Switch the model to evaluation mode
        self.model.eval()
        # Run the forward pass to get the predictions
        logits = self.model(x)
        return logits

    def save_model(self, save_path):
        torch.save(self.model.state_dict(), save_path)

    def load_model(self, model_path):
        model_state_dict = torch.load(model_path)
        self.model.load_state_dict(model_state_dict)


class Accuracy():
    def __init__(self, is_logist=True):
        # Number of correctly predicted samples
        self.num_correct = 0
        # Total number of samples seen
        self.num_count = 0

        self.is_logist = is_logist

    def update(self, outputs, labels):

        # shape[1] == 1 means binary classification with a single output;
        # shape[1] > 1 means multi-class classification
        if outputs.shape[1] == 1:  # binary classification
            outputs = torch.squeeze(outputs, dim=-1)
            if self.is_logist:
                # For logits, predict class 1 when the logit is >= 0
                preds = (outputs >= 0).to(torch.float32)
            else:
                # Otherwise treat the output as a probability: class 1 when it is >= 0.5
                preds = (outputs >= 0.5).to(torch.float32)
        else:
            # Multi-class: take the index of the largest output via torch.argmax
            preds = torch.argmax(outputs, dim=1)

        # Count the correctly predicted samples in this batch
        labels = torch.squeeze(labels, dim=-1)
        batch_correct = (preds == labels).to(torch.float32).sum().item()
        batch_count = len(labels)

        # Update num_correct and num_count
        self.num_correct += batch_correct
        self.num_count += batch_count

    def accumulate(self):
        # Compute the overall metric from the accumulated counts
        if self.num_count == 0:
            return 0
        return self.num_correct / self.num_count

    def reset(self):
        # Reset the correct and total counts
        self.num_correct = 0
        self.num_count = 0

    def name(self):
        return "Accuracy"


def draw_process(title, color, iters, data, label):
    plt.title(title, fontsize=24)
    plt.xlabel("iter", fontsize=20)
    plt.ylabel(label, fontsize=20)
    plt.plot(iters, data, color=color, label=label)
    plt.legend()
    plt.grid()
    plt.show()


np.random.seed(0)
random.seed(0)
torch.manual_seed(0)

# Number of training epochs
num_epochs = 3
# Learning rate
learning_rate = 0.001
# Number of embeddings, i.e. the vocabulary size
num_embeddings = len(word2id_dict)
# Dimension of the embedding vectors
input_size = 256
# Dimension of the LSTM hidden state
hidden_size = 256

# Instantiate the model
model = Model_BiLSTM_FC(num_embeddings, input_size, hidden_size).to(device)
# Optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999))
# Loss function
loss_fn = nn.CrossEntropyLoss()
# Evaluation metric
metric = Accuracy()
# Instantiate the Runner
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Train the model
start_time = time.time()
runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=10, log_steps=10,
             save_path="./checkpoints/best.pdparams")
end_time = time.time()
print("time: ", (end_time - start_time))

from nndl4.tools import plot_training_loss_acc

# Figure file name
fig_name = "./images/6.16.pdf"
# sample_step: sampling interval of the training loss, i.e. plot one point every sample_step points
# loss_legend_loc: legend position of the loss plot
# acc_legend_loc: legend position of the acc plot
plot_training_loss_acc(runner, fig_name, fig_size=(16, 6), sample_step=10, loss_legend_loc="lower left",
                       acc_legend_loc="lower right")
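
plot_training_loss_acc comes from the book's companion nndl4 package. If that package is unavailable, a rough stand-in can be built from the quantities RunnerV3 already records; the figure layout below is an assumption:

# Fallback sketch if nndl4 is unavailable: plot the recorded step losses
# and dev scores that RunnerV3 stores during training.
def plot_runner(runner, fig_name, sample_step=10):
    steps, losses = zip(*runner.train_step_losses[::sample_step])
    plt.figure(figsize=(16, 6))
    plt.subplot(1, 2, 1)
    plt.plot(steps, losses, label="train loss")
    plt.legend(loc="lower left")
    plt.subplot(1, 2, 2)
    plt.plot(runner.dev_scores, label="dev accuracy")
    plt.legend(loc="lower right")
    plt.savefig(fig_name)
    plt.show()

# plot_runner(runner, fig_name, sample_step=10)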

model_path = "./checkpoints/best.pdparams"
runner.load_model(model_path)
accuracy, _ = runner.evaluate(test_loader)
print(f"Evaluate on test set, Accuracy: {accuracy:.5f}")

id2label = {0: "negative sentiment", 1: "positive sentiment"}
text = "this movie is so great. I watched it three times already"
# Preprocess a single sentence
sentence = text.split(" ")
words = [word2id_dict[word] if word in word2id_dict else word2id_dict['[UNK]'] for word in sentence]
words = words[:max_seq_len]
sequence_length = torch.tensor([len(words)], dtype=torch.int64)
words = torch.tensor(words, dtype=torch.int64).unsqueeze(0)
# Run the model to get a prediction
logits = runner.predict((words.to(device), sequence_length.to(device)))
max_label_id = torch.argmax(logits, dim=-1).cpu().numpy()[0]
pred_label = id2label[max_label_id]
print("Label: ", pred_label)

Results:

Summary:

1. When I first ran the code, I hit an error I couldn't understand. After some searching, I found that the data files needed to be opened with an explicit encoding, so I changed the file-reading code to:

with open(os.path.join(path, "train.txt"), "r", encoding="utf-8") as fr:

which solved the problem. At that point I only knew that UTF-8 is a character encoding, so I read up on it afterwards.
