PyTorch Lightning in Practice: Training on the MNIST Dataset


MNIST with PyTorch Lightning

Train on MNIST with PyTorch Lightning, and examine how gradient norms, learning rates, and optimizers affect training.

pip show lightning
Version: 2.5.1.post0

Fast dev run

DATASET_DIR="/repos/datasets"
python mnist_pl.py --output_grad_norm --fast_dev_run --dataset_dir $DATASET_DIR
Seed set to 1234
Using default `ModelCheckpoint`. Consider installing `litmodels` package to enable `LitModelCheckpoint` for automatic upload to the Lightning model registry.
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
Running in `fast_dev_run` mode: will run the requested loop using 1 batch(es). Logging and checkpointing is suppressed.
You are using a CUDA device ('NVIDIA GeForce RTX 3060 Ti') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name           | Type               | Params | Mode 
--------------------------------------------------------------
0 | model          | ResNet             | 11.2 M | train
1 | criterion      | CrossEntropyLoss   | 0      | train
2 | train_accuracy | MulticlassAccuracy | 0      | train
3 | val_accuracy   | MulticlassAccuracy | 0      | train
4 | test_accuracy  | MulticlassAccuracy | 0      | train
--------------------------------------------------------------
11.2 M    Trainable params
0         Non-trainable params
11.2 M    Total params
44.701    Total estimated model params size (MB)
72        Modules in train mode
0         Modules in eval mode
Epoch 0: 100%|██████████████| 1/1 [00:00<00:00,  1.02it/s, train_loss_step=2.650, val_loss=2.500, val_acc=0.0781, train_loss_epoch=2.650, train_acc_epoch=0.0938]`Trainer.fit` stopped: `max_steps=1` reached.                                                                                                                    
Epoch 0: 100%|██████████████| 1/1 [00:00<00:00,  1.02it/s, train_loss_step=2.650, val_loss=2.500, val_acc=0.0781, train_loss_epoch=2.650, train_acc_epoch=0.0938]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing DataLoader 0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 70.41it/s]
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
       Test metric             DataLoader 0
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
        test_acc                 0.015625
        test_loss           2.5446341037750244
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
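
The Tensor Core warning in the log can be addressed by lowering the float32 matmul precision before the Trainer is created. A minimal sketch, placed once near the top of mnist_pl.py ("high" trades a little fp32 precision for throughput):

import torch

# Enable Tensor Core matmuls, as suggested by the warning above.
torch.set_float32_matmul_precision("high")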

Track gradients

python mnist_pl.py --output_grad_norm --max_epochs 1 --dataset_dir $DATASET_DIR

(figure: gradient norm per training step)
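
The gradient norms land in a CSV written by CustomCSVLogger (see the Code section below). A minimal plotting sketch, assuming the default learning rate (0.001), default optimizer (Adam), and default log directory:

import pandas as pd
import matplotlib.pyplot as plt

# Plot the global L2 gradient norm recorded at each training step.
df = pd.read_csv("lightning_logs/csv_logs/0.001_Adam_grad_norm.csv")
plt.plot(df["step"], df["grad_norm"])
plt.xlabel("step")
plt.ylabel("L2 gradient norm")
plt.show()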

Different learning rates

python mnist_pl.py --learning_rate 0.0001 --max_epochs 1 --dataset_dir $DATASET_DIR
python mnist_pl.py --learning_rate 0.001 --max_epochs 1 --dataset_dir $DATASET_DIR
python mnist_pl.py --learning_rate 0.01 --max_epochs 1 --dataset_dir $DATASET_DIR

(figures: training curves for the three learning rates)
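
To reproduce a comparison figure, the per-run CSVs can be overlaid. A sketch, assuming the default optimizer (Adam) and default log directory; file names follow the f"{lr}_{optimizer}_train_metrics.csv" pattern used by CustomCSVLogger below:

import pandas as pd
import matplotlib.pyplot as plt

# Overlay the per-step training loss of the three learning-rate runs.
for lr in ("0.0001", "0.001", "0.01"):
    df = pd.read_csv(f"lightning_logs/csv_logs/{lr}_Adam_train_metrics.csv")
    plt.plot(df["step"], df["train_loss"], label=f"lr={lr}")
plt.xlabel("step")
plt.ylabel("train_loss")
plt.legend()
plt.show()

The same loop, with the optimizer name substituted for the learning rate, produces the comparison for the next section.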

Different optimizers

python mnist_pl.py --optimizer "Adam" --max_epochs 1 --dataset_dir $DATASET_DIR
python mnist_pl.py --optimizer "RMSProp" --max_epochs 1 --dataset_dir $DATASET_DIR
python mnist_pl.py --optimizer "AdaGrad" --max_epochs 1 --dataset_dir $DATASET_DIR

(figures: training curves for the three optimizers)
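
Beyond the curves, the final test metrics of each run can be compared directly. A sketch, assuming the default learning rate (0.001) and default log directory:

import pandas as pd

# Print the last recorded test_loss/test_acc of each optimizer run.
for name in ("Adam", "RMSProp", "AdaGrad"):
    df = pd.read_csv(f"lightning_logs/csv_logs/0.001_{name}_test_eval.csv")
    print(name, df.iloc[-1].to_dict())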

Code

import argparse
import csv
import os
from typing import Any, Optional

import lightning as pl
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from lightning.pytorch.callbacks import Callback
from torch.utils.data import DataLoader, random_split
from torchmetrics import Accuracy
from torchvision import models


class MNISTDataModule(pl.LightningDataModule):
    def __init__(
        self, data_dir: str = "./data", batch_size: int = 64, num_workers: int = 4
    ):
        super().__init__()
        self.data_dir = data_dir
        self.batch_size = batch_size
        self.num_workers = num_workers
        self.transform = transforms.Compose(
            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
        )
        self.mnist_train = None
        self.mnist_val = None
        self.mnist_test = None

    def prepare_data(self):
        datasets.MNIST(self.data_dir, train=True, download=True)
        datasets.MNIST(self.data_dir, train=False, download=True)

    def setup(self, stage: Optional[str] = None):
        if stage == "fit" or stage is None:
            mnist_full = datasets.MNIST(
                self.data_dir, train=True, transform=self.transform
            )
            self.mnist_train, self.mnist_val = random_split(mnist_full, [55000, 5000])
        if stage == "test" or stage is None:
            self.mnist_test = datasets.MNIST(
                self.data_dir, train=False, transform=self.transform
            )

    def train_dataloader(self):
        return DataLoader(
            self.mnist_train,
            batch_size=self.batch_size,
            num_workers=self.num_workers,
            shuffle=True,
            persistent_workers=self.num_workers > 0,
        )

    def val_dataloader(self):
        return DataLoader(
            self.mnist_val,
            batch_size=self.batch_size,
            num_workers=self.num_workers,
            persistent_workers=self.num_workers > 0,
        )

    def test_dataloader(self):
        return DataLoader(
            self.mnist_test,
            batch_size=self.batch_size,
            num_workers=self.num_workers,
            persistent_workers=self.num_workers > 0,
        )


class LitResNet18(pl.LightningModule):
    def __init__(self, learning_rate=1e-3, optimizer_name="Adam"):
        super().__init__()
        self.save_hyperparameters()
        self.learning_rate = learning_rate
        self.optimizer_name = optimizer_name

        self.model = models.resnet18(
            weights=None
        )  # weights=None as we train from scratch
        # Adjust for MNIST (1 input channel, 10 output classes)
        self.model.conv1 = nn.Conv2d(
            1, 64, kernel_size=7, stride=2, padding=3, bias=False
        )
        self.model.fc = nn.Linear(self.model.fc.in_features, 10)

        self.criterion = nn.CrossEntropyLoss()

        # For torchmetrics >= 0.7, task needs to be specified
        self.train_accuracy = Accuracy(task="multiclass", num_classes=10)
        self.val_accuracy = Accuracy(task="multiclass", num_classes=10)
        self.test_accuracy = Accuracy(task="multiclass", num_classes=10)

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = self.criterion(logits, y)
        preds = torch.argmax(logits, dim=1)

        self.train_accuracy.update(preds, y)

        self.log(
            "train_loss", loss, on_step=True, on_epoch=True, prog_bar=True, logger=True
        )
        self.log(
            "train_acc",
            self.train_accuracy,
            on_step=True,
            on_epoch=True,
            prog_bar=True,
            logger=True,
        )
        # compute() returns the running epoch accuracy so far; CustomCSVLogger
        # reads it from these step outputs and writes it per training step.
        return {"loss": loss, "train_acc": self.train_accuracy.compute()}

    def validation_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = self.criterion(logits, y)
        preds = torch.argmax(logits, dim=1)

        self.val_accuracy.update(preds, y)

        self.log(
            "val_loss", loss, on_step=False, on_epoch=True, prog_bar=True, logger=True
        )
        self.log(
            "val_acc",
            self.val_accuracy,
            on_step=False,
            on_epoch=True,
            prog_bar=True,
            logger=True,
        )
        return loss

    def test_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = self.criterion(logits, y)
        preds = torch.argmax(logits, dim=1)

        self.test_accuracy.update(preds, y)

        self.log("test_loss", loss, on_step=False, on_epoch=True, logger=True)
        self.log(
            "test_acc", self.test_accuracy, on_step=False, on_epoch=True, logger=True
        )
        return loss

    def configure_optimizers(self):
        if self.optimizer_name == "Adam":
            optimizer = optim.Adam(self.parameters(), lr=self.learning_rate)
        elif self.optimizer_name == "AdaGrad":
            optimizer = optim.Adagrad(self.parameters(), lr=self.learning_rate)
        elif self.optimizer_name == "RMSProp":
            optimizer = optim.RMSprop(self.parameters(), lr=self.learning_rate)
        else:
            raise ValueError(f"Unsupported optimizer: {self.optimizer_name}")
        return optimizer


class CustomCSVLogger(Callback):
    def __init__(self, save_dir, lr, optimizer_name, output_grad_norm):
        super().__init__()
        self.save_dir = save_dir
        self.lr = lr
        self.optimizer_name = optimizer_name
        self.output_grad_norm = output_grad_norm

        os.makedirs(self.save_dir, exist_ok=True)

        self.train_metrics_file = os.path.join(
            self.save_dir, f"{self.lr}_{self.optimizer_name}_train_metrics.csv"
        )
        self.val_eval_file = os.path.join(
            self.save_dir, f"{self.lr}_{self.optimizer_name}_val_eval.csv"
        )
        self.test_eval_file = os.path.join(
            self.save_dir, f"{self.lr}_{self.optimizer_name}_test_eval.csv"
        )

        if self.output_grad_norm:
            self.grad_norm_file = os.path.join(
                self.save_dir, f"{self.lr}_{self.optimizer_name}_grad_norm.csv"
            )

        self._initialize_files()

    def _initialize_files(self):
        with open(self.train_metrics_file, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["step", "train_loss", "train_acc"])

        with open(self.val_eval_file, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["step", "val_loss", "val_acc"])

        with open(
            self.test_eval_file, "w", newline=""
        ) as f:  # Header written, data appended on_test_end
            writer = csv.writer(f)
            writer.writerow(["epoch", "test_loss", "test_acc"])

        if self.output_grad_norm:
            with open(self.grad_norm_file, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["step", "grad_norm"])

    def on_train_batch_end(
        self,
        trainer: "pl.Trainer",
        pl_module: "pl.LightningModule",
        outputs: dict,
        batch: Any,
        batch_idx: int,
    ):
        step = trainer.global_step

        train_loss = outputs["loss"]
        train_acc = outputs["train_acc"]

        with open(self.train_metrics_file, "a", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(
                [
                    step,
                    train_loss.item() if torch.is_tensor(train_loss) else train_loss,
                    train_acc.item() if torch.is_tensor(train_acc) else train_acc,
                ]
            )

        if self.output_grad_norm:
            # GradientNormCallback logs "grad_norm" with on_step=True, which
            # Lightning exposes in trainer.logged_metrics as "grad_norm_step".
            grad_norm_val = trainer.logged_metrics.get("grad_norm_step", float("nan"))

            with open(self.grad_norm_file, "a", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(
                    [
                        step,
                        grad_norm_val.item()
                        if torch.is_tensor(grad_norm_val)
                        else grad_norm_val,
                    ]
                )

    def on_validation_epoch_end(
        self, trainer: "pl.Trainer", pl_module: "pl.LightningModule"
    ):
        # Skip the sanity-check validation pass; its metrics are not meaningful.
        if trainer.sanity_checking:
            return

        step = trainer.global_step
        val_loss = trainer.logged_metrics.get("val_loss", float("nan"))
        val_acc = trainer.logged_metrics.get("val_acc", float("nan"))

        with open(self.val_eval_file, "a", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(
                [
                    step,
                    val_loss.item() if torch.is_tensor(val_loss) else val_loss,
                    val_acc.item() if torch.is_tensor(val_acc) else val_acc,
                ]
            )

    def on_test_end(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule"):
        epoch = trainer.current_epoch  # Epoch at which testing was performed
        test_loss = trainer.logged_metrics.get("test_loss", float("nan"))
        test_acc = trainer.logged_metrics.get("test_acc", float("nan"))

        with open(self.test_eval_file, "a", newline="") as f:
            writer = csv.writer(f)
            # This will typically be one row of data after training completes.
            writer.writerow(
                [
                    epoch,
                    test_loss.item() if torch.is_tensor(test_loss) else test_loss,
                    test_acc.item() if torch.is_tensor(test_acc) else test_acc,
                ]
            )


class GradientNormCallback(Callback):
    """Logs the global L2 norm of all parameter gradients after each backward pass."""

    def on_after_backward(self, trainer, pl_module):
        grad_norm = 0.0
        for p in pl_module.parameters():
            if p.grad is not None:
                grad_norm += p.grad.detach().norm(2).item() ** 2
        grad_norm = grad_norm**0.5
        pl_module.log("grad_norm", grad_norm, on_step=True, on_epoch=True)


def main(args):
    pl.seed_everything(args.seed, workers=True)

    data_module = MNISTDataModule(
        data_dir=args.dataset_dir,
        batch_size=args.batch_size,
        num_workers=args.num_workers,
    )
    model = LitResNet18(learning_rate=args.learning_rate, optimizer_name=args.optimizer)

    # Determine the actual root directory for all logs
    actual_default_root_dir = args.default_root_dir
    if actual_default_root_dir is None:
        # This matches PyTorch Lightning's default behavior for default_root_dir
        actual_default_root_dir = os.path.join(os.getcwd(), "lightning_logs")

    # Define the path for our custom CSV logs within the actual_default_root_dir
    csv_output_subdir_name = "csv_logs"
    csv_save_location = os.path.join(actual_default_root_dir, csv_output_subdir_name)

    custom_csv_logger = CustomCSVLogger(
        save_dir=csv_save_location,
        lr=args.learning_rate,
        optimizer_name=args.optimizer,
        output_grad_norm=args.output_grad_norm,
    )

    callbacks = [custom_csv_logger]

    # Add other PL callbacks if needed, e.g., ModelCheckpoint, EarlyStopping:
    # from lightning.pytorch.callbacks import ModelCheckpoint, EarlyStopping
    # callbacks.append(ModelCheckpoint(dirpath=os.path.join(actual_default_root_dir, "checkpoints")))

    trainer_args = {
        "deterministic": True,  # For reproducibility
        "callbacks": callbacks,
        "logger": True,  # Default logger (TensorBoard); its metrics stay accessible to callbacks
        "val_check_interval": 1,  # Integer: validate after every training batch, for dense val curves
    }
    if args.output_grad_norm:
        trainer_args["callbacks"].append(GradientNormCallback())  # L2 norm

    trainer = pl.Trainer(
        max_epochs=args.max_epochs,
        accelerator=args.accelerator,
        devices=args.devices,
        default_root_dir=actual_default_root_dir,
        fast_dev_run=args.fast_dev_run,
        **trainer_args,
    )

    trainer.fit(model, datamodule=data_module)
    trainer.test(model, datamodule=data_module)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="PyTorch Lightning MNIST ResNet18 Training",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )

    # Model/Training specific arguments
    parser.add_argument(
        "--learning_rate",
        type=float,
        default=1e-3,
    )
    parser.add_argument(
        "--optimizer",
        type=str,
        default="Adam",
        choices=["Adam", "AdaGrad", "RMSProp"],
    )
    parser.add_argument(
        "--batch_size",
        type=int,
        default=64,
    )
    parser.add_argument("--num_workers", type=int, default=4)
    parser.add_argument("--seed", type=int, default=1234)
    parser.add_argument(
        "--output_grad_norm",
        action="store_true",
        help="If set, output gradient norm to CSV.",
    )
    parser.add_argument(
        "--dataset_dir",
        type=str,
        default="/repos/datasets/",
        help="Directory to save MNIST dataset.",
    )

    # Add all PyTorch Lightning Trainer arguments
    # parser = pl.Trainer.add_argparse_args(parser) # Deprecated
    # Instead, let users pass them directly, and Trainer.from_argparse_args will pick them up.
    parser.add_argument("--max_epochs", type=int, default=10)
    parser.add_argument(
        "--accelerator",
        type=str,
        default="auto",
        help="Accelerator to use ('cpu', 'gpu', 'tpu', 'mps', 'auto')",
    )
    parser.add_argument(
        "--devices",
        default="auto",
        help="Devices to use (e.g., 1 for one GPU, [0,1] for two GPUs, 'auto')",
    )
    parser.add_argument(
        "--default_root_dir",
        type=str,
        default=None,
        help="Default root directory for logs and checkpoints. If None, uses 'lightning_logs'.",
    )
    parser.add_argument("--fast_dev_run", action="store_true", help="Fast dev run")

    args = parser.parse_args()
    main(args)
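
The module and datamodule can also be driven programmatically. A minimal sketch, assuming the script above is saved as mnist_pl.py in the working directory:

import lightning as pl

from mnist_pl import LitResNet18, MNISTDataModule

pl.seed_everything(1234, workers=True)
dm = MNISTDataModule(data_dir="/repos/datasets", batch_size=64)
model = LitResNet18(learning_rate=1e-3, optimizer_name="Adam")
trainer = pl.Trainer(max_epochs=1, fast_dev_run=True)  # smoke test: one batch
trainer.fit(model, datamodule=dm)
trainer.test(model, datamodule=dm)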

