LlamaIndex


1. What is the value of an LLM development framework?

SDK stands for Software Development Kit: a collection of software tools and resources designed to help developers create, test, deploy, and maintain applications or software.

The core value of any development framework (SDK) is to reduce the cost of development and maintenance.

The value of an LLM development framework is to make it easier for developers to build applications on top of large language models. It mainly helps in three ways:

  1. Abstraction of third-party capabilities, e.g. LLMs, vector databases, search APIs
  2. Encapsulation of common tools and solution patterns
  3. Encapsulation of low-level plumbing, e.g. streaming interfaces, timeouts and retries, async and parallel execution

A good development framework should be:

  1. Reliable and robust
  2. Maintainable
  3. Extensible
  4. Easy to learn

Some down-to-earth examples:

  • Decouple from external functionality
    • e.g. you can swap out the LLM without large-scale refactoring (see the sketch after this list)
    • the same goes for swapping third-party tools
  • Parts that change often should live outside the code, not be hard-coded
    • e.g. prompt templates
  • Works in all kinds of environments
    • e.g. thread safety
  • Easy to debug and test
    • at the very least it should feel more convenient than not using it
    • valid input should never trigger errors inside the framework
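
Taking "swap out the LLM freely" as an example, here is a minimal sketch using LlamaIndex's global Settings (assuming the llama-index-llms-openai and llama-index-llms-ollama extension packages are installed; the model names are for illustration only):

# Switch the underlying LLM via the global Settings; the rest of the application code stays unchanged
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
# from llama_index.llms.ollama import Ollama

Settings.llm = OpenAI(model="gpt-4o-mini")  # use an OpenAI model
# Settings.llm = Ollama(model="llama3")     # switch to a local Ollama model; business code stays the same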
Key takeaway: the right framework gets you twice the result with half the effort; the wrong one, half the result with twice the effort.

An example: a minimal RAG system in 4 lines of code using the SDK:
!pip install --upgrade llama-index
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# load the documents under ./data
documents = SimpleDirectoryReader("./data").load_data()
# build a vector index from the documents
index = VectorStoreIndex.from_documents(documents)
# create a query engine on top of the index
query_engine = index.as_query_engine()
# query: "how many parameters does llama2 have?"
response = query_engine.query("llama2有多少参数")
print(response)
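
Note: by default LlamaIndex uses OpenAI models for both the LLM and the embeddings, so running the example above requires OPENAI_API_KEY to be set in the environment and the documents to retrieve (e.g. the Llama 2 paper PDF) to be placed under ./data.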

2. Introduction to LlamaIndex

「 LlamaIndex is a framework for building context-augmented LLM applications. Context augmentation refers to any use case that applies LLMs on top of your private or domain-specific data. 」

LlamaIndex is a framework (i.e. an SDK) for building context-augmented LLM applications. Context augmentation broadly refers to any use case that applies LLMs on top of private or domain-specific data, for example:


  • Question-Answering Chatbots (i.e. RAG)

  • Document Understanding and Extraction

  • Autonomous Agents that can perform research and take actions

LlamaIndex comes in Python and TypeScript versions; the Python documentation is the more complete of the two.

  • Python docs: https://docs.llamaindex.ai/en/stable/

  • Python API reference: https://docs.llamaindex.ai/en/stable/api_reference/

  • TypeScript docs: https://ts.llamaindex.ai/

  • TypeScript API reference: https://ts.llamaindex.ai/api/

LlamaIndex is open source. GitHub: https://github.com/run-llama

Core modules of LlamaIndex

(Figure: overview of the LlamaIndex core modules)

Installing LlamaIndex

  1. Python:

pip install llama-index

  2. TypeScript:

# install via npm
npm install llamaindex

# install via yarn
yarn add llamaindex

# install via pnpm
pnpm add llamaindex

This post uses the Python version for all examples.

3. Data loading

3.1 SimpleDirectoryReader

SimpleDirectoryReader is a simple local file loader. It walks the specified directory and automatically loads files (as text) based on their extensions.

Supported file types:

  • .csv - comma-separated values
  • .docx - Microsoft Word
  • .epub - EPUB ebook format
  • .hwp - Hangul Word Processor
  • .ipynb - Jupyter Notebook
  • .jpeg, .jpg - JPEG image
  • .mbox - MBOX email archive
  • .md - Markdown
  • .mp3, .mp4 - audio and video
  • .pdf - Portable Document Format
  • .png - Portable Network Graphics
  • .ppt, .pptm, .pptx - Microsoft PowerPoint
import json
from pydantic.v1 import BaseModel


def show_json(data):
    """Pretty-print JSON data."""
    if isinstance(data, str):
        obj = json.loads(data)
        print(json.dumps(obj, indent=4))
    elif isinstance(data, dict) or isinstance(data, list):
        print(json.dumps(data, indent=4))
    elif issubclass(type(data), BaseModel):
        print(json.dumps(data.dict(), indent=4, ensure_ascii=False))


def show_list_obj(data):
    """Display a list of objects."""
    if isinstance(data, list):
        for item in data:
            show_json(item)
    else:
        raise ValueError("Input is not a list")


from llama_index.core import SimpleDirectoryReader

reader = SimpleDirectoryReader(
    input_dir="./data",  # target directory
    recursive=False,  # whether to recurse into subdirectories
    required_exts=[".pdf"]  # (optional) only load files with these extensions
)
documents = reader.load_data()

show_json(documents[0])
print(documents[0].text)
{
    "id_": "358482ee-4232-45eb-a5ae-8f595f16c8cd",
    "embedding": null,
    "metadata": {
        "page_label": "1",
        "file_name": "llama2-extracted.pdf",
        "file_path": "/home/jovyan/lecture-notes/07-llamaindex/data/llama2-extracted.pdf",
        "file_type": "application/pdf",
        "file_size": 401338,
        "creation_date": "2024-06-14",
        "last_modified_date": "2024-06-14"
    },
    "excluded_embed_metadata_keys": [
        "file_name",
        "file_type",
        "file_size",
        "creation_date",
        "last_modified_date",
        "last_accessed_date"
    ],
    "excluded_llm_metadata_keys": [
        "file_name",
        "file_type",
        "file_size",
        "creation_date",
        "last_modified_date",
        "last_accessed_date"
    ],
    "relationships": {},
    "text": "Llama 2: OpenFoundation andFine-Tuned ChatModels\nHugo Touvron∗Louis Martin†Kevin Stone†\nPeter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov SoumyaBatra\nPrajjwal Bhargava Shruti Bhosale Dan Bikel LukasBlecher Cristian CantonFerrer MoyaChen\nGuillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu BrianFuller\nCynthia Gao VedanujGoswami NamanGoyal AnthonyHartshorn Saghar Hosseini RuiHou\nHakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa IsabelKloumann ArtemKorenev\nPunit Singh Koura Marie-AnneLachaux ThibautLavril Jenya Lee Diana Liskovich\nYinghai Lu YuningMao Xavier Martinet Todor Mihaylov PushkarMishra\nIgor Molybog Yixin Nie AndrewPoulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi\nAlan Schelten Ruan Silva EricMichael Smith Ranjan Subramanian XiaoqingEllenTan BinhTang\nRoss Taylor AdinaWilliams JianXiang Kuan PuxinXu ZhengYan Iliyan Zarov YuchenZhang\nAngela Fan MelanieKambadur SharanNarang Aurelien Rodriguez RobertStojnic\nSergey Edunov ThomasScialom∗\nGenAI, Meta\nAbstract\nIn this work, we develop and release Llama 2, a collection of pretrained and fine-tuned\nlarge language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.\nOur fine-tuned LLMs, called Llama 2-Chat , are optimized for dialogue use cases. Our\nmodels outperform open-source chat models on most benchmarks we tested, and based on\nourhumanevaluationsforhelpfulnessandsafety,maybeasuitablesubstituteforclosed-\nsource models. We provide a detailed description of our approach to fine-tuning and safety\nimprovements of Llama 2-Chat in order to enable the community to build on our work and\ncontribute to the responsibledevelopmentof LLMs.\n∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com\n†Second author\nContributions for all the authors can be found in Section A.1.arXiv:2307.09288v2  [cs.CL]  19 Jul 2023",
    "mimetype": "text/plain",
    "start_char_idx": null,
    "end_char_idx": null,
    "text_template": "{metadata_str}\n\n{content}",
    "metadata_template": "{key}: {value}",
    "metadata_seperator": "\n",
    "class_name": "Document"
}
Llama 2: OpenFoundation andFine-Tuned ChatModels
Hugo Touvron∗Louis Martin†Kevin Stone†
Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov SoumyaBatra
Prajjwal Bhargava Shruti Bhosale Dan Bikel LukasBlecher Cristian CantonFerrer MoyaChen
Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu BrianFuller
Cynthia Gao VedanujGoswami NamanGoyal AnthonyHartshorn Saghar Hosseini RuiHou
Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa IsabelKloumann ArtemKorenev
Punit Singh Koura Marie-AnneLachaux ThibautLavril Jenya Lee Diana Liskovich
Yinghai Lu YuningMao Xavier Martinet Todor Mihaylov PushkarMishra
Igor Molybog Yixin Nie AndrewPoulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi
Alan Schelten Ruan Silva EricMichael Smith Ranjan Subramanian XiaoqingEllenTan BinhTang
Ross Taylor AdinaWilliams JianXiang Kuan PuxinXu ZhengYan Iliyan Zarov YuchenZhang
Angela Fan MelanieKambadur SharanNarang Aurelien Rodriguez RobertStojnic
Sergey Edunov ThomasScialom∗
GenAI, Meta
Abstract
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned
large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
Our fine-tuned LLMs, called Llama 2-Chat , are optimized for dialogue use cases. Our
models outperform open-source chat models on most benchmarks we tested, and based on
ourhumanevaluationsforhelpfulnessandsafety,maybeasuitablesubstituteforclosed-
source models. We provide a detailed description of our approach to fine-tuning and safety
improvements of Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsibledevelopmentof LLMs.
∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com
†Second author
Contributions for all the authors can be found in Section A.1.arXiv:2307.09288v2  [cs.CL]  19 Jul 2023
Note: for image, video, and audio files, text is not extracted automatically by default. If you need that, see the Data Connectors introduced below.

The default PDFReader does not produce great results, so we can swap in a different file loader, e.g. PyMuPDFReader:

# !pip install pymupdf
from llama_index.core import SimpleDirectoryReader
from llama_index.readers.file import PyMuPDFReader

reader = SimpleDirectoryReader(
    input_dir="./data",  # target directory
    recursive=False,  # whether to recurse into subdirectories
    required_exts=[".pdf"],  # (optional) only load files with these extensions
    file_extractor={".pdf": PyMuPDFReader()}  # use a specific loader for .pdf files
)

documents = reader.load_data()

print(documents[0].text)
Llama 2: Open Foundation and Fine-Tuned Chat Models
Hugo Touvron∗
Louis Martin†
Kevin Stone†
Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra
Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen
Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller
Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar Hosseini Rui Hou
Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel Kloumann Artem Korenev
Punit Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich
Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra
Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi
Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang
Ross Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang
Angela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic
Sergey Edunov
Thomas Scialom∗
GenAI, Meta
Abstract
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned
large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our
models outperform open-source chat models on most benchmarks we tested, and based on
our human evaluations for helpfulness and safety, may be a suitable substitute for closed-
source models. We provide a detailed description of our approach to fine-tuning and safety
improvements of Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com
†Second author
Contributions for all the authors can be found in Section A.1.
arXiv:2307.09288v2  [cs.CL]  19 Jul 2023

Other PDF loaders include SmartPDFLoader and LlamaParse. Both offer richer parsing capabilities, such as recovering section and paragraph structure, but neither is 100% accurate: text is occasionally dropped or misplaced, so evaluate them carefully against your own requirements.
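
For example, here is a simple sketch that replaces the default PDF parser with LlamaParse (assuming the llama-parse package is installed and a LlamaCloud API key is configured in the LLAMA_CLOUD_API_KEY environment variable; the parameters are for illustration only):

# !pip install llama-parse
from llama_index.core import SimpleDirectoryReader
from llama_parse import LlamaParse

# assumes LLAMA_CLOUD_API_KEY is set in the environment
parser = LlamaParse(result_type="markdown")  # "text" is also supported

reader = SimpleDirectoryReader(
    input_dir="./data",
    required_exts=[".pdf"],
    file_extractor={".pdf": parser}  # parse PDFs with LlamaParse
)
documents = reader.load_data()
print(documents[0].text)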

3.2 Data Connectors

Data Connectors handle richer data sources and read them into Document objects (text + metadata).

More Data Connectors:

  • built-in file loaders
  • loaders that connect to third-party services, e.g. databases
  • many more loaders are available on LlamaHub
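
For example, the snippet below uses a web page loader from LlamaHub to read a page into Documents (assuming llama-index-readers-web and html2text are installed; the URL is for illustration only):

# !pip install llama-index-readers-web html2text
from llama_index.readers.web import SimpleWebPageReader

# html_to_text=True converts the page HTML to plain text
reader = SimpleWebPageReader(html_to_text=True)
documents = reader.load_data(urls=["https://docs.llamaindex.ai/en/stable/"])
print(documents[0].text[:500])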

4. Text splitting and parsing (Chunking)

To make retrieval easier, we usually split a Document into Nodes.

In LlamaIndex, a Node is defined as a "chunk" of text.

4.1 Splitting text with TextSplitters

For example, TokenTextSplitter splits text into chunks of a specified number of tokens:

from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import TokenTextSplitter

reader = SimpleDirectoryReader(
    input_dir="./data",  # target directory
    recursive=False,  # whether to recurse into subdirectories
    required_exts=[".pdf"]  # (optional) only load files with these extensions
)
documents = reader.load_data()

node_parser = TokenTextSplitter(
    chunk_size=100,  # maximum length of each chunk, in tokens
    chunk_overlap=50  # overlap between adjacent chunks
)

nodes = node_parser.get_nodes_from_documents(
    documents, show_progress=False
)

for node in nodes:
    print(node)
Node ID: be6157bd-acd5-419b-903b-fb335ebf1805
Text: Llama 2: Open Foundation and Fine-Tuned Chat Models Hugo
Touvron∗ Louis Martin† Kevin Stone† Peter Albert Amjad Almahairi
Yasmine Babaei Nikolay Bashlykov Soumya Batra Prajjwal Bhargava Shruti
Bhosale Dan Bikel Lukas Blecher
Node ID: b76c809f-16b6-4988-94e2-7feafcfdd506
Text: Louis Martin† Kevin Stone† Peter Albert Amjad Almahairi Yasmine
Babaei Nikolay Bashlykov Soumya Batra Prajjwal Bhargava Shruti Bhosale
Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen Guillem
Cucurull David
Node ID: 578cb67c-1bff-4dd8-9329-af4c2f8f881b
Text: Babaei Nikolay Bashlykov Soumya Batra Prajjwal Bhargava Shruti
Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen
Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian
Fuller Cynthia Gao
Node ID: dbb31d51-67ad-4d21-93d5-5062275a66c1
Text: Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer
Moya Chen Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu
Wenyin Fu Brian Fuller Cynthia Gao Vedanuj Goswami Naman Goyal Anthony
Hartshorn Saghar Hosseini Rui
Node ID: 2d6466e9-a4d4-454e-a2d3-631364d0a126
Text: Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian
Fuller Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn
Saghar Hosseini Rui Hou Hakan Inan Marcin Kardas Viktor Kerkez Madian
Khabsa Isabel Kloumann
Node ID: 3dcaf924-e52e-4d93-98bc-9292a1884a40
Text: Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar
Hosseini Rui Hou Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa
Isabel Kloumann Artem Korenev Punit Singh Koura Marie-Anne Lachaux
Thibaut Lavril Jenya
Node ID: 8fa64cb9-c510-4223-b0cb-e223178ebdb9
Text: Rui Hou Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa
Isabel Kloumann Artem Korenev Punit Singh Koura Marie-Anne Lachaux
Thibaut Lavril Jenya Lee Diana Liskovich Yinghai Lu Yuning Mao Xavier
Martinet Todor
Node ID: 9f5c2d90-ffe9-4efe-a5b1-68eef0119b5d
Text: Madian Khabsa Isabel Kloumann Artem Korenev Punit Singh Koura
Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich Yinghai Lu
Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra Igor Molybog
Yixin Nie Andrew
Node ID: fff15a52-a0da-40c8-a09a-0fe1ea2d6ea1
Text: Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich
Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra
Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta
Kalyan Saladi Alan
Node ID: 5fa0508f-3ca7-4425-a48c-a9cde1112060
Text: Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra Igor
Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta
Kalyan Saladi Alan Schelten Ruan Silva Eric Michael Smith Ranjan
Subramanian Xiaoqing Ellen Tan Binh Tang Ross
Node ID: ee895744-82ae-49ad-94c0-8368c0790dea
Text: Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta
Kalyan Saladi Alan Schelten Ruan Silva Eric Michael Smith Ranjan
Subramanian Xiaoqing Ellen Tan Binh Tang Ross Taylor Adina Williams
Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov
Node ID: 0138d1ac-6358-47c3-8f52-d0054b80a568
Text: Kalyan Saladi Alan Schelten Ruan Silva Eric Michael Smith Ranjan
Subramanian Xiaoqing Ellen Tan Binh Tang Ross Taylor Adina Williams
Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang Angela
Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert
Node ID: 73429f67-589e-442d-89af-1513be6bb88f
Text: Xiaoqing Ellen Tan Binh Tang Ross Taylor Adina Williams Jian
Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang Angela Fan
Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic
Sergey Edunov Thomas Scialom∗ GenAI,
Node ID: 9d37dbc7-16ef-4036-bb8f-5f3091dba100
Text: Xu Zheng Yan Iliyan Zarov Yuchen Zhang Angela Fan Melanie
Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic Sergey Edunov
Thomas Scialom∗ GenAI, Meta Abstract In this work, we develop and
release Llama 2, a collection of pretrained and
Node ID: 87459f44-90eb-43c8-9e4c-329777efe449
Text: Sharan Narang Aurelien Rodriguez Robert Stojnic Sergey Edunov
Thomas Scialom∗ GenAI, Meta Abstract In this work, we develop and
release Llama 2, a collection of pretrained and fine-tuned large
language models (LLMs) ranging in scale from 7 billion to
Node ID: 8f8a5262-03a3-4e0d-9c5f-f6830bd2cf27
Text: Scialom∗ GenAI, Meta Abstract In this work, we develop and
release Llama 2, a collection of pretrained and fine-tuned large
language models (LLMs) ranging in scale from 7 billion to 70 billion
parameters. Our fine-tuned LLMs, calledLlama
Node ID: fc5b70c8-4e78-40bf-9d77-911391e5ea1c
Text: we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7
billion to 70 billion parameters. Our fine-tuned LLMs, calledLlama
2-Chat, are optimized for dialogue use cases. Our models outperform
open-source chat models
Node ID: 6d36968c-de01-4afa-97dc-7987a192bf30
Text: models (LLMs) ranging in scale from 7 billion to 70 billion
parameters. Our fine-tuned LLMs, calledLlama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on
most benchmarks we tested, and based on our human evaluations for
helpfulness and safety, may
Node ID: e7ef4823-6937-4d80-824e-693bd81dc149
Text: LLMs, calledLlama 2-Chat, are optimized for dialogue use cases.
Our models outperform open-source chat models on most benchmarks we
tested, and based on our human evaluations for helpfulness and safety,
may be a suitable substitute for closed- source models. We provide a
detailed description of our approach to fine-tuning
Node ID: d1c044a6-02e6-4d88-a92f-a96b65da6de0
Text: outperform open-source chat models on most benchmarks we tested,
and based on our human evaluations for helpfulness and safety, may be
a suitable substitute for closed- source models. We provide a detailed
description of our approach to fine-tuning and safety improvements
ofLlama 2-Chatin order to enable the community to build on our
Node ID: de25c573-8d6f-4d81-8cc0-2df0b8706f6d
Text: helpfulness and safety, may be a suitable substitute for closed-
source models. We provide a detailed description of our approach to
fine-tuning and safety improvements ofLlama 2-Chatin order to enable
the community to build on our work and contribute to the responsible
development of LLMs. ∗Equal contribution, corresponding
Node ID: 366723b1-8227-4c8e-9cdf-fb6596c6b607
Text: description of our approach to fine-tuning and safety
improvements ofLlama 2-Chatin order to enable the community to build
on our work and contribute to the responsible development of LLMs.
∗Equal contribution, corresponding authors: {tscialom,
htouvron}@meta.com †Second
Node ID: b1d0a429-f88c-4eaf-a5e2-438395063dd1
Text: 2-Chatin order to enable the community to build on our work and
contribute to the responsible development of LLMs. ∗Equal
contribution, corresponding authors: {tscialom, htouvron}@meta.com
†Second author Contributions for all the authors can be found in
Section
Node ID: 3bd1cf71-efdd-476c-ae63-f6813cd426ff
Text: work and contribute to the responsible development of LLMs.
∗Equal contribution, corresponding authors: {tscialom,
htouvron}@meta.com †Second author Contributions for all the authors
can be found in Section A.1. arXiv:2307.09288v2  [cs.CL]
Node ID: a419b2de-57a0-4d13-8316-89ca3c388038
Text: authors: {tscialom, htouvron}@meta.com †Second author
Contributions for all the authors can be found in Section A.1.
arXiv:2307.09288v2  [cs.CL]  19 Jul 2023
Node ID: c14bc803-2b66-4791-acf1-40c3c3824248
Text: Contents 1 Introduction 3 2 Pretraining 5 2.1 Pretraining Data .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . 5 2.2
Node ID: cd88caab-f720-44d0-b60f-52d3f5a8c47a
Text: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . 5 2.2 Training Details . . . . . . . . . . . .
. . . . . .
Node ID: 060e2c44-e137-48fc-997a-fef9e1253af1
Text: . . . . . . . . . . . . . . . . . . . . . . . . 5 2.2 Training
Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . .
Node ID: 84a3b614-fa9a-4a10-933f-4ea49eb42da5
Text: . . . . 5 2.2 Training Details . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.3 Llama
2Pretrained Model
Node ID: d147d85b-a6f9-4c26-b7d7-61e897c67c67
Text: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . 5 2.3 Llama 2Pretrained Model Evaluation . . . . . . . . . .
. . . . . . . . .
Node ID: 74ecb943-d7b1-479e-b280-8a94be4c97f5
Text: . . . . . . . . . . . . . . . . . 5 2.3 Llama 2Pretrained Model
Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . 7 3 Fine-tuning
Node ID: d6860cfa-8486-41c4-80bd-4bc53788b13a
Text: Llama 2Pretrained Model Evaluation . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . 7 3 Fine-tuning 8 3.1 Supervised
Fine-Tuning (SFT) . . . . . . . .
Node ID: 3bd146c0-a554-4e4d-94a9-24cba2a3dbc2
Text: . . . . . . . . . . . . . . . . . . . . 7 3 Fine-tuning 8 3.1
Supervised Fine-Tuning (SFT) . . . . . . . . . . . . . . . . . . . . .
. . . . . . .
Node ID: b84e1605-0ac9-4c9f-9ff5-
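
As the (truncated) output above shows, with chunk_size=100 and chunk_overlap=50 adjacent Nodes share roughly half of their content; the overlap reduces the risk that a key piece of meaning is cut off exactly at a chunk boundary.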
