Transformer and LLM Frontiers (4): Long-Context LLM
Table of Contents
1. Context Extension
   1.1 Rotary Position Embedding (RoPE)
   1.2 LongLoRA
2. Evaluation of Long-Context LLMs
   2.1 The Lost in the Middle Phenomenon
   2.2 Long-Context Benchmarks: NIAH, LongBench
3. Efficient Attention Mechanisms
   3.1 KV Cache
   3.2 StreamingLLM and Attention Sinks (key topic)
   3.3 DuoAttention: Retrieval Heads and Streaming Heads (key topic)
   3.4 Quest: Query-Aware Sparsity (key topic)
4. Beyond Transformers
   4.1 State-Space Models (SSMs): Mamba
   4.2 Hybrid Models: Jamba