【day63】
Once there was a child who spent every waking minute muttering to himself, but his thoughts were causally linked. He recorded every thought in a notebook and drew an arrow from each thought back to the earlier thought it came from. Open the notebook and the criss-crossing arrows will make your head spin. Now he wants you to write a program that finds the longest causal chain among these thoughts. Thoughts are numbered 1 to n, and thought i comes from thought from[i]. It is guaranteed that from[i] < i; from[i] = 0 means the thought has no source and simply popped into his head.

Sample input:
8
0 1 0 3 2 4 2 4

Sample output:
3

Explanation: the longest causal chains are 1-2-5 (from[5]=2, from[2]=1, from[1]=0), 1-2-7 (from[7]=2, from[2]=1, from[1]=0), 3-4-6 (from[6]=4, from[4]=3, from[3]=0), and 3-4-8 (from[8]=4, from[4]=3, from[3]=0).

```cpp
#include <iostream>
#include <vector>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> from(n + 1);
    vector<bool> match(n + 1, false);
    for (int i = 1; i <= n; i++) {
        cin >> from[i];
        if (from[i] < i) match[i] = true;  // every valid thought is marked; match[0] stays false
    }
    int maxlen = 0;
    for (int i = 1; i <= n; i++) {
        if (match[i]) {
            int len = 1;                   // count thought i itself
            int temp = from[i];
            while (match[temp]) {          // walk back until we hit source 0
                temp = from[temp];
                len++;
            }
            if (len > maxlen) maxlen = len;
        }
    }
    cout << maxlen << endl;
    return 0;
}
```

Modern poetry is like an earthworm: cut it into several pieces and it still won't die. Cut a string into several pieces and the pieces may turn out completely identical. Write a program that reads a string and outputs the maximum number of identical pieces the string can be cut into.

Sample input:
abcabcabcabc

Sample output:
4

Explanation: at most it can be cut into four pieces of "abc", i.e. "abc" repeated four times gives the original string. It can also be cut into two pieces of "abcabc"; the worst case is cutting it into a single piece, the original string "abcabcabcabc".

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    string a;
    cin >> a;
    int maxnum = 1;
    int len = a.size();
    for (int i = 1; i <= len; i++) {       // i is the candidate piece length
        if (len % i != 0) continue;        // pieces must divide the string evenly
        int cur = len / i;                 // number of pieces of length i
        int l = 0;
        string pre = a.substr(l, i);
        bool flag = true;
        for (int j = 1; j < cur; j++) {    // compare every piece with the previous one
            l = l + i;
            string temp = a.substr(l, i);
            if (temp != pre) {
                flag = false;
                break;
            }
            pre = temp;
        }
        if (flag && cur > maxnum) {
            maxnum = cur;
        }
    }
    cout << maxnum << endl;
    return 0;
}
```

Large language models have become an important research direction in artificial intelligence in recent years. These models are usually based on the Transformer architecture and are trained on massive text datasets. By learning statistical patterns in language, large language models can perform various tasks such as text generation, question answering, and machine translation. As model sizes continue to increase, large language models have achieved performance close to or even surpassing human levels in many natural language processing tasks. However, training and deploying such large models require enormous computational resources, which leads to concerns about energy consumption and cost.
Therefore, improving model efficiency has become an important research focus.