
% 参考文献章节
\renewcommand\refname{参考文献}
\begin{thebibliography}{99}
\addcontentsline{toc}{section}{参考文献}
\bibitem{Topsakal2023Creating}Topsakal, O., Akinci, T. C. Creating large language model applications utilizing LangChain: A primer on developing LLM apps fast[C]. International Conference on Applied Engineering and Natural Sciences, 2023, 1(1): 1050-1056.
\bibitem{Liu2015Topical}Liu, Y., Liu, Z., Chua, T. S., et al. Topical word embeddings[C]. Proceedings of the AAAI Conference on Artificial Intelligence, 2015, 29(1).
\bibitem{Dettmers2024Qlora}Dettmers, T., Pagnoni, A., Holtzman, A., et al. QLoRA: Efficient finetuning of quantized LLMs[J]. Advances in Neural Information Processing Systems, 2024, 36.
\bibitem{Hu2021Lora}Hu, E. J., Shen, Y., Wallis, P., et al. LoRA: Low-rank adaptation of large language models[J]. arXiv preprint arXiv:2106.09685, 2021.
\bibitem{Yang2024Qwen2}Yang, A., Yang, B., Hui, B., et al. Qwen2 technical report[J]. arXiv preprint arXiv:2407.10671, 2024.
\bibitem{Zan2022Large}Zan, D., Chen, B., Zhang, F., et al. Large language models meet NL2Code: A survey[J]. arXiv preprint arXiv:2212.09420, 2022.
\bibitem{Gunter2024Apple}Gunter, T., Wang, Z., Wang, C., et al. Apple Intelligence Foundation Language Models[J]. arXiv preprint arXiv:2407.21075, 2024.
\bibitem{Zhang2023Survey}Zhang, Z., Chen, C., Liu, B., et al. A survey on language models for code[J]. arXiv preprint arXiv:2311.07989, 2023.
\bibitem{Jeong2023Generative}Jeong, C. Generative AI service implementation using LLM application architecture: based on RAG model and LangChain framework[J]. Journal of Intelligence and Information Systems, 2023, 29(4): 129-164.
\bibitem{Fan2024Survey}Fan, W., Ding, Y., Ning, L., et al. A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models[C]. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2024: 6491-6501.
\bibitem{Kusner2015From}Kusner, M., Sun, Y., Kolkin, N., et al. From word embeddings to document distances[C]. International Conference on Machine Learning, PMLR, 2015: 957-966.
\bibitem{Wu2024Qingbao}吴娜, 沈思, 王东波. 基于开源LLMs的中文学术文本标题生成研究—以人文社科领域为例[J]. 情报科学, 2024: 1-22.
\bibitem{Li2024Qinghua}李佳沂, 黄瑞章, 陈艳平, 等. 结合提示学习和Qwen大语言模型的裁判文书摘要方法[J]. 清华大学学报(自然科学版), 2024: 1-12.
\bibitem{Wei2024Shuju}韦一金, 樊景超. 基于ChatGLM2-6B的农业政策问答系统[J]. 数据与计算发展前沿(中英文), 2024, 6(04): 116-127.
\bibitem{Xiang2024Jisuanji}向小伟, 申艳光, 胡明昊, 等. 大模型驱动的科技政策法规问答系统研究[J]. 计算机科学与探索, 2024: 1-13.
\end{thebibliography}