
% 参考文献章节
\renewcommand\refname{参考文献}
\begin{thebibliography}{99}
\addcontentsline{toc}{section}{参考文献\tiny{\quad}}
\bibitem{zhang2024}
张钦彤, 王昱超, 王鹤羲, 等. 大语言模型微调技术的研究综述[J]. 计算机工程与应用, 2024, 60(17).
\bibitem{haque2025}
Haque M Z, Afrin S, Mastropaolo A. A Systematic Literature Review of Parameter-Efficient Fine-Tuning for Large Code Models[J]. arXiv preprint arXiv:2504.21569, 2025.
\bibitem{vmk2024}
VM K, Warrier H, Gupta Y. Fine tuning LLM for enterprise: Practical guidelines and recommendations[J]. arXiv preprint arXiv:2404.10779, 2024.
\bibitem{Meskó2023}
Meskó B. Prompt engineering as an important emerging skill for medical professionals: tutorial[J]. Journal of Medical Internet Research, 2023, 25: e50638.
\bibitem{wang2024}
王耀祖, 李擎, 戴张杰, 等. 大语言模型研究现状与趋势[J]. 工程科学学报, 2024, 46(8): 1411-1425.
\bibitem{Zhang2023Survey}
Zhang Z, Chen C, Liu B, et al. A survey on language models for code[J]. arXiv preprint arXiv:2311.07989, 2023.
\bibitem{Chen2023}
Chen B, Zhang Z, Langrené N, et al. Unleashing the potential of prompt engineering in large language models: a comprehensive review[J]. arXiv preprint arXiv:2310.14735, 2023.
\bibitem{Lin2024Awq}
Lin J, Tang J, Tang H, et al. AWQ: Activation-aware weight quantization for on-device LLM compression and acceleration[J]. Proceedings of Machine Learning and Systems, 2024, 6: 87-100.
\bibitem{Dong2023}
Dong G, Yuan H, Lu K, et al. How abilities in large language models are affected by supervised fine-tuning data composition[J]. arXiv preprint arXiv:2310.05492, 2023.
\bibitem{Dettmers2024Qlora}
Dettmers T, Pagnoni A, Holtzman A, et al. QLoRA: Efficient finetuning of quantized LLMs[J]. Advances in Neural Information Processing Systems, 2024, 36.
\bibitem{Hu2021Lora}
Hu E J, Shen Y, Wallis P, et al. LoRA: Low-rank adaptation of large language models[J]. arXiv preprint arXiv:2106.09685, 2021.
\bibitem{Han2024Unsloth}
Han D, Han M. Unsloth[EB/OL]. https://github.com/unslothai/unsloth.git, 2023.
\bibitem{Zhang2024Gradio}
Abid A, Abdalla A, Abid A, et al. Gradio: Hassle-free sharing and testing of ML models in the wild[J]. arXiv preprint arXiv:1906.02569, 2019.
\bibitem{Yang2024Qwen}
Yang A, Yang B, Zhang B, et al. Qwen2.5 technical report[R]. 2024.
\bibitem{Liu2024Deepseek}
Liu A, Feng B, Xue B, et al. DeepSeek-V3 technical report[J]. arXiv preprint arXiv:2412.19437, 2024.
\end{thebibliography}