From June 25 to 27, 2025, Associate Professor Nicolás José Fernández Martínez of the University of Jaén (Universidad de Jaén) delivered a series of online lectures titled "Prompt Engineering in NLP (Natural Language Processing)" via DingTalk, with each session running from 18:30 to 20:30. The lecture series was moderated by Research Fellow SONG Chenchen of the ZJU 100 Young Professor Program.
Associate Professor Fernández Martínez first introduced the core concept of prompt engineering: the design of appropriate, precise, and effective natural language instructions that guide large language models (LLMs) toward desired outputs. He then elaborated on the basic principles of prompt engineering and demonstrated how to design effective prompts for various natural language processing tasks, such as text classification, information extraction, and automatic summarization. In addition, he presented the value of prompt engineering in other computational linguistics tasks, including corpus synthesis.
On the first day, Associate Professor Fernández Martínez focused on the fundamentals, structure, and design principles of prompt engineering. He emphasized the crucial role of clarity, conciseness, and relevance in enhancing model performance, and used practical examples to show how prompts can guide models to generate responses in a specific style, format, or structure. Taking factors such as computing power and security into account, he also recommended several locally deployable, offline large language model tools that attendees could use out of the box.
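By way of illustration, the sketch below shows what such a clear, format-constrained prompt might look like when sent to a locally running model. It assumes an Ollama server on its default port and a generic llama3 model; these are only examples of the kind of offline tool discussed, and the specific tools recommended in the lecture are not reproduced here.

```python
import requests

# A deliberately clear, concise, and relevant prompt that also pins down
# the expected output format (one of the day-one design principles).
prompt = (
    "You are a careful annotator. Classify the sentiment of the review below "
    "as exactly one word: positive, negative, or neutral.\n\n"
    'Review: "The plot was predictable, but the acting saved the film."\n'
    "Answer:"
)

# Send the prompt to a locally running model. Ollama and the llama3 model name
# are assumptions made for this sketch, not the lecture's recommendations.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=120,
)
print(response.json()["response"].strip())
```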

On the second day, Associate Professor Fernández Martínez gave a detailed account of how prompt engineering applies to specific computational linguistics tasks. Using text classification, information extraction, automatic summarization, machine translation, question answering, dialogue, corpus synthesis, and corpus annotation as examples, he demonstrated effective prompt design techniques tailored to each task. He placed special emphasis on zero-shot prompting and guided attendees through a series of hands-on exercises.
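A minimal sketch of zero-shot prompting, in which the model receives only an instruction and the input text with no worked examples, is given below for three of the task types mentioned. The wording of the templates is illustrative and not taken from the lecture materials.

```python
# Zero-shot prompt templates: instruction + input only, no demonstrations.

def classification_prompt(text: str, labels: list[str]) -> str:
    return (
        f"Classify the following text into exactly one of these categories: "
        f"{', '.join(labels)}.\n"
        f"Text: {text}\n"
        f"Category:"
    )

def extraction_prompt(text: str) -> str:
    return (
        "Extract every person name and organization mentioned in the text below.\n"
        'Return the result as JSON with the keys "persons" and "organizations".\n'
        f"Text: {text}\n"
        "JSON:"
    )

def summarization_prompt(text: str, max_words: int = 50) -> str:
    return (
        f"Summarize the following text in at most {max_words} words, "
        "keeping only the main claim.\n"
        f"Text: {text}\n"
        "Summary:"
    )

print(classification_prompt(
    "Stocks rallied after the rate decision.",
    ["sports", "finance", "politics"],
))
```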

On the third day, Associate Professor Fernández Martínez introduced a variety of advanced prompting techniques, including few-shot prompting, chain-of-thought prompting, calibrated-confidence prompting, tree-of-thoughts prompting, recursive self-improvement, structured prompting, role-playing prompts, explanatory prompting, emotional priming, transfer learning, self-consistency prompting, and meta-prompting, among others. He also shared insights into how these techniques can be applied to diverse natural language processing tasks. In the hands-on session, attendees experimented with these advanced techniques under different model architectures and parameter configurations and compared their effects.
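As a concrete illustration of two of these techniques, the sketch below combines few-shot chain-of-thought prompting with self-consistency, that is, sampling several reasoning paths at a non-zero temperature and keeping the most frequent final answer. The worked examples and the generate stub are hypothetical placeholders standing in for whichever model backend is available; they are not material from the lectures.

```python
from collections import Counter

# Few-shot chain-of-thought prompt: worked examples that spell out the
# reasoning steps before the final answer (illustrative examples only).
FEW_SHOT_COT = """\
Q: A library has 120 books and lends out 45. How many remain?
A: It starts with 120 books. 45 are lent out, so 120 - 45 = 75 remain. The answer is 75.

Q: A class of 32 students splits into teams of 4. How many teams are there?
A: Each team has 4 students, so 32 / 4 = 8 teams. The answer is 8.

Q: {question}
A:"""

def build_prompt(question: str) -> str:
    return FEW_SHOT_COT.format(question=question)

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical placeholder for a call to an LLM backend.
    Replace with a real client; the stub only keeps the sketch runnable."""
    return "The answer is 42."

def self_consistent_answer(question: str, samples: int = 5) -> str:
    # Self-consistency: sample several completions and majority-vote on the
    # final answer extracted from each reasoning path.
    answers = []
    for _ in range(samples):
        completion = generate(build_prompt(question), temperature=0.8)
        answers.append(completion.strip().split("The answer is")[-1].strip(" ."))
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("A shelf holds 3 rows of 14 jars. How many jars in total?"))
```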

Throughout the three-day lecture series, attendees held in-depth discussions and exchanges with Associate Professor Fernández Martínez, centering on optimization strategies for prompt engineering, details of model training, best practices in prompt design, and the future prospects of the field. The lecture series not only deepened attendees' understanding of prompt engineering but also offered them new inspiration for their future studies.

Texts: LUO Jiaqi
Translated by YU Jinbo, Proofread by XU Xueying



