- Personal learning path for NLP and Large Language Models (LLMs)
- Paper recommendations
Chinese version: README_CN.md (in progress)
**Notes:**
- I don't recommend older materials (even "classic" ones) because NLP changes fast
- You need basic knowledge of machine learning and Python
- Machine learning courses:
  - CS 189/289A: Introduction to Machine Learning (eecs189.org)
  - CS229: Machine Learning (stanford.edu) (theoretical perspective)
- Python: too many courses to list (e.g., CS50's Introduction to Programming with Python (harvard.edu))
Courses / Tutorials:
- Stanford CS 224N | Natural Language Processing with Deep Learning
- CS224U: Natural Language Understanding - Spring 2023 (stanford.edu)
- Lena Voita's NLP Course For You (lena-voita.github.io)
- Introduction - Hugging Face NLP Course (a good intro to the Hugging Face libraries; see the sketch after this list)
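
The Hugging Face course is built around the `transformers` library; as a quick taste, here is a minimal sketch of its `pipeline` API (the example text and printed output are illustrative; `pipeline` downloads a small default model for the task):

```python
# Minimal sketch of the Hugging Face `pipeline` API covered in the course.
# Requires `pip install transformers torch`; downloads a small default model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Transformers make it easy to get started with NLP."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```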
Books:
Articles:
Intro to Transformer model:
Reading lists:
Note: I haven't read all of these papers in detail, so this list may change frequently
- A Survey of Large Language Models
- Challenges and Applications of Large Language Models
- A Survey on Multimodal Large Language Models
- A Survey for In-context Learning
- Language Models are Few-Shot Learners
- Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
- In-Context Learning Paper List
- Finetuned Language Models Are Zero-Shot Learners
- Multitask Prompted Training Enables Zero-Shot Task Generalization
- Scaling Instruction-Finetuned Language Models
- Training Language Models to Follow Instructions with Human Feedback
- Self-Instruct: Aligning Language Models with Self-Generated Instructions
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (a minimal prompting sketch follows this list)
- LoRA: Low-Rank Adaptation of Large Language Models (a minimal `peft` sketch follows this list)
- Parameter-Efficient Transfer Learning for NLP
- The Power of Scale for Parameter-Efficient Prompt Tuning
- Prefix-Tuning: Optimizing Continuous Prompts for Generation
- GPT Understands, Too
- P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
- Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
- Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
- Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
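
To make the in-context learning and chain-of-thought entries above concrete, here is a minimal sketch of prompt construction (the demonstrations and question are made up for illustration, not reproduced from any of the papers):

```python
# Illustrative sketch of few-shot in-context learning with a chain-of-thought
# trigger. Nothing is fine-tuned: the "learning" lives entirely in the prompt.
demonstrations = [
    ("Q: Alice has 3 apples and buys 2 more. How many does she have?",
     "A: She starts with 3 and adds 2, so 3 + 2 = 5. The answer is 5."),
    ("Q: A train travels 60 km per hour. How far does it go in 3 hours?",
     "A: It covers 60 km each hour, so 60 * 3 = 180. The answer is 180."),
]
question = "Q: Bob reads 20 pages a day. How many pages does he read in a week?"

prompt = "\n\n".join(f"{q}\n{a}" for q, a in demonstrations)
prompt += f"\n\n{question}\nA: Let's think step by step."  # zero-shot CoT trigger
print(prompt)  # send this string to any completion-style LLM API
```

The point these papers stress is that the model's weights never change: the demonstrations and the reasoning trigger condition it purely at inference time.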
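
Similarly, for the LoRA entry, a minimal sketch using Hugging Face's `peft` library (the base model `gpt2`, the target module `c_attn`, and all hyperparameters are illustrative choices, not values from the paper):

```python
# Minimal sketch: wrapping a pretrained model with LoRA adapters via `peft`.
# Requires `pip install transformers peft torch`.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small illustrative model
config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the low-rank update
    target_modules=["c_attn"],  # GPT-2's fused QKV attention projection
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # well under 1% of weights are trainable
```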