Parameter-efficient fine-tuning (PEFT): Adapters in PyTorch
Apply parameter-efficient fine-tuning (PEFT) in PyTorch using adapters! This hands-on project walks you through fine-tuning a transformer-based neural network with a bottleneck adapter, improving both training and storage efficiency. Upon completion, you will be able to incorporate adapters into pretrained models and weigh the advantages and disadvantages of different fine-tuning methods.
4.8 (12 Reviews)

Language
- English
Topic
- Artificial Intelligence
Enrollment Count
- 89
Skills You Will Learn
- Artificial Intelligence, Generative AI, Python, PyTorch, NLP, Deep Learning
Offered By
- IBMSkillsNetwork
Estimated Effort
- 45 minutes
Platform
- SkillsNetwork
Last Update
- December 6, 2025
A look at the project ahead
- Efficient training: During the training process, a significantly smaller number of weights must be updated. This leads to a more efficient training process compared to full fine-tuning.
- Efficient storage: The models can be stored compactly by only saving the weights for the adapter's layers and the output layer. This is because the weights in the original model, except for the output layer, remain unchanged.
- Reduced overfitting: Adapter-based PEFT techniques, which preserve the original weights, are less prone to overfitting. This is largely due to the fact that the adapted model retains a substantial part of the original model’s structure.
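The bottleneck adapter behind these benefits can be sketched in a few lines of PyTorch. This is a minimal illustration, not the project's exact code; the class name `BottleneckAdapter` and the dimensions are assumptions. The adapter projects a hidden representation down to a small bottleneck, applies a nonlinearity, projects back up, and adds the result to the input via a residual connection, so only the two small projections need training:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative Houlsby-style bottleneck adapter (names are assumptions)."""
    def __init__(self, dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)  # project to small bottleneck
        self.up = nn.Linear(bottleneck_dim, dim)    # project back to model dim
        self.act = nn.ReLU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual connection: the adapter learns only a small correction,
        # leaving the original representation (and frozen weights) intact.
        return h + self.up(self.act(self.down(h)))

adapter = BottleneckAdapter(dim=768, bottleneck_dim=16)
x = torch.randn(2, 768)
out = adapter(x)  # same shape as the input: (2, 768)
```

Because the bottleneck dimension is far smaller than the model dimension, the adapter adds only a tiny fraction of the original layer's parameters.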
Learning objectives
- Understand how adapters work
- Apply adapters to linear layers in a neural network
- Train a neural network in a parameter-efficient way by training just the adapted layers
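The third objective, training just the adapted layers, comes down to freezing the pretrained weights and letting gradients flow only through the adapter. A minimal sketch under assumed names and sizes (the `AdaptedLinear` wrapper and the 768/16 dimensions are illustrative, not from the project):

```python
import torch
import torch.nn as nn

class AdaptedLinear(nn.Module):
    """Illustrative wrapper: a frozen pretrained linear layer plus a trainable
    bottleneck adapter on its output (names and sizes are assumptions)."""
    def __init__(self, base: nn.Linear, bottleneck_dim: int):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        d = base.out_features
        self.down = nn.Linear(d, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.base(x)                     # frozen pretrained computation
        return h + self.up(torch.relu(self.down(h)))  # trainable correction

base = nn.Linear(768, 768)                   # stands in for a pretrained layer
layer = AdaptedLinear(base, bottleneck_dim=16)

trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
# Only the adapter's down/up projections receive gradients; here that is
# roughly 4% of the layer's parameters, which is what makes both training
# and storage efficient (only adapter weights need to be saved).
```

An optimizer built from `filter(lambda p: p.requires_grad, layer.parameters())` would then update only the adapter.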
What you'll need

Instructors
Wojciech "Victor" Fulmyk
Data Scientist at IBM
Wojciech "Victor" Fulmyk is a Data Scientist and AI Engineer on IBM’s Skills Network team, where he focuses on helping learners build expertise in data science, artificial intelligence, and machine learning. He is also a Kaggle competition expert, currently ranked in the top 3% globally among competition participants. An economist by training, he applies his knowledge of statistics and econometrics to bring a distinctive perspective to AI and ML—one that considers both technical depth and broader socioeconomic implications.
Joseph Santarcangelo
Senior Data Scientist at IBM
Joseph has a Ph.D. in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. He has worked for IBM since completing his Ph.D.
Ashutosh Sagar
Data Scientist
I am currently a Data Scientist at IBM with a Master’s degree in Computer Science from Dalhousie University. I specialize in natural language processing, particularly in semantic similarity search, and have a strong background in working with advanced AI models and technologies.