Parameter-efficient fine-tuning (PEFT): Adapters in PyTorch

Intermediate · Guided Project

Apply parameter-efficient fine-tuning (PEFT) in PyTorch using adapters! This hands-on project walks you through fine-tuning a transformer-based neural network using a bottleneck adapter that improves the efficiency of training and storage. Upon completion, you will have enhanced your skills in incorporating adapters and fine-tuning pre-existing models, and you will have gained insights into the advantages and disadvantages of different fine-tuning methods.

4.7 (11 Reviews)

Language

  • English

Topic

  • Artificial Intelligence

Enrollment Count

  • 70

Skills You Will Learn

  • Artificial Intelligence, Generative AI, Python, PyTorch, NLP, Deep Learning

Offered By

  • IBMSkillsNetwork

Estimated Effort

  • 45 minutes

Platform

  • SkillsNetwork

Last Update

  • October 15, 2025

About this Guided Project

A look at the project ahead

Adapter-based parameter-efficient fine-tuning (PEFT) techniques are widely used for fine-tuning neural networks because of their efficiency. Here’s why:
  1. Efficient training: During the training process, a significantly smaller number of weights must be updated. This leads to a more efficient training process compared to full fine-tuning.
  2. Efficient storage: The models can be stored compactly by only saving the weights for the adapter's layers and the output layer. This is because the weights in the original model, except for the output layer, remain unchanged.
  3. Reduced overfitting: Adapter-based PEFT techniques, which preserve the original weights, are less prone to overfitting. This is largely due to the fact that the adapted model retains a substantial part of the original model’s structure.
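The storage saving in point 2 can be sketched in a few lines of PyTorch. This is a minimal illustration, not the project's actual code: the module names (`base`, `adapter_down`, `adapter_up`) and sizes are assumptions chosen to show the idea of checkpointing only the adapter weights.

```python
import torch
import torch.nn as nn

# Toy stand-in for an adapted model: a frozen pretrained layer plus a
# small bottleneck adapter (names and sizes are illustrative)
model = nn.ModuleDict({
    "base": nn.Linear(128, 128),        # pretrained, left unchanged
    "adapter_down": nn.Linear(128, 8),  # trainable down-projection
    "adapter_up": nn.Linear(8, 128),    # trainable up-projection
})

# Checkpoint only the adapter weights; the base weights can always be
# restored from the original pretrained checkpoint.
adapter_state = {k: v for k, v in model.state_dict().items()
                 if k.startswith("adapter")}
full = sum(v.numel() for v in model.state_dict().values())
slim = sum(v.numel() for v in adapter_state.values())
print(f"saving {slim} of {full} parameters")
```

With these toy sizes, the adapter checkpoint holds only a small fraction of the model's parameters; the ratio shrinks further as the base model grows.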
In this hands-on project, you will gain an understanding of how adapters function by applying one to a transformer-based neural network. The adapter you will use, called a bottleneck adapter, includes a non-linear activation function, ensuring that the resulting model isn’t just a linear combination of the original model’s weights.
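The bottleneck pattern described above can be sketched as a small PyTorch module. This is an illustrative sketch, not the project's exact implementation: the class and attribute names, the default bottleneck size, and the choice of ReLU are assumptions.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Wraps a linear layer with a bottleneck adapter: the layer's output
    is down-projected, passed through a non-linearity, up-projected, and
    added back through a residual connection. (Illustrative sketch.)"""

    def __init__(self, linear: nn.Linear, bottleneck_dim: int = 16):
        super().__init__()
        self.linear = linear                 # original layer, left unchanged
        dim = linear.out_features
        self.down = nn.Linear(dim, bottleneck_dim)
        self.activation = nn.ReLU()          # the non-linearity keeps the result
                                             # from being a linear recombination
        self.up = nn.Linear(bottleneck_dim, dim)
        # Zero-init the up-projection so the adapted model initially
        # behaves exactly like the original model.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        h = self.linear(x)
        return h + self.up(self.activation(self.down(h)))
```

Because the up-projection starts at zero, wrapping a layer changes nothing until training updates the adapter, which is a common way to make adapter insertion safe.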

Learning objectives

Upon completion of this project, you will be able to:
  • Understand how adapters work
  • Apply adapters to linear layers in a neural network
  • Train a neural network in a parameter-efficient way by training just the adapted layers
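The third objective amounts to freezing the pretrained weights and handing the optimizer only the adapter (and output-layer) parameters. A minimal sketch, assuming a toy stand-in model; the layer sizes and naming are illustrative, not the lab's code:

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained network (sizes are illustrative)
model = nn.Sequential(
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),              # output layer
)

# One bottleneck adapter (down-project, ReLU, up-project) per hidden layer
adapters = nn.ModuleList(
    nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))
    for _ in range(2)
)

# Freeze every pretrained weight ...
for p in model.parameters():
    p.requires_grad = False
# ... except the output layer, which is trained alongside the adapters
for p in model[-1].parameters():
    p.requires_grad = True

# Hand the optimizer only the parameters that should be updated
trainable = list(adapters.parameters()) + \
            [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
print(sum(p.numel() for p in trainable), "trainable parameters")
```

Only the adapter and output-layer gradients are computed and applied, which is where the training-efficiency gain in point 1 above comes from.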

What you'll need

For this project, you need an intermediate level of proficiency in Python, PyTorch, and deep learning. The only equipment you need is a computer with a modern browser, such as the latest version of Chrome, Edge, Firefox, or Safari.

Instructors

Wojciech "Victor" Fulmyk

Data Scientist at IBM

I am a data scientist and economist with a strong background in econometrics, time series analysis, causal inference, and statistics. I stand out for my ability to combine technical expertise with clear communication, turning complex data findings into practical insights for stakeholders at every level. Follow my projects to learn about data science principles, machine learning algorithms, and artificial intelligence agents.

Joseph Santarcangelo

Senior Data Scientist at IBM

Joseph has a Ph.D. in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. He has been working for IBM since completing his Ph.D.

Ashutosh Sagar

Data Scientist

I am currently a Data Scientist at IBM with a Master’s degree in Computer Science from Dalhousie University. I specialize in natural language processing, particularly in semantic similarity search, and have a strong background in working with advanced AI models and technologies.
