Repository Status: Under Construction
We are actively organizing the codebase and reproduction scripts. Stay tuned—code will be available here soon!
ALPS (Attention Localization and Pruning Strategy) is a parameter-efficient fine-tuning (PEFT) method for efficiently aligning large language models: it identifies and prunes the attention heads that are less relevant to a given downstream task, reducing computational and memory overhead while maintaining or even improving model performance. The full paper and reference implementation will be linked here upon release.
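Until the reference implementation is released, here is a minimal, self-contained PyTorch sketch of the general idea: score each attention head on task data, then restrict gradient updates to the top-scoring ("localized") heads while pruning updates to the rest. The `ToyMHA` module, the norm-based `score_heads` relevance measure, and the `keep_ratio` parameter are illustrative assumptions for this sketch, not the paper's actual importance criterion or implementation.

```python
import torch
import torch.nn as nn

# Toy multi-head self-attention layer so the example is self-contained.
class ToyMHA(nn.Module):
    def __init__(self, d_model=64, n_heads=8):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        split = lambda t: t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        heads = attn @ v              # (B, n_heads, T, d_head)
        self._head_out = heads        # cache per-head outputs for scoring
        return self.out(heads.transpose(1, 2).reshape(B, T, D))


def score_heads(layer, batch):
    """Hypothetical relevance score: mean L2 norm of each head's output on
    task data. A stand-in for whatever importance measure ALPS actually uses."""
    with torch.no_grad():
        layer(batch)
        return layer._head_out.norm(dim=-1).mean(dim=(0, 2))  # (n_heads,)


def localize_and_prune(layer, batch, keep_ratio=0.25):
    """Keep the top-scoring heads trainable; mask gradients to the rest so
    only the localized heads are updated during alignment fine-tuning."""
    scores = score_heads(layer, batch)
    k = max(1, int(keep_ratio * layer.n_heads))
    keep = torch.topk(scores, k).indices
    mask = torch.zeros(layer.n_heads, layer.d_head)
    mask[keep] = 1.0
    grad_mask = mask.flatten()  # aligned with the concatenated head outputs

    # Zero gradients flowing into the output projection's columns that read
    # from pruned heads (only this projection is masked here, for brevity).
    layer.out.weight.register_hook(lambda g: g * grad_mask.unsqueeze(0))
    return keep


if __name__ == "__main__":
    layer = ToyMHA()
    x = torch.randn(4, 16, 64)
    kept = localize_and_prune(layer, x, keep_ratio=0.25)
    print("task-relevant heads kept trainable:", kept.tolist())
```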
Code and detailed instructions are on the way!