
Snehal Raj

@snehalraj.bsky.social

PhD student at Sorbonne University | Assoc. Staff Scientist at QC Ware | www.snehalraj.com

96 Followers  |  49 Following  |  10 Posts  |  Joined: 03.02.2025

Latest posts by snehalraj.bsky.social on Bluesky

Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters Fine-tuning pre-trained large foundation models for specific tasks has become increasingly challenging due to the computational and storage demands associated with full parameter updates. Parameter-Ef...

Check out the full paper for more details on the method, experimental setup, and analysis! arxiv.org/abs/2502.06916 We welcome your feedback and questions! Special mention to @brianc2095.bsky.social for his expert guidance and mentorship.

12.02.2025 14:57 — 👍 0    🔁 0    💬 0    📌 0

Future directions include exploring more complex architectures, further optimising adapter design, and investigating potential quantum speedups for compound matrix operations.

12.02.2025 14:57 — 👍 0    🔁 0    💬 1    📌 0

Our findings suggest Quantum-Inspired Adapters offer a promising direction for efficient adaptation of language and vision models in resource-constrained environments. The method's adaptability across different benchmarks underscores its generalisability.

12.02.2025 14:57 — 👍 0    🔁 0    💬 1    📌 0

We found that combining multiple Hamming-weight orders with orthogonality and matrix compounding is essential for performant fine-tuning. In particular, enforcing orthogonality is critical for the success of compound adapters.

12.02.2025 14:57 — 👍 0    🔁 0    💬 1    📌 0
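As a rough illustration of how an orthogonality constraint on adapter weights can be enforced during training, here is a minimal PyTorch sketch using the library's built-in orthogonal parametrization. This is not the paper's construction, and the 8x8 adapter size is an arbitrary placeholder.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

# Reparametrize a small trainable adapter so its weight matrix stays orthogonal
# throughout training (PyTorch maps an unconstrained tensor onto the orthogonal group).
adapter = nn.Linear(8, 8, bias=False)
adapter = orthogonal(adapter)  # constrains adapter.weight to be orthogonal

W = adapter.weight
print(torch.allclose(W @ W.T, torch.eye(8), atol=1e-5))  # True: the constraint holds
```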

VTAB results are also promising! Our method achieves performance comparable to LoRA with ≈ 13.6x fewer parameters. On some tasks, such as CIFAR100, accuracy increased significantly relative to other methods.

12.02.2025 14:57 — 👍 0    🔁 0    💬 1    📌 0

On GLUE, we achieved 99.2% of LoRA's performance with a 44x parameter compression. Compared to OFT/BOFT, we achieved 98% relative performance with 25x fewer parameters.

12.02.2025 14:57 — 👍 0    🔁 0    💬 1    📌 0

We tested our adapters on the GLUE and VTAB benchmarks. Results show that our method achieves competitive performance with significantly fewer trainable parameters than LoRA, OFT, and BOFT.

12.02.2025 14:57 — 👍 0    🔁 0    💬 1    📌 0

Our approach draws inspiration from Hamming-weight preserving quantum circuits to create parameter-efficient adapters that operate in a combinatorially large space while preserving orthogonality in weight parameters.

12.02.2025 14:57 — 👍 0    🔁 0    💬 1    📌 0
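For intuition about "a combinatorially large space while preserving orthogonality", here is a small NumPy sketch of the compound-matrix idea associated with Hamming-weight preserving circuits: the k-th compound of an orthogonal n x n matrix (the matrix of its k x k minors) is itself orthogonal and acts on a C(n, k)-dimensional space. This is an illustrative toy, not the paper's implementation; the 6 x 6 size and k = 2 are arbitrary choices.

```python
from itertools import combinations
import numpy as np

def kth_compound(A: np.ndarray, k: int) -> np.ndarray:
    """Matrix of all k x k minors of A, indexed by k-element row/column subsets."""
    n = A.shape[0]
    subsets = list(combinations(range(n), k))
    C = np.empty((len(subsets), len(subsets)))
    for i, rows in enumerate(subsets):
        for j, cols in enumerate(subsets):
            C[i, j] = np.linalg.det(A[np.ix_(rows, cols)])  # k x k minor
    return C

Q, _ = np.linalg.qr(np.random.randn(6, 6))  # a random 6 x 6 orthogonal matrix
C2 = kth_compound(Q, 2)                     # acts on a C(6, 2) = 15-dimensional space
print(np.allclose(C2 @ C2.T, np.eye(C2.shape[0])))  # True: orthogonality is preserved
```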

Fine-tuning large models is computationally expensive. This challenge has spurred interest in parameter-efficient methods like LoRA, which aim to adapt large foundation models to new tasks by updating only a small subset of parameters or introducing lightweight adaptation modules.

12.02.2025 14:57 — 👍 0    🔁 0    💬 1    📌 0
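For readers unfamiliar with the adapter methods mentioned above, here is a minimal LoRA-style sketch in PyTorch: the pre-trained weight is frozen and only a small low-rank update is trained. This is a generic illustration, not code from the paper; the layer size, rank, and scaling are placeholder values.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update (W + scale * B A)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only the two low-rank factors train (2 * 8 * 768 = 12288 parameters)
```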
Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters Fine-tuning pre-trained large foundation models for specific tasks has become increasingly challenging due to the computational and storage demands associated with full parameter updates. Parameter-Ef...

Our work, "Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters", is now on arXiv! scirate.com/arxiv/2502.0... Our methods can compress large models by up to 44x with minimal performance loss.

12.02.2025 14:57 — 👍 1    🔁 0    💬 1    📌 1
