LoRA, or Low-Rank Adaptation, has become something of a buzzword recently. We’ve heard LoRA thrown around in conversation frequently (sometimes where it doesn’t even make sense!), so here’s a short explainer. The LoRA paper actually predates the current LLM craze by over a year.
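To ground the idea before diving in: LoRA freezes a pretrained weight matrix and learns a low-rank additive update instead. The sketch below is illustrative only; the dimensions, variable names, and initialization follow the spirit of the LoRA paper (B initialized to zero so training starts from the pretrained behavior), not any particular library's API.

```python
import numpy as np

# Minimal sketch of the LoRA idea: instead of updating a large frozen
# weight matrix W (d_out x d_in), learn a low-rank update B @ A, where
# B is (d_out x r) and A is (r x d_in) with r much smaller than d_out, d_in.
d_out, d_in, r = 64, 128, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable rank-r factor
B = np.zeros((d_out, r))                 # trainable, initialized to zero

# Forward pass: the adapted weight is W + B @ A. With B = 0 at init,
# the output matches the pretrained model exactly.
x = rng.standard_normal(d_in)
y = (W + B @ A) @ x

# Why this is cheap: compare trainable parameter counts.
full_params = d_out * d_in           # full fine-tuning updates all of W
lora_params = r * (d_out + d_in)     # LoRA only trains A and B
```

With these toy dimensions, LoRA trains 768 parameters instead of 8,192, and the gap widens dramatically at transformer scale, where d_out and d_in are in the thousands.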