# QLoRA: Efficient Finetuning of Quantized LLMs
| [Paper](https://arxiv.org/abs/2305.14314) | [Adapter Weights](https://huggingface.co/timdettmers) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
- Can you train StableLM with this? Yes, but only with a single GPU at the moment (see the sketch below). Multi-GPU support is coming soon; it is just waiting on this [PR](https://github.com/huggingface/transformers/pull/22874).
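As a rough illustration of what such a single-GPU run involves, the sketch below loads a StableLM checkpoint in 4-bit NF4 via `bitsandbytes` and attaches LoRA adapters with `peft`. The checkpoint name, LoRA hyperparameters, and target modules are illustrative assumptions, not this repo's exact training configuration.

```python
# Minimal sketch of QLoRA-style finetuning setup on a single GPU.
# Model name and LoRA hyperparameters below are placeholders, not the repo's settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "stabilityai/stablelm-base-alpha-7b"  # example StableLM checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # 4-bit NormalFloat quantization
    bnb_4bit_use_double_quant=True,        # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16, # compute in bf16 while weights stay 4-bit
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map={"": 0},  # keep everything on a single GPU
)

# Cast norms/embeddings appropriately and enable gradient checkpointing for k-bit training.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection in GPT-NeoX-style StableLM
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

From here the model can be passed to a standard `transformers` `Trainer` or a custom training loop; only the small adapter weights are updated and saved.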
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone.

Everyone is welcome to contribute, and we value everybody's contribution. Code contributions are not the only way to help the community: answering questions, helping others, and improving the documentation are also immensely valuable.
Copyright 2023 The HuggingFace Team. All rights reserved.