<img src="assets/logo.png" alt="Stanford-Alpaca" style="width: 50%; min-width: 300px; display: block; margin: auto;">
To enable more open-source research on instruction-following large language models, we generate 52K instruction-following demonstrations using OpenAI's text-davinci-003 model.
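As a rough illustration of this kind of data collection, the sketch below prompts text-davinci-003 for a new demonstration using the legacy `openai<1.0` Python SDK. The prompt template, seed instructions, and sampling parameters are assumptions for illustration only, not the exact pipeline behind the 52K dataset.

```python
# Minimal sketch: generate one instruction-following demonstration with the
# legacy openai (<1.0) SDK. Prompt template and parameters are illustrative
# assumptions, not the actual data-generation pipeline.
import json

import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment

SEED_INSTRUCTIONS = [
    "Give three tips for staying healthy.",
    "Rewrite the following sentence in a formal tone.",
]

PROMPT_TEMPLATE = (
    "You are asked to come up with a new task instruction and its answer.\n"
    "Here are some example instructions:\n{examples}\n\n"
    "New instruction and answer:"
)


def generate_demonstration() -> str:
    """Query text-davinci-003 for one new instruction-following demonstration."""
    prompt = PROMPT_TEMPLATE.format(examples="\n".join(SEED_INSTRUCTIONS))
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=512,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()


if __name__ == "__main__":
    print(json.dumps({"generated": generate_demonstration()}, ensure_ascii=False, indent=2))
```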
**Organization developing the model**
[GitHub stars](https://github.com/hiyouga/LLaMA-Factory/stargazers)
<a href="README_CN.md">中文</a>  |  <a href="README.md">English</a>  |  日本語 |  <a href="README_FR.md">Français</a> |  <a href="README_ES.md">Español</a>
Large language models have recently attracted a great deal of attention.
Qwen-7B tokenizes UTF-8 bytes with BPE via the `tiktoken` package.
> Note: there is no consensus Chinese equivalent for the term "tokenization", so this document uses the English term for clarity.
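For illustration, here is a minimal sketch of byte-level BPE encoding with `tiktoken`. The `cl100k_base` encoding below is a stock tiktoken vocabulary chosen only as an example; it is not Qwen-7B's own BPE vocabulary, which ships with the model.

```python
# Minimal sketch: byte-level BPE with the tiktoken package.
# NOTE: "cl100k_base" is a stock tiktoken encoding used here only as an
# illustration; Qwen-7B ships its own BPE vocabulary.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Hello, Qwen! 你好"
token_ids = enc.encode(text)     # UTF-8 bytes -> BPE token ids
decoded = enc.decode(token_ids)  # token ids -> original string

print(token_ids)
print(decoded == text)  # True: BPE over UTF-8 bytes round-trips losslessly
```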
Flash attention is an optional way to accelerate training and inference. It is supported only on NVIDIA GPUs with the Turing, Ampere, Ada, or Hopper architectures (e.g., H100, A100, RTX 3090, T4, RTX 2080). **You can use our models without installing it.**
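As a rough sketch, the snippet below opts into flash attention through Hugging Face Transformers only when the `flash-attn` package is installed, and otherwise falls back to the default implementation. The `attn_implementation` argument assumes a recent Transformers release (4.36+), the model id is a placeholder, and whether the argument is honored depends on the checkpoint's modeling code.

```python
# Sketch: enable flash attention when available, otherwise fall back.
# Assumes transformers>=4.36; the model id is a placeholder checkpoint.
import importlib.util

import torch
from transformers import AutoModelForCausalLM

model_id = "Qwen/Qwen-7B"  # placeholder checkpoint id

# flash-attn is optional: request it only if the package is actually installed.
has_flash_attn = importlib.util.find_spec("flash_attn") is not None

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2" if has_flash_attn else "eager",
)
```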
<a href="README_CN.md">中文</a>  |  English  |  <a href="README_JA.md">日本語</a> |  <a href="README_FR.md">Français</a> |  <a href="README_ES.md">Español</a>