ChatGLM3
By default, the model is loaded in FP16 precision, and running the code above requires about 13 GB of GPU memory. If your GPU memory is limited, you can try loading the model in quantized form, as follows:
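The original loading snippet is not reproduced here, but the memory figures can be sanity-checked with simple arithmetic. A minimal sketch, assuming a round figure of 6.2 billion parameters for a 6B-class model and counting weights only (activations and the KV cache add a few more GB, which is why the FP16 total lands near 13 GB):

```python
def weight_footprint_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed for the model weights alone, in GiB."""
    return n_params * bits_per_param / 8 / 1024**3

# Assumed parameter count for a 6B-class model (illustrative round figure).
N_PARAMS = 6.2e9

fp16_gb = weight_footprint_gb(N_PARAMS, 16)  # ~11.5 GiB for weights alone
int4_gb = weight_footprint_gb(N_PARAMS, 4)   # ~2.9 GiB after 4-bit quantization

print(f"FP16 weights: ~{fp16_gb:.1f} GiB, INT4 weights: ~{int4_gb:.1f} GiB")
```

This is why 4-bit quantization brings the model within reach of consumer GPUs: the weight footprint drops to roughly a quarter of the FP16 size.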
**Loading the quantized model directly on a Mac produces the error `clang: error: unsupported option '-fopenmp'`**
Open-source projects that accelerate or re-implement ChatGLM:
**[2023/05/17]** Released [VisualGLM-6B](https://github.com/THUDM/VisualGLM-6B), a multimodal conversational language model that supports image understanding.
# "Method 1 (Linux only; very convenient, but unfortunately not supported on Windows)": the container shares the host's network; this is the default configuration
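The comment above describes Docker's host networking mode, in which the container shares the host's network stack instead of getting its own. A minimal docker-compose sketch, with the service and image names as placeholders:

```yaml
# docker-compose.yml sketch (service and image names are hypothetical)
services:
  gpt_academic:
    image: example/gpt_academic:latest  # placeholder image
    network_mode: host  # share the host network stack; Linux only
```

With `network_mode: host`, ports exposed by the application are reachable directly on the host, so no `ports:` mapping is needed.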

<div align="center">
<h1 align="center"> PeterCat</h1>