# shellChatGPT
Shell wrapper for OpenAI's ChatGPT, STT (Whisper), and TTS. Features LocalAI, Ollama, Gemini, Mistral, and more service providers.


![Showing off Chat Completions](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chat_cpls.gif)

Chat completions with streaming enabled by default.

<details>
  <summary>Expand Markdown Processing</summary>

Markdown processing of responses has been triggered automatically for some time now!

![Chat with Markdown rendering](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chat_cpls_md.gif)

Markdown rendering of the chat response (_optional_).
</details>

<details>
  <summary>Expand Text Completions</summary>

![Plain Text Completions](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/text_cpls.gif)

In pure text completions, start by typing some text that is going to be completed, such as news, stories, or poems.
</details>

<details>
  <summary>Expand Insert Mode</summary>

![Insert Text Completions](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/text_insert.gif)

Add the insert tag `[insert]` where the text should be completed.
Mistral `code models` work well with the insert / fill-in-the-middle (FIM) mode!
If no suffix is provided, it works as plain text completions.
</details>


## Index

<details>
  <summary>★ Click to expand! ★</summary>

- 1. [Features](#-features)
- 2. [Project Status](#project-status)
- 3. [Getting Started](#-getting-started)
  - 3.1 [Required Packages](#-required-packages)
  - 3.2 [Optional Packages](#optional-packages)
  - 3.3 [Installation](#-installation)
  - 3.4 [Usage Examples](#-usage-examples-)
- 4. [Script Operating Modes](#script-operating-modes)
- 5. [Native Chat Completions](#-native-chat-completions)
  - 5.1 [Reasoning and Thinking Models](#reasoning-and-thinking-models)
  - 5.2 [Vision and Multimodal Models](#vision-and-multimodal-models)
  - 5.3 [Text, PDF, Doc, and URL Dumps](#text-pdf-doc-and-url-dumps)
  - 5.4 [File Picker and Shell Dump](#file-picker-and-shell-dump)
  - 5.5 [Voice In and Out + Chat Completions](#voice-in-and-out--chat-completions)
  - 5.6 [Audio Models](#audio-models)
- 6. [Chat Mode of Text Completions](#chat-mode-of-text-completions)
- 7. [Text Completions](#-text-completions)
  - 7.1 [Insert Mode of Text Completions](#insert-mode-of-text-completions)
- 8. [Responses API](#responses-api)
- 9. [Markdown](#markdown)
- 10. [Prompts](#-prompts)
  - 10.1 [Instruction Prompt](#instruction-prompt)
  - 10.2 [Custom Prompts](#-custom-prompts)
  - 10.3 [Awesome Prompts](#-awesome-prompts)
- 11. [Shell Completion](#shell-completion)
  - 11.1 [Bash](#bash)
  - 11.2 [Zsh](#zsh)
  - 11.3 [Shell Troubleshoot](#shell-troubleshoot)
- 12. [Speech Transcriptions / Translations](#-speech-transcriptions--translations)
- 13. [Service Providers](#service-providers)
  - 13.1 [LocalAI](#localai)
    - 13.1.1 [LocalAI Server](#localai-server)
    - 13.1.2 [LocalAI Tips](#localai-tips)
    - 13.1.3 [Running the shell wrapper](#running-the-shell-wrapper)
    - 13.1.4 [Installing Models](#installing-models)
    - 13.1.5 [Host API Configuration](#base-url-configuration)
    - 13.1.6 [OpenAI Web Search](#openai-web-search)
  - 13.2 [Ollama](#ollama)
  - 13.3 [Google AI](#google-ai)
    - 13.3.1 [Google Search](#google-search)
  - 13.4 [Mistral AI](#mistral-ai)
  - 13.5 [Groq](#groq)
    - 13.5.1 [Groq Whisper](#groq-whisper-stt)
    - 13.5.2 [Groq TTS](#groq-tts)
  - 13.6 [Anthropic](#anthropic)
    - 13.6.1 [Anthropic Web Search](#anthropic-web-search)
  - 13.7 [GitHub Models](#github-models)
  - 13.8 [Novita AI](#novita-ai)
  - 13.9 [OpenRouter API](#openrouter-api)
  - 13.10 [xAI](#xai)
  - 13.11 [DeepSeek](#deepseek)
- 14. [Arch Linux Users](#arch-linux-users)
- 15. [Termux Users](#termux-users)
  - 15.1 [Dependencies](#dependencies-termux)
  - 15.2 [TTS Chat - Removal of Markdown](#tts-chat---removal-of-markdown)
  - 15.3 [Tiktoken](#tiktoken)
  - 15.4 [Termux Troubleshoot](#termux-troubleshoot)
- 16. [Troubleshoot](#troubleshoot)
- 17. [Notes and Tips](#-notes-and-tips)
- 18. [Project Objectives](#--project-objectives)
- 19. [Roadmap](#roadmap)
- 20. [Limitations](#%EF%B8%8F-limitations)
- 21. [Bug report](#bug-report)
- 22. [Help Pages](#-help-pages)
- 23. [Contributors](#-contributors)
- 24. [Acknowledgements](#acknowledgements)

<!-- - 9. [Local Cache Structure](#cache-structure) (prompts, sessions, and history files) -->
<!--
- 13. [Image Generations](#%EF%B8%8F-image-generations)
- 14. [Image Variations](#image-variations)
- 15. [Image Edits](#image-edits)
  - 15.1 [Outpaint - Canvas Extension](#outpaint---canvas-extension)
  - 15.2 [Inpaint - Fill in the Gaps](#inpaint---fill-in-the-gaps)

    - 17.9.1 [xAI Live Search](#xai-live-search)
    - 17.9.2 [xAI Image Generation](#xai-image-generation)
  -->

</details>


## 🚀 Features

- Native chat completions, plain text completions, and Responses API (text)
- [Vision](#vision-and-multimodal-models), **reasoning**, and [**audio models**](#audio-models)
- **Voice-in** (Whisper) plus **voice-out** (TTS) [_chatting mode_](#voice-in-and-out--chat-completions) (`options -cczw`)
- **Text editor interface**, _Bash readline_, and _multiline/cat_ modes
- [**Markdown rendering**](#markdown) support in responses
- Easily [**regenerate responses**](#--notes-and-tips)
- **Manage sessions**, _print out_ previous sessions
- Set [Custom Instruction prompts](#%EF%B8%8F--custom-prompts)
- Integration with [various service providers](#service-providers) and [custom BaseUrl](#base-url-configuration)
- Support for [awesome-chatgpt-prompts](#-awesome-prompts) & the
   [Chinese variant](https://github.com/PlexPt/awesome-chatgpt-prompts-zh)
- Stdin and text file input support
- Should™ work on Linux, FreeBSD, MacOS, and [Termux](#termux-users)
- **Fast** shell code for a responsive experience! ⚡️ 

<!-- _Tiktoken_ for accurate tokenization (optional) -->
<!-- _Follow up_ conversations, --> <!-- _continue_ from last session, -->
<!-- - Write _multiline_ prompts, flush with \<ctrl-d> (optional), bracketed paste in bash -->
<!-- - Insert mode of text completions -->
<!--
- [Command line completion](#shell-completion) and [file picker](#file-picker-and-shell-dump) dialogs for a smoother experience 💻
- Colour scheme personalisation 🎨 and user configuration file
-->

<!--
### More Features

- [_Generate images_](#%EF%B8%8F-image-generations)
   from text input (`option -i`)
- [_Generate variations_](#image-variations) of images
- [_Edit images_](#image-edits),
   optionally edit with `ImageMagick` (generate alpha mask)
- [_Transcribe audio_](#-audio-transcriptions-translations)
   from various languages (`option -w`)
- _Translate audio_ into English text (`option -W`)
- _Text-to-speech_ functionality (`option -z`)

-->


## Project Status

Development is ongoing, with an emphasis on improving stability and
addressing bugs, rather than new features, in 2026. <!-- In fact, we may **remove some features**. -->
It is considered _mostly feature‑complete_ for my personal use-cases.

Check the [Troubleshoot section](#troubleshoot) for information on how
to work with newer models and [different API providers](#service-providers).

Refer to the [Roadmap section](#roadmap) and [Limitations section](#%EF%B8%8F-limitations)
for the original objectives of our project.

<!--
## PROJECT STATUS: MAINTENANCE MODE

This project is now in maintenance mode. It is considered feature‑complete for my routine and personal use-cases.

- **Bug Fixes:** Only critical bugs affecting core functionality will be addressed.
- **No New Features:** Development of new features has ceased. This includes (but is not limited to) _reasoning modes_ (`--think`, `--effort`), _auto‑detection_ of model capabilities (`--vision`, `--audio`).
- **Defaults:** No updates to default model names for each service provider, new TTS voice name completions or checks.
- **Documentation:** We plan to leave strong documentation of the software.
-->


## ✨ Getting Started


### ✔️ Required Packages

- `Bash` and `readline`
- `cURL` and `JQ`


### Optional Packages

Packages required for specific features.

<details>
  <summary>Click to expand!</summary>

- `Base64` - Image input in vision models
- `Python` - Modules tiktoken, markdown, bs4
- `SoX`/`Arecord`/`FFmpeg` - Record input (STT, Whisper)
- `mpv`/`SoX`/`Vlc`/`FFplay`/`afplay` - Play TTS output
- `xdg-open`/`open`/`xsel`/`xclip`/`pbcopy` - Open files, set clipboard
- `W3M`/`Lynx`/`ELinks`/`Links` - Dump URL text
- `bat`/`Pygmentize`/`Glow`/`mdcat`/`mdless` - Markdown support
- `termux-api`/`termux-tools`/`play-audio` - Termux system
- `poppler`/`gs`/`abiword`/`ebook-convert`/`LibreOffice` - Dump PDF or Doc as text
- `dialog`/`kdialog`/`zenity`/`osascript`/`termux-dialog` - File picker
- `yt-dlp` - Dump YouTube captions

<!--
- `ImageMagick`/`fbida` - Image edits and variations
-->
</details>


### 💾 Installation

**A.** Download the stand-alone
[`chatgpt.sh` script](https://gitlab.com/fenixdragao/shellchatgpt/-/raw/main/chatgpt.sh)
and make it executable:

    wget https://gitlab.com/fenixdragao/shellchatgpt/-/raw/main/chatgpt.sh

    chmod +x ./chatgpt.sh


**B.** Or clone this repo:

    git clone https://gitlab.com/fenixdragao/shellchatgpt.git


**C.** Optionally, download and set the configuration file
[`~/.chatgpt.conf`](https://gitlab.com/fenixdragao/shellchatgpt/-/raw/main/.chatgpt.conf):

    #save configuration template:
    chatgpt.sh -FF  >> ~/.chatgpt.conf

    #edit:
    chatgpt.sh -F

    # Or
    nano ~/.chatgpt.conf


<!--
### 🔥 Usage

- Set your [OpenAI GPTChat key](https://platform.openai.com/account/api-keys)
   with the environment variable `$OPENAI_API_KEY`, or set `option --api-key [KEY]`, or set the configuration file.
- Just write your prompt as positional arguments after setting options!
- Chat mode may be configured with Instruction or not.
- Set temperature value with `-t [VAL]` (0.0 to 2.0).
- To set your model, set `option -m [MODEL_NAME]` or `option -mm` for a model picker dialogue.
- Run `chatgpt.sh -l` to list API provider models.
- Some models require a single `prompt` while others `instruction` and `input` prompts.
- To generate images, set `option -i` and write your prompt.
- Make a variation of an image, set -i and an image path for upload.
-->


## Script Operating Modes

The `chatgpt.sh` script can be run in various modes by setting
**command-line options** at invocation. These are summarised below.
<!-- Table Overview -->


| Option | Description                                                                          |
|--------|--------------------------------------------------------------------------------------|
| `-b`   | [Responses API](#responses-api) / single-turn                                        |
| `-bb`  | [Responses API](#responses-api) / multi-turn                                         |
| `-c`   | [Chat Completions (Native)](#--native-chat-completions) / multi-turn                 |
| `-cd`  | [Text Chat Completions](#chat-mode-of-text-completions) / multi-turn                 |
| `-d`   | Text Completions / single-turn                                                       |
| `-dd`  | Text Completions / multi-turn                                                        |
| `-q`   | [Text Completions Insert Mode](#insert-mode-of-text-completions) (FIM) / single-turn |
| `-qq`  | Text Completions Insert Mode (FIM) / multi-turn                                      |


| Option  | Description  (all multi-turn)                                                   |
|---------|---------------------------------------------------------------------------------|
| `-cw`   | Chat Completions + voice-in                                                     |
| `-cwz`  | [Chat Completions + voice-in + voice-out](#voice-in-and-out--chat-completions)  |
| `-cdw`  | Text Chat Completions + voice-in                                                |
| `-cdwz` | Text Chat Completions + voice-in + voice-out                                    |

<!--
| `-bbw`  | Responses API + voice-in                                                        |
| `-bbwz` | Responses API + voice-in + voice-out                                            |
-->


| Option | Description   (independent modes)                                   |
|--------|---------------------------------------------------------------------|
| `-w`   | [Speech-To-Text](#-speech-transcriptions--translations) (Whisper)   |
| `-W`   | Speech-To-Text (Whisper), translation to English                    |
| `-z`   | [Text-To-Speech](man/README.md#text-to-voice-tts) (TTS), text input |

<!-- | `-i`   | [Image generation and editing](#%EF%B8%8F-image-generations)        | -->


## 🔥 Usage Examples 🔥

![Chat cmpls with prompt confirmation](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chat_cpls_verb.gif)


## 💬 Native Chat Completions

With command line `option -c`, some properties are set automatically to create a chat bot.
Start a new session in chat mode, and set a different temperature:

    chatgpt.sh -c -t0.7


Change the **maximum response length** to 4k tokens:

    chatgpt.sh -c -4000

    chatgpt.sh -c -M 4000


And change the **model token capacity** to 200k tokens:

    chatgpt.sh -c -N 200000


Create **Marv, the sarcastic bot**:

    chatgpt.sh -512 -c --frequency-penalty=0.7 --temp=0.8 --top_p=0.4 --restart-seq='\nYou: ' --start-seq='\nMarv:' --stop='You:' --stop='Marv:' -S'Marv is a factual chatbot that reluctantly answers questions with sarcastic responses.'

<!--
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]}
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]}
-->
<!-- https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset -->

**Tip:** Easily set runtime options with chat command `!conf`!


Load the *custom-made unix* **instruction file** ("unix.pr") for a new session.
The command line syntaxes below are all equivalent:


    chatgpt.sh -c .unix

    chatgpt.sh -c.unix

    chatgpt.sh -c -.unix

    chatgpt.sh -c -S .unix

**NOTE:** In this case, the custom prompt will be loaded, and the history will be recorded in the corresponding "_unix.tsv_" file at the cache directory.

To **change the history file** in which the session will be recorded,
set the first positional argument in the command line with the operator forward slash "`/`"
and the name of the history file (this executes the `/session` command).


    chatgpt.sh -c /test

    chatgpt.sh -c /stest

    chatgpt.sh -c "/session test"


<!--
The command below starts a chat session, loads the "unix" instruction, and changes to the default "chatgpt.tsv" history.


    chatgpt.sh -c.unix /current

    chatgpt.sh -c -S ".unix" /session current
-->


There is a **shortcut to load an older session** from the default (or current)
history file. This opens a basic interactive interface.

    chatgpt.sh -c .

<!--
    chatgpt.sh -c /sub

    chatgpt.sh -c /.

    chatgpt.sh -c /fork.

    chatgpt.sh -c "/fork current"
-->

Technically, this copies an old session from the target history file to its tail, so the session can be resumed.

<!--
In chat mode, simply run `!sub` or the equivalent command `!fork current`.
-->

<!--
To load an old session from a specific history,
there are some options. -->

In order to grep for sessions with a regex, it is easier to enter chat mode
and then type in the chat command `/grep [regex]`.
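
For instance, a hypothetical lookup from within chat mode (the regex is illustrative):

    chatgpt.sh -c

    [...]
    Q: /grep holiday

This should load the matching session so it can be resumed.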

<!--
To only change to a specific history file, run command `!session [name]`. -->

<!--
To copy a previous session to the tail of the current history file, run `/sub` or `/grep [regex]` to load that session and resume from it.
-->

<!--
Optionally `!fork` the older session to the active session.

Or, `!copy [origin] [dest]` the session from a history file to the current one
or any other history file.

In these cases, a pickup interface should open to let the user choose
the correct session from the history file.
-->


Print out the last session, optionally setting the history name:

    chatgpt.sh -P

    chatgpt.sh -P /test


<!-- Mind that `option -P` heads `-cdrR`! -->

<!-- The same as `chatgpt.sh -HH` -->


### Reasoning and Thinking Models

Some of our server integrations do not make a distinct separation
between reasoning and the actual answer. This is unfortunate, because
both are printed out without any visible separation, making it hard to
tell what is thinking and what is the final response!

This is mostly due to a limitation in how we use JQ to process the JSON
response from the APIs in the fastest way possible.

For thinking activation in Anthropic's `claude-3-7-sonnet` hybrid model,
the user must specify either the `--think [NUM]` or `--effort [NUM]`
command line option. To activate thinking during chat, use the
`!think`, `/think`, or `/effort` commands.
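
As an illustration, a command sketch (this assumes the Anthropic provider is configured; the thinking budget value is a placeholder):

    chatgpt.sh -c -m claude-3-7-sonnet --think 16000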


### Vision and Multimodal Models

To send an `image` / `url` to vision models, start the script interactively
and set the image with the `!img` chat command.

Alternatively, set the image paths / URLs at the end of the prompt:


<!--
    chatgpt.sh -c -m gpt-4-vision-preview '!img path/to/image.jpg'
    -->

    chatgpt.sh -c -m gpt-4-vision-preview

    [...]
    Q: !img  https://i.imgur.com/wpXKyRo.jpeg

    Q: What can you see?  https://i.imgur.com/wpXKyRo.jpeg


**TIP:** Run command `!info` to check configuration!

**DEBUG:** Set `option -V` to see the raw JSON request body.


### Text, PDF, Doc, and URL Dumps

For an easy workflow, the user may add a filepath or text URL at the end
of the prompt. The file is then read and its text content appended
to the user prompt.
This is a basic text feature that works with any model.

    chatgpt.sh -c

    [...]
    Q: What is this page: https://example.com

    Q: Help me study this paper. ~/Downloads/Prigogine\ Perspective\ on\ Nature.pdf


In the **second example** above, the _PDF_ will be dumped as text.

For PDF text dump support, `poppler/abiword` is required.
For _doc_ and _odt_ files, `LibreOffice` is required.
See the [Optional Packages](#optional-packages) section.

Also note that file paths containing white spaces must be
**backslash-escaped**, or the filepath must be preceded by a pipe `|` character.

    My text prompt. | path/to the file.jpg


Multiple images and audio files may be appended to the prompt in this way!
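
For instance, a hypothetical turn with two image paths appended (the file names are illustrative):

    Q: Compare these two images. ~/pics/one.jpg ~/pics/two.jpg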


### File Picker and Shell Dump

The `/pick` command opens a file picker (command-line or GUI
file manager). The selected file's path is then appended to the
current prompt.

The `/pick` and `/sh` commands may be run when typed at the end of
the current prompt, such as `[PROMPT] /pick`.

When the `/sh` command is run at the end of the prompt, a new
shell instance is opened to execute commands interactively.
The command dumps are appended to the current prompt.

_File paths_ that contain white spaces need backslash-escaping
in some functions.
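
A hypothetical shell-dump turn may look like this:

    chatgpt.sh -c

    [...]
    Q: Explain what this output means. /sh

The command dump from the new shell should then be appended to the prompt above.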


### Voice In and Out + Chat Completions

🗣️ Chat completion with speech in and out (STT plus TTS):

    chatgpt.sh -cwz       #native chat completions
    
    chatgpt.sh -cdwz      #text chat completions


Chat in Portuguese with voice-in and set _onyx_ as the TTS voice-out:

    chatgpt.sh -cwz -- pt -- onyx


**Chat mode** provides a conversational experience,
prompting the user to confirm each step.

For a more automated execution, set `option -v`,
or `-vv` for a hands-free experience (_live chat_ with silence detection),
such as:

    chatgpt.sh -c -w -z -v

    chatgpt.sh -c -w -z -vv


### Audio Models

Audio models, such as `gpt-4o-audio`, deal with audio input and output directly, thus reducing latency in a conversation turn.

To activate the microphone recording function of the script, set command line `option -w`.

Otherwise, the audio model accepts any compatible audio file (such as **mp3**, **wav**, and **opus**).
These files can be appended at the very end of the user prompt
or added with the chat command `/audio  path/to/file.mp3`.
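
For example, loading an audio file at the end of the prompt (the file path is illustrative):

    chatgpt.sh -c -m gpt-4o-audio-preview

    [...]
    Q: Transcribe and summarise this recording. ~/audio/meeting.mp3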

To activate the audio output mode of an audio model, set command line `option -z` to make sure the speech synthesis function is enabled!

    chatgpt.sh -c -w -z -vv -m "gpt-4o-audio-preview"


Mind that this _does not_ implement the _realtime models_.


## Chat Mode of Text Completions

When text completions are set for chatting with `options -cd` or `--text-chat`,
some properties are configured automatically to instruct the bot.


    chatgpt.sh -cd "Hello there! What is your name?"


<!-- **TIP**: Set _-vv_ to have auto sleep for reading time of last response,
and less verbose in voice input chat! _Only without option -z!_ -->


## 📜 Text Completions

This is the pure text completions endpoint. It is typically used to
complete input text, such as for completing part of an essay.

To complete text from the command line input prompt, either
set `option -d` or set a text completion model name.

    chatgpt.sh -128 -m gpt-3.5-turbo-instruct "Hello there! Your name is"

    chatgpt.sh -128 -d "The journalist loo"

The above examples also set the maximum response length to 128 tokens.

Enter single-turn interactive mode:

    chatgpt.sh -d


**NOTE:** For multi-turn mode with history support, set `option -dd`.


A strong Instruction prompt may be needed for the language model to do what is required.

Set an instruction prompt for better results:

    chatgpt.sh -d -S 'The following is a newspaper article.' "It all starts when FBI agents arrived at the governor house and"

    chatgpt.sh -d -S'You are an AI assistant.'  "The list below contains the 10 biggest cities in the w"


### Insert Mode of Text Completions

Set `option -q` (or `-qq` for multiturn) to enable insert mode and add the
string `[insert]` where the model should insert text:

    chatgpt.sh -q 'It was raining when [insert] tomorrow.'


**NOTE:** This example works with _no instruction_ prompt!
An instruction prompt in this mode may interfere with insert completions.

**NOTE:** [Insert mode](https://openai.com/blog/gpt-3-edit-insert)
works with `instruct` models.
<!-- `davinci`, `text-davinci-002`, `text-davinci-003`, and the newer -->

Mistral AI has a nice FIM (fill-in-the-middle) endpoint that works
with `code` models and is really good! See the sketch below.
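
A minimal FIM sketch (the provider flag and model name are assumptions; adjust to your setup):

    chatgpt.sh -q --mistral -m codestral-latest 'def fib(n):[insert]    return a'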


## Responses API

The Responses API is a superset of the Chat Completions API. Set command
line `option -b` (with `-c`), or set `options -bb` for multiturn.

To activate it during multiturn chat, run `/responses [model]`,
where _model_ is the name of a model that works with the Responses API.
This is aliased to `/resp [model]` and `-b [model]`, and can be toggled.
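
For instance, a minimal invocation (the model name is illustrative):

    chatgpt.sh -c -b -m gpt-4o

    chatgpt.sh -bb "Hello there!"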

Limited support.


<!--
## Text Edits  _(discontinued)_

Choose an `edit` model or set `option -e` to use this endpoint.
Two prompts are accepted, an instruction prompt and
an input prompt (optional):

    chatgpt.sh -e "Fix spelling mistakes" "This promptr has spilling mistakes."

    chatgpt.sh -e "Shell code to move files to trash bin." ""

Edits works great with INSTRUCTION and an empty prompt (e.g. to create
some code based on instruction only).


Use _gpt-4+ models_ and the right instructions.

The last working shell script version that works with this endpoint
is [chatgpt.sh v23.16](https://gitlab.com/fenixdragao/shellchatgpt/-/tree/f82978e6f7630a3a6ebffc1efbe5a49b60bead4c).
-->


## Markdown

To enable markdown rendering of responses, set command line `option --markdown`,
or run `/md` in chat mode. To render the last response in markdown once,
run `//md`.

The markdown option uses `bat`, as it has line buffering on by default;
however, other software is supported.
Set the software of choice, such as `--markdown=glow` or `/md mdless`.

Type in any of the following markdown software as argument to the option:
`bat`, `pygmentize`, `glow`, `mdcat`, or `mdless`.
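
For example, to render with _glow_, or to switch renderers mid-session:

    chatgpt.sh -c --markdown=glow

    [...]
    Q: /md mdless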


## ⚙️ Prompts

Unless the chat `options -c`, `-cd`, or `-bc` are set, _no instruction_ is
given to the language model. In chat mode, if no instruction is set,
a minimal instruction is given, and some options are set, such as increasing
temperature and presence penalty, in order to un-lobotomise the bot.

Prompt engineering is an art in itself. Study carefully how to
craft the best prompts to get the most out of text, code, and
chat completions models.

Steering the model and exercising its capabilities require prompt
engineering, so that it even knows it should answer the questions.

<!--
**NOTE:** Heed your own instruction (or system prompt), as it
may refer to both *user* and *assistant* roles.
-->


### Instruction Prompt

When the script is run in **chat mode**, the instruction is set
automatically if none is explicitly set by the user on invocation.

The chat instruction will be updated according to the user locale
after reading the envar `$LANG`. <!-- and `$LC_ALL`. -->

Translations are available for the languages: `en`, `pt`, `es`, `it`,
`fr`, `de`, `ru`, `ja`, `zh`, `zh_TW`, and `hi`.


To run the script with the Hindi prompt, for example, the user may execute:

    chatgpt.sh -c .hi

    LANG=hi_IN.UTF-8 chatgpt.sh -c


Note: custom prompts with colliding names such as "hi"
have precedence over this feature.


### ⌨️  Custom Prompts

Set a one-shot instruction prompt with `option -S`:

    chatgpt.sh -c -S 'You are a PhD psychologist student.'

    chatgpt.sh -cS'You are a professional software programmer.'


To create or load a prompt template file, set the first positional argument
as `.prompt_name` or `,prompt_name`.
In the second case, load the prompt and single-shot edit it.

    chatgpt.sh -c .psychologist

    chatgpt.sh -c ,software_programmer


Alternatively, set `option -S` with the operator and the name of
the prompt as an argument:

    chatgpt.sh -c -S .psychologist

    chatgpt.sh -c -S,software_programmer


This will load the custom prompt or create it if it does not yet exist.
In the second example, single-shot editing will be available after
loading prompt _software_programmer_.

Please make sure to back up your important custom prompts!
They are located at "`~/.cache/chatgptsh/`" with the extension "_.pr_".


### 🔌 Awesome Prompts

Set a prompt from [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)
or [awesome-chatgpt-prompts-zh](https://github.com/PlexPt/awesome-chatgpt-prompts-zh)
(use with davinci and gpt-3.5+ models):

    chatgpt.sh -c -S /linux_terminal

    chatgpt.sh -c -S /Relationship_Coach

    chatgpt.sh -c -S '%担任雅思写作考官'


<!--
_TIP:_ When using Ksh, press the up arrow key once to edit the _full prompt_
(see note on [shell interpreters](#shell-interpreters)).
-->


## Shell Completion

This project includes shell completions to enhance the user command-line experience.

### Bash

**Install** following one of the methods below.

**System-wide**

```
sudo cp comp/bash/chatgpt.sh /usr/share/bash-completion/completions/
```

**User-specific**

```
mkdir -p ~/.local/share/bash-completion/completions/
cp comp/bash/chatgpt.sh ~/.local/share/bash-completion/completions/
```

Visit the [bash-completion repository](https://github.com/scop/bash-completion).


### Zsh

**Install** at the **system location**

```
sudo cp comp/zsh/_chatgpt.sh /usr/share/zsh/site-functions/
```


**User-specific** location

To set **user-specific** completion, make sure to place the completion
script under a directory in the `$fpath` array.

The user may create the `~/.zfunc/` directory, for example, and
add the following lines to her `~/.zshrc`:


```
[[ -d ~/.zfunc ]] && fpath=(~/.zfunc $fpath)

autoload -Uz compinit
compinit
```
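
Then copy the completion file into that directory, a sketch using the example directory above:

```
mkdir -p ~/.zfunc
cp comp/zsh/_chatgpt.sh ~/.zfunc/
```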

Make sure `compinit` is run **after setting `$fpath`**!

Visit the [zsh-completion repository](https://github.com/zsh-users/zsh-completions).


### Shell Troubleshoot

Bash and Zsh completions should be active in new terminal sessions.
If not, ensure your `~/.bashrc` and `~/.zshrc` source
the completion files correctly.


<!--
## 🖼️ Image Generations

Currently, the script defaults to the **gpt-image** model. The user must
[verify his OpenAI organisation](https://platform.openai.com/settings/organization/general)
before being granted access to this model! Otherwise, please
specify positional arguments `-i -m dall-e-3` or `-i -m dall-e-2`
to select other models for image endpoints.

Generate an image according to a prompt:

    chatgpt.sh -i "Dark tower in the middle of a field of red roses."

    chatgpt.sh -i "512x512" "A tower."


This script also supports the xAI `grok-2-image-1212` image model:

    chatgpt.sh --xai -i -m grok-2-image-1212 "A black tower surrounded by red roses."


## Image Variations

Generate an image variation:

    chatgpt.sh -i path/to/image.png


## Image Edits

    chatgpt.sh -i path/to/image.png path/to/mask.png "A pink flamingo."


### Outpaint - Canvas Extension

![Displaying Image Edits - Extending the Canvas](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/img_edits.gif)

In this example, a mask is made from the white colour.

-->

<!--
### Inpaint - Fill in the Gaps

![Showing off Image Edits - Inpaint](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/img_edits2.gif)
-->
<!-- ![Inpaint, steps](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/img_edits_steps.png) -->

<!-- Adding a bat in the night sky. -->


## 🔊 Speech Transcriptions / Translations

Generate a transcription from audio file speech. A prompt to guide the model's style is optional.
The prompt should match the speech language:

    chatgpt.sh -w path/to/audio.mp3

    chatgpt.sh -w path/to/audio.mp3 "en" "This is a poem about X."


**1.** Generate a transcription from a voice recording, setting Portuguese as the language to transcribe to:

    chatgpt.sh -w pt


This also works to transcribe from one language to another.


**2.** Transcribe any language speech input **to Japanese** (_prompt_ should be in
the same language as the input audio language, preferably):

    chatgpt.sh -w ja "A job interview is currently being done."


**3.1** Translate English speech input to Japanese, and generate speech output from the text response.

    chatgpt.sh -wz ja "Getting directions to famous places in the city."


**3.2** Doing the converse as well creates an opportunity for (manual)
conversation turns between two speakers of different languages. Below,
a Japanese speaker can have her voice translated and audio generated in the target language.

    chatgpt.sh -wz en "Providing directions to famous places in the city."


**4.** Translate speech from any language to English:

    chatgpt.sh -W [speech_file]

    chatgpt.sh -W


To retry with the last microphone recording saved in the cache, set
_speech_file_ as `last` or `retry`.
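
For example:

    chatgpt.sh -W last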

**NOTE:** Generate **phrasal-level timestamps** by double-setting `option -ww` or `option -WW`.
For **word-level timestamps**, set option `-www` or `-WWW`.


![Transcribe speech with timestamps](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chat_trans.png)


<!--
### Code Completions (Codex, _discontinued_)

Codex models are discontinued. Use models davinci or gpt-3.5+.

Start with a commented out code or instruction for the model,
or ask it in comments to optimise the following code, for example.
-->


## Service Providers

Local LLM Software

- [Ollama](#ollama)
- [LocalAI](#localai)


Free service providers

- [GitHub Models](#github-models)
- [Gemini Google Vertex](#google-ai)
- [Groq](#groq)


Paid service providers

- **OpenAI**
- [Mistral AI](#mistral-ai)
- [Anthropic](#anthropic)
- [Grok xAI](#xai)
- [DeepSeek](#deepseek)
- [OpenRouter API](#openrouter-api)


**Tip:** **Other providers** may be [set up manually](#base-url-configuration).


### LocalAI

#### LocalAI Server

Make sure you have got [mudler's LocalAI](https://github.com/mudler/LocalAI)
server set up and running.

The server can be run as a docker container or a
[binary can be downloaded](https://github.com/mudler/LocalAI/releases).
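
As a minimal sketch, the server might be started with Docker (the image tag here is an assumption; check the LocalAI documentation for current tags):

    docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-cpu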

Check the LocalAI tutorials
[Container Images](https://localai.io/basics/getting_started/#container-images),
and [Run Models Manually](https://localai.io/docs/getting-started/manual)
for an idea on how to [download and install](https://localai.io/models/#how-to-install-a-model-from-the-repositories)
a model and set it up.

<!--
     ┌───────────────────────────────────────────────────┐
     │                   Fiber v2.50.0                   │
     │               http://127.0.0.1:8080               │
     │       (bound on host 0.0.0.0 and port 8080)       │
     │                                                   │
     │ Handlers ............. 1  Processes ........... 1 │
     │ Prefork ....... Disabled  PID ..................1 │
     └───────────────────────────────────────────────────┘
-->


#### LocalAI Tips

*1.* Download a binary of `localai` for your system from [Mudler's release GitHub repo](https://github.com/mudler/LocalAI/releases).

*2.* Run `localai run --help` to check command line options and environment variables.

*3.* Set up `$GALLERIES` before starting up the server:

    export GALLERIES='[{"name":"localai", "url":"github:mudler/localai/gallery/index.yaml"}]'  #default

    export GALLERIES='[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}]'

    export GALLERIES='[{"name":"huggingface", "url": "github:go-skynet/model-gallery/huggingface.yaml"}]'


<!-- broken huggingface gallery: https://github.com/mudler/LocalAI/issues/2045 -->


*4.* Install the model named `phi-2-chat` from a `yaml` file manually, while the server is running:

    curl -L http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{ "config_url": "https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/phi-2-chat.yaml" }'
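
Once applied, the installed models should show up on the OpenAI-compatible listing endpoint (a quick check, assuming the usual `v1` path):

    curl http://localhost:8080/v1/models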


#### Running the shell wrapper

Finally, when running `chatgpt.sh`, set the model name:

    chatgpt.sh --localai -c -m luna-ai-llama2


Setting some stop sequences may be needed to prevent the
model from generating text past its conversation turn:

    chatgpt.sh --localai -c -m luna-ai-llama2  -s'### User:'  -s'### Response:'


Optionally, set restart and start sequences for the text completions
endpoint (`option -cd`), such as `-s'\n### User: '  -s'\n### Response:'`
(mind setting newlines *\n* and whitespace correctly).
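
Putting it together, a sketch of a full text completions invocation (mind the exact whitespace of your model's prompt template):

    chatgpt.sh --localai -cd -m luna-ai-llama2 -s'\n### User: ' -s'\n### Response:'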

And that's it!


#### Installing Models

Model names may be printed with `chatgpt.sh -l`. A model name may be
supplied as an argument so that only that model's details are shown.


**NOTE:** Model management (downloading and setting up) must follow
the LocalAI and Ollama projects' guidelines and methods.

<!--
For image generation, install Stable Diffusion from the URL
`github:go-skynet/model-gallery/stablediffusion.yaml`,
and for speech transcription, download Whisper from the URL
`github:go-skynet/model-gallery/whisper-base.yaml`. -->
<!-- LocalAI was only tested with text and chat completion models (vision) -->

<!--
Install models with `option -l` or chat command `/models`
and the `install` keyword.

Also supply a [model configuration file URL](https://localai.io/models/#how-to-install-a-model-without-a-gallery),
or if the LocalAI server is configured with Galleries,
set "<GALLERY>@<MODEL_NAME>".
Gallery defaults to [HuggingFace](https://huggingface.co/).

    # List models
    chatgpt.sh -l

    # Install
    chatgpt.sh -l install huggingface@TheBloke/WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-GGML/wizardlm-13b-v1.0-superhot-8k.ggmlv3.q4_K_M.bin

*NOTE:* I recommend using LocalAI's own binary to install the models!
-->


#### BASE URL Configuration

If the service provider's base URL differs from the defaults,
these tips may help make the script work with your API.

The environment variable `$OPENAI_BASE_URL` is read at invocation.

    export OPENAI_BASE_URL="http://127.0.0.1:8080/v1"

    chatgpt.sh -cd -m luna-ai-llama2


To set it in a more permanent fashion, edit the script
configuration file `.chatgpt.conf`.

Use vim:

    vim ~/.chatgpt.conf


Or edit the configuration with a command line option:

    chatgpt.sh -F


And set the following variable:

    # ~/.chatgpt.conf

    OPENAI_BASE_URL="http://127.0.0.1:8080/v1"


#### OpenAI Web Search

Use the in-house solution with chat command "`/g [prompt]`" or "`//g [prompt]`"
to ground the prompt, or select models with **search** in the name,
such as "gpt-4o-search-preview".

Running "`//g [prompt]`" will always use the in-house solution instead of
any provider-specific web search tool.
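
For example, during a chat session (the search prompt is illustrative):

    /g current weather in Tokyo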

<!--
To enable live search in the API, run chat command `/g [prompt]` or
`//g [prompt]` (to use the fallback mechanism) as usual;
or to keep live search enabled for all prompts, set the `$BLOCK_USR`
environment variable before running the script, such as:

```
export BLOCK_USR='"tools": [{
  "type": "web_search_preview",
  "search_context_size": "medium"
}]'

chatgpt.sh -cc -m gpt-4.1-2025-04-14
```

Check more search parameters at the [OpenAI API documentation](https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses).
<https://platform.openai.com/docs/guides/tools-web-search?api-mode=chat>.
-->


### Ollama

Visit the [Ollama repository](https://github.com/ollama/ollama/)
and follow the instructions to install, download models, and set up
the server.

With the Ollama server running, set `option -O` (`--ollama`)
and the model name in `chatgpt.sh`:

    chatgpt.sh -c -O -m llama2
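
To check which models the local Ollama server exposes, the listing option should also work here (a sketch):

    chatgpt.sh -O -l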


If the Ollama server URL is not the default `http://localhost:11434`,
edit the `chatgpt.sh` configuration file and set the following variable:

    # ~/.chatgpt.conf

    OLLAMA_BASE_URL="http://192.168.0.3:11434"


### Google AI

Get a free [API key for Google](https://gemini.google.com/) to be able to
use Gemini and vision models. Users have a free quota of 60 requests per minute, and the script offers a basic implementation of the API.

Set the environment variable `$GOOGLE_API_KEY` and run the script
with `option --google`, such as:

    chatgpt.sh --google -c -m gemini-pro-vision


*OBS*: Google Gemini vision models *are not* enabled for multiturn at the API side, so we hack it.

To list all available models, run `chatgpt.sh --google -l`.


#### Google Search

To enable live search in the API, use chat command `/g [prompt]` or `//g [prompt]`,
or set the `$BLOCK_CMD` envar.

```
export BLOCK_CMD='"tools": [ { "google_search": {} } ]'

chatgpt.sh --goo -c -m gemini-2.5-flash-preview-05-20
```

Check more web search parameters at the [Google AI API docs](https://ai.google.dev/gemini-api/docs/grounding?lang=rest).


### Mistral AI

Set up a [Mistral AI account](https://mistral.ai/),
declare the environment variable `$MISTRAL_API_KEY`,
and run the script with `option --mistral` for complete integration.
<!-- $MISTRAL_BASE_URL -->


### Groq

Sign in to [Groq](https://console.groq.com/playground).
Create a new API key or use an existing one to set
the environment variable `$GROQ_API_KEY`.
Run the script with `option --groq`.


#### Groq Whisper STT

The Groq API has the speech-to-text model "whisper-large-v3",
which can be used in the stand-alone STT mode with command line `option -w`,
or as the default STT engine in chat mode.
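
A stand-alone transcription sketch (the audio file name is illustrative):

    chatgpt.sh --groq -w audio_note.mp3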

Check the [configuration file](.chatgpt.conf) to set Groq Whisper STT.


#### Groq TTS

Groq also offers the text-to-speech model "canopylabs/orpheus-v1-english".
This model can be used in the stand-alone TTS mode with command line `option -z`,
or set up as the preferred chat TTS engine.
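
A stand-alone speech synthesis sketch:

    chatgpt.sh --groq -z "Hello from the command line!"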

Check the [configuration file](.chatgpt.conf) to set Groq TTS.


### Anthropic

Sign in to [Anthropic AI](https://docs.anthropic.com/).
Create a new API key or use an existing one to set
the environment variable `$ANTHROPIC_API_KEY`.
Run the script with `option --anthropic` or `--ant`.

Check the **Claude-4** models! Run the script as:

```
chatgpt.sh --anthropic -c -m claude-opus-4-5
```


**Prompt caching** is implemented in order to save a few bucks.


The script also works on **text completions** with models such as
`claude-2.1`, although the API documentation flags it as deprecated.

Try:

```
chatgpt.sh --ant -cd -m claude-2.1
```


#### Anthropic Web Search

To enable live search in the API, use chat command `/g [prompt]` or `//g [prompt]`,
or set the `$BLOCK_USR` envar.

```
export BLOCK_USR='"tools": [{
  "type": "web_search_20250305",
  "name": "web_search",
  "max_uses": 5
}]'
}]'

chatgpt.sh --ant -c -m claude-opus-4-0
```

Check more web search parameters at the [Anthropic API docs](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking).


### GitHub Models

GitHub has partnered with Azure to use its infrastructure.

As a GitHub user, join the [wait list](https://github.com/marketplace/models/waitlist/join)
and then generate a [personal token](https://github.com/settings/tokens).
Set the environment variable `$GITHUB_TOKEN` and run the
script with `option --github` or `--git`.
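
For instance (the token value is a placeholder):

    export GITHUB_TOKEN="<your-personal-access-token>"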

Check the [on-line model list](https://github.com/marketplace/models)
or list the available models and their original names with `chatgpt.sh --github -l`.


```
chatgpt.sh --github -c -m Phi-3-small-8k-instruct
```

<!--
See also the [GitHub Model Catalog - Getting Started](https://techcommunity.microsoft.com/t5/educator-developer-blog/github-model-catalog-getting-started/ba-p/4212711) page.
-->


### Novita AI


<!-- This service provider *feature is currently* **legacy**. -->

Novita AI offers a range of LLM models at exceptional value.

<!--
Create an API key as per the
[Quick Start Guide](https://novita.ai/docs/get-started/quickstart.html)
and export your key as `$NOVITA_API_KEY` to your environment.

Next, run the script such as `chatgpt.sh --novita -cc`.

Check the [model list web page](https://novita.ai/model-api/product/llm-api)
and the [price of each model](https://novita.ai/model-api/pricing).

To list all available models, run `chatgpt.sh --novita -l`. Optionally set a model name with `option -l` to dump model details.
-->

Some models work with the `/completions` endpoint, while others
work with the `/chat/completions` endpoint.

Our script *does not set the endpoint automatically*!

Check model details and web pages to understand their capabilities, and then
either run the script with `option -c` (**chat completions**), or
`options -cd` (**text completions**).
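
As a sketch (the model names are placeholders; check the Novita catalogue for real ones):

    chatgpt.sh --novita -c -m [chat_model]

    chatgpt.sh --novita -cd -m [text_model]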

---

As an exercise,
set up the Novita AI integration manually instead:


```
export OPENAI_API_KEY=novita-api-key
export OPENAI_BASE_URL="https://api.novita.ai/v3/openai"

chatgpt.sh -c -m meta-llama/llama-3.3-70b-instruct
```

We are grateful to Novita AI for their support and collaboration. For more
information, visit [Novita AI](https://novita.ai/).


### OpenRouter API

To enable [MODEL_NAME>]pen[G[MODEL_NAME>][MODEL_NAME>][MODEL_NAME>][MODEL_NAME>]RY[MODEL_NAME>]]outer [G[MODEL_NAME>][MODEL_NAME>][MODEL_NAME>][MODEL_NAME>]RY[MODEL_NAME>]]PI integration, call the script with
`chatgpt.sh --openrouter -c` at the command line.

List models with `chatgpt.sh --openrouter -l`.

When using OpenRouter to access Claude models, prompt caching is implemented
to save a few bucks.


### xAI

Visit [xAI Grok](https://docs.x.ai/docs/quickstart#creating-api-key)
to generate an API key (environment variable `$XAI_API_KEY`).

Run the script with `option --xai` and also with `option -c` (chat completions).
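
For example (the model name is illustrative and may change; check xAI's model list):

    chatgpt.sh --xai -c -m grok-3-latest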

Some models also work with native text completions. For that,
set command-line `options -cd` instead.


#### xAI Live Search

The xAI live search feature has been discontinued server-side.

Use the in-house solution for simple search text dumps to
ground the prompt with chat command `/g [search string]`.


<!--
To enable live search in the API, use chat command `/g [prompt]`
or `//g [prompt]`,
or to keep live search enabled for all prompts, set the `$BLOCK_USR`
environment variable before running the script such as:

```
export BLOCK_USR='"search_parameters": {
  "mode": "auto",
  "max_search_results": 10
}'

chatgpt.sh --xai -cc -m grok-3-latest 
```

Check more live search parameters at the [xAI API docs](https://docs.x.ai/docs/guides/live-search).


#### xAI Image Generation

The model `grok-2-image-1212` is supported for image generation with
invocation `chatgpt.sh --xai -i -m grok-2-image-1212 "[prompt]"`.
-->


### DeepSeek

Visit the [DeepSeek webpage](https://platform.deepseek.com/api_keys) to get
an API key and set envar `$DEEPSEEK_API_KEY`.

Run the script with `option --deepseek`.
It works in both chat completions (`option -c`) and text completions (`options -cd`) modes.
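
For example (the model name is an assumption; check DeepSeek's documentation):

    chatgpt.sh --deepseek -c -m deepseek-chat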


<!--
## 🌎 Environment

- Set `$OPENAI_API_KEY` with your OpenAI API key.

- Optionally, set `$CHATGPTRC` with the path to the configuration file (run
`chatgpt.sh -FF` to download a template configuration file).
Default location = `~/.chatgpt.conf`.
-->

<!--
## 🐚 Shell Interpreters

The script can be run with either Bash or Zsh.

There should be equivalency of features under Bash and Zsh.

Zsh is faster than Bash in respect to some features.

Although it should be noted that I test the script under Ksh and Zsh,
and it is almost never tested under Bash, but so far, Bash seems to be
a little more polished than the other shells [AFAIK](https://github.com/mountaineerbr/shellChatGPT/discussions/13),
especially with interactive features.

Ksh truncates input at 80 chars when re-editing a prompt. A workaround
with this script is to press the up-arrow key once to edit the full prompt.

Ksh will mangle multibyte characters when re-editing input. A workaround
is to move the cursor and press the up-arrow key once to unmangle the input text.

Zsh cannot read/load a history file in non-interactive mode,
so only commands of the running session are available for retrieval in
new prompts (with the up-arrow key).

See [BUGS](https://github.com/mountaineerbr/shellChatGPT/tree/main/man#bugs)
in the man page.
-->
<!-- [Ksh93u+](https://github.com/ksh93/ksh) (~~*avoid* Ksh2020~~), -->


## Arch Linux Users

This project's PKGBUILD is available at the
[Arch Linux User Repository (*AUR*)](https://aur.archlinux.org/packages/chatgpt.sh)
to install the software in Arch Linux and derivative distros.

To install the programme from the AUR, you can use an *AUR helper*
like `yay` or `paru`. For example, with `yay`:

    yay -S chatgpt.sh


<!--
There is a [*PKGBUILD*](pkg/PKGBUILD) file available to install
the script and documentation at the right directories
in Arch Linux and derivative distros.

This PKGBUILD generates the package `chatgpt.sh-git`.
Below is an installation example with just the PKGBUILD.

    cd $(mktemp -d)

    wget https://gitlab.com/fenixdragao/shellchatgpt/-/raw/main/pkg/PKGBUILD

    makepkg

    pacman -U chatgpt.sh-git*.pkg.tar.zst

-->


## Termux Users

### Termux Dependencies

Install the `Termux` and `Termux:API` apps from the *F-Droid store*.

Give all permissions to `Termux:API` in your phone's app settings.

We recommend also installing `sox`, `ffmpeg`, `pulseaudio`, `imagemagick`, and `vim` (or `nano`).

Remember to execute `termux-setup-storage` to set up access to the phone storage.

In Termux proper, install the `termux-api` and `termux-tools` packages (`pkg install termux-api termux-tools`).

When recording audio (STT, Whisper, `option -w`),
if `pulseaudio` is configured correctly,
the script uses `sox`, `ffmpeg`, or other competent software;
otherwise it defaults to `termux-microphone-record`.

Likewise, when playing audio (TTS, `option -z`), and
depending on the `pulseaudio` configuration, the script uses `sox` or `mpv`,
or falls back to the Termux wrapper playback (`play-audio` is optional).

Setting the clipboard requires `termux-clipboard-set` from the `termux-api` package.

In order to dump YouTube captions, `yt-dlp` is required.


### TTS Chat - Removal of Markdown

*Markdown in TTS input* may make the model's speech generation stutter a little.
If the `python` modules `markdown` and `bs4` are available, TTS input will
be converted to plain text. As a fallback, `pandoc` is used if present
(chat mode only).
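
A sketch of installing the optional Python converters (PyPI names; the `bs4` module is provided by `beautifulsoup4`):

    pip install markdown beautifulsoup4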


### Tiktoken

Under Termux, make sure your system is up to date and has the
`python`, `rust`, and `rustc-dev` packages installed for building `tiktoken`.

    pkg update

    pkg upgrade

    pkg install python rust rustc-dev

    pip install tiktoken
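
A quick sanity check that the build works (a one-liner sketch):

    python -c 'import tiktoken; print(len(tiktoken.get_encoding("cl100k_base").encode("hello world")))'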


### Termux Troubleshoot

To give Termux access to microphone recording and audio playback
(with `sox` and `ffmpeg`), follow the instructions below.

**A.** To set up `pulseaudio` one time only, execute:

    pulseaudio -k
    pulseaudio -L "module-sles-source" -D


**B.** To set a permanent configuration:

1. Kill the process with `pulseaudio -k`.
2. Add `load-module module-sles-source` to *one of the files*:

```
~/.config/pulse/default.pa
/data/data/com.termux/files/usr/etc/pulse/default.pa
```

3. Restart the server with `pulseaudio -D`.


**C.** To create a new user configuration at `~/.config/pulse/default.pa`, you may start with the following template:

    #!/usr/bin/pulseaudio -nF

    .include /data/data/com.termux/files/usr/etc/pulse/default.pa
    load-module module-sles-source
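
To confirm the microphone source works, a short test recording with `sox` may help (a sketch):

    rec /tmp/test.wav trim 0 3 && play /tmp/test.wav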


<!--
#### File Access

To access your Termux files using Android's file manager, install a decent file manager such as `FX File Explorer` from the Play Store and configure it, or run the following command in your Termux terminal:

    am start -a android.intent.action.VIEW -d "content://com.android.externalstorage.documents/root/primary"


Source: <https://www.reddit.com/r/termux/comments/182g7np/where_do_i_find_my_things_that_i_downloaded/> -->


<!--
Users of Termux may have some difficulty compiling the original Ksh93 under Termux.
As a workaround, use Ksh emulation from Zsh. To make Zsh emulate Ksh, simply
add a symlink to `zsh` under your path with the name `ksh`.

After installing Zsh in Termux, create a symlink with:

````
ln -s /data/data/com.termux/files/usr/bin/zsh /data/data/com.termux/files/usr/bin/ksh
````
-->


## Troubleshoot

The script may work with newer models, alternative models and APIs,
but users are responsible for configuring parameters appropriately.

Script and model settings can be adjusted or set to null using command‑line
options, environment variables, or the configuration file.

For example, some models may not accept instructions (command line `-S ""` to unset),
*frequency* / *presence* *penalties* (`-A ""`, `-a ""`), or other options.

Other parameters, such as *temperature* (`-t 1.0`), token limits
(*maximum response tokens:* `-M 6000`; *model capacity:* `-N 1000000`),
etc., may need to be set to values supported by the new model.
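
Putting these together, a sketch of an invocation that neutralises or adjusts such parameters (the model name is a placeholder):

    chatgpt.sh -c -m [model] -S "" -A "" -a "" -t 1.0 -M 6000 -N 1000000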

See also:

  - [Shell Troubleshoot](#shell-troubleshoot)
  - [Termux Troubleshoot](#termux-troubleshoot)
  - [API Provider Setup](#service-providers)
  - [API Base URL Configuration](#base-url-configuration)


For software alternatives that may better suit your needs, see the projects listed in the
[Acknowledgements section](#acknowledgements) below.


## 💡  Notes and Tips

- Native chat completions is the **default** mode of script `option -c` since version 127.
  To activate the legacy plain text completions chat mode, set `options -cd`.

- The YouTube feature will only fetch the YouTube video title and its transcript (when available).

- The PDF support feature extracts PDF text ([*no images*](https://docs.anthropic.com/en/docs/build-with-claude/pdf-support#how-pdf-support-works)) and appends it to the user request.

- Run chat commands with either *operator* `!` or `/`.

- One can **regenerate a response** by typing a single slash `/` as the new prompt,
or `//` to have the last prompt edited before the new request.

- Edit live history entries with command `!hist` (to comment out entries or inject context); a combined sketch of these commands follows below.
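
A combined sketch of these chat commands in an interactive session (the prompts are illustrative):

    /g latest sox releases    # ground the next request with a web search
    /                         # regenerate the last response
    //                        # edit the last prompt before resending
    !hist                     # open the live history for editing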


<!-- (*discontinued*)
- Add operator forward slash `/` to the end of the prompt to trigger **preview mode**. -->

<!--

- There is a [Zsh point release branch](https://gitlab.com/fenixdragao/shellchatgpt/-/tree/zsh),
  but it will not be updated.
-->
<!--
- Generally, my evaluation on models prefers using `davinci` or
`text-davinci-003` for less instruction intensive tasks, such as
brainstorming. The newer model, `gpt-3.5-turbo-instruct`, may be
better at following instructions, is cheap and much faster, but seems
more censored.

- On chat completions, the *launch versions* of the models seem to
be more creative and better at tasks in general than
newer iterations of the same models. So, that is why we default to
`gpt-3.5-turbo-0301`, and recommend the model `gpt-4-0314`.


https://www.refuel.ai/blog-posts/gpt-3-5-turbo-model-comparison
https://www.reddit.com/r/ChatGPT/comments/14u51ug/difference_between_gpt432k_and_gpt432k0314/
https://www.reddit.com/r/ChatGPT/comments/14km5xy/anybody_else_notice_that_gpt40314_was_replaced_by/
https://www.reddit.com/r/ChatGPT/comments/156drme/gpt40314_is_better_than_gpt40613_at_generating/
https://stackoverflow.com/questions/75810740/openai-gpt-4-api-what-is-the-difference-between-gpt-4-and-gpt-4-0314-or-gpt-4-0


- The original base models `davinci` and `curie`,
and to some extent, their forks `text-davinci-003` and `text-curie-001`,
generate very interesting responses (good for
[brainstorming](https://github.com/mountaineerbr/shellChatGPT/discussions/16#discussioncomment-5811670))!

- Write your customised instruction as a plain text file and set that file
name as the instruction prompt.
-->


## 🎯  Project Objectives

- Main focus on **chat models** (multi-turn, text, image, and audio).

- Implementation of selected features from **OpenAI API version 1**.
  As text is the only universal interface, voice and image features
  will only be partially supported.

- Provide the closest API defaults and let the user customise settings.


<!--
- Première of `chatgpt.sh version 1.0` should occur at the time
  when OpenAI launches its next major API version update.
  -->

<!--  I think Ksh developers used to commonly say invalid options were "illegal" because they developed software a little like games, so the user ought to follow the rules right, otherwise he would incur in an illegal input or command. That seems fairly reasonable to me!  -->


## Roadmap

- We shall decrease development frequency in 2025, hopefully. <!-- LLM models
in general are not really worth developer efforts sometimes, it is frustrating!
-->

- We plan to gradually wind down development of new features in the near future.
The project will enter a maintenance phase from 2025 onwards, focusing primarily
on bug fixes and stability.

- We may only partially support the *image generation*, *image variations* and *image editing*
specific OpenAI endpoints.

  - Update: Dropped support for the *image generation*, *variations* and *editing* endpoints
    ([v122.5 Dec-2025](https://gitlab.com/fenixdragao/shellchatgpt/-/tree/22f7c89b1dc012e16c796e45ac5c0a3aef9e7e3e)).

- The text completions endpoint is planned to be deprecated once there are
no models compatible with this endpoint anymore.

- The wrapper is deemed finished in the sense that any further updates must
not change the user interface significantly.


<!--

    Portability across LLM providers is impractical anyways!
    Even switching models within OpenAI (e.g., gpt-4o to gpt-4.1)
    can alter behavior, and different providers require unique
    optimizations and careful prompt refining.

-->

<!-- in these poor circumstances. The models are not worth the value or expectations. -->
<!--
- We expect to **go apoptosis**.

Every project, much like living organisms, follows a lifecycle.
As this initiative reaches its natural maturity, we are prepared
to fail as gracefully as we can. Major usage breaks should follow
new and backward-incompatible API changes (incompatible models).
-->

<!--
Merry 2024 [Grav Mass!](https://stallman.org/grav-mass.html)


![Newton](https://stallman.org/grav-mass.png)
-->

<!--
## Distinct Features

- **Run as single** or **multi-turn**, response streaming on by default.

- **Text editor interface**, and **multiline prompters**. 

- **Manage sessions** and history files.

- Hopefully, default colours are colour-blind friendly.

- **Colour themes** and customisation.

*For a simple python wrapper for* **tiktoken**,
*see* [tkn-cnt.py](https://github.com/mountaineerbr/scripts/blob/main/tkn-cnt.py).
-->


## ⚠️ Limitations

- OpenAI **API version 1** is the focus of the present project implementation.
Only selected features of the API will be covered.

- The script *will not execute commands* on behalf of users.

- This project *doesn't* support "Function Calling", "Structured Outputs", "Agents/Operators", nor "MCP Servers".

- We *will not support* "Real-Time" chatting, or video generation / editing.

- Support for the "Responses API" is limited and experimental for now.

- Bash shell truncates input on `\000` (null); see the sketch after this list.

- Bash "read command" may not correctly display input buffers larger than
the TT[G[MODEL_NAME>][MODEL_NAME>][MODEL_NAME>][MODEL_NAME>]RY[MODEL_NAME>]] screen size during editing. However, input buffers remain
unaffected. Use the text editor interface for big prompt editing.

- Garbage in, garbage out. An idiot savant.

- The script logic resembles a bowl of spaghetti code after a cat fight.

- See the *LIMITS AND BUGS* section in the [man page](man/README.md#bugs).
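
A minimal sketch of the `\000` limitation above, assuming a recent Bash
(the warning text and exact behavior vary across Bash versions):

```
# Shell variables cannot hold NUL bytes, so bash drops them;
# newer versions print a warning for command substitutions.
$ x=$(printf 'before\000after'); echo "$x"
bash: warning: command substitution: ignored null byte in input
beforeafter
```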

<!--
- User input must double escape `\n` and `\t` to have them as literal sequences.
  **NO LONGER the case as of v0.18**  -->


## Bug report

Please leave bug reports at the
[GitHub issues page](https://github.com/mountaineerbr/shellChatGPT/issues).


## 📖 Help Pages 

Read the online [**man page here**](man/README.md).

Alternatively, a help page snippet can be printed with `chatgpt.sh -h`.
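
For instance (the `man` invocation below assumes the manual page has been
installed system-wide; otherwise, read `man/README.md` directly):

```
$ chatgpt.sh -h     # print the help page snippet
$ man chatgpt.sh    # browse the full manual, if installed
```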


## 💪 Contributors

***Many Thanks*** to everyone who contributed to this project.


- [edshamis](https://www.github.com/edshamis)
- [johnd0e](https://github.com/johnd0e)
- [Novita AI's GTM Leo](https://novita.ai/model-api/product/llm-api)
  <!-- Growth Tech Market -->


<br />

Everyone is [welcome to submit issues, PRs, and new ideas](https://github.com/mountaineerbr/shellChatGPT/discussions/1)!

## Acknowledgements

The following projects are worth noting.
They were studied during the development of this script and served as reference code sources.

1. [Claude Code](https://github.com/anthropics/claude-code)
2. [Gemini CLI](https://github.com/google-gemini/gemini-cli)
3. [OpenAI Codex CLI](https://github.com/openai/codex)
4. [Gauthier's Aider](https://github.com/Aider-AI/aider)
5. [sigoden's aichat](https://github.com/sigoden/aichat)
6. [xenodium's chatgpt-shell](https://github.com/xenodium/chatgpt-shell)
7. [andrew's tgpt](https://github.com/aandrew-me/tgpt)
8. [TheR1D's shell_gpt](https://github.com/TheR1D/shell_gpt/)
9. [ErikBjare's gptme](https://github.com/ErikBjare/gptme)
10. [SimonW's LLM](https://github.com/simonw/llm)
11. [llm-workflow-engine](https://github.com/llm-workflow-engine/llm-workflow-engine)
12. [0xacx's chatGPT-shell-cli](https://github.com/0xacx/chatGPT-shell-cli)
13. [mudler's LocalAI](https://github.com/mudler/LocalAI)
14. [Ollama](https://github.com/ollama/ollama/)
15. [Google Gemini](https://gemini.google.com/)
16. [Groq](https://console.groq.com/docs/api-reference)
17. [Anthropic AI](https://docs.anthropic.com/)
18. [Novita AI](https://novita.ai/)
19. [xAI](https://docs.x.ai/docs/quickstart)
20. [f's awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)
21. [PlexPt's awesome-chatgpt-prompts-zh](https://github.com/PlexPt/awesome-chatgpt-prompts-zh)
<!-- 17. [Kardolu's chatgpt-cli](https://github.com/kardolus/chatgpt-cli) -->
<!-- https://github.com/sst/opencode -->

<!--
NOTES

Issue: provide basic chat interface
https://github.com/mudler/LocalAI/issues/1535


Issue: OpenAI compatibility: Images edits and variants #921
Now that the groundwork for diffusers support has been done, this is a tracker for implementing variations and edits of the OpenAI spec:

    https://platform.openai.com/docs/guides/images/variations
    https://platform.openai.com/docs/guides/images/edits

Variations can likely be guided by prompt with img2img and https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations

Edits can be implemented with huggingface/diffusers#1825
https://github.com/mudler/LocalAI/issues/921


-->


--- 

<br />

**[The project home is at GitLab](https://gitlab.com/fenixdragao/shellchatgpt)**

<https://gitlab.com/fenixdragao/shellchatgpt>

<br />

*Mirror*

<https://github.com/mountaineerbr/shellChatGPT>


<br />
<a href="https://gitlab.com/fenixdragao/shellchatgpt"><p align="center">
  <img width="128" height="128" alt="ChatGPT by DALL-E, link to GitLab Repo"
  src="https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/dalle_out20b.png">
</p></a>

<!--
## Version History

This is the version number history recorded throughout the script evolution over time.

The lowest record is **0.06.04** at *3/Mar/2023* and the highest is **0.57.01** at *May/2024*.

<br />
<a href="https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chatgpt.sh_version_evol.png"><p align="center">
  <img width="386" height="290" alt="Graph generated by a list of sorted version numbers and through GNUPlot." src="https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chatgpt.sh_version_evol_small.png">
</p></a>
-->
<!--
Graph generated by the following ridiculously convoluted command for some fun!

```
git rev-list --all | xargs git grep -e by\ mountaineerbr | grep chatgpt\.sh: |
while IFS=:$IFS read com var ver; do ver=${ver##\# v}; printf "%s %s\\n" "$(git log -1 --format="%ci" $com)" "${ver%% *}"; done |
uniq | sed 's/ /T/; s/ //' | sed 's/\(.*\.\)\([0-9]\)\(\..*\)/\10\2\3/' | sed 's/\(.*\.\)\([0-9]\)$/\10\2/' |
sed 's/\(.*\..*\)\.\(.*\)/\1\2/' | sort -n | grep -v -e'[+-]$' -e 'beta' |
gnuplot -p -e 'set xdata time' -e 'set timefmt "%Y-%m-%dT%H:%M:%S%Z"' -e 'plot "-" using 1:2 with lines notitle'
```
-->

<!--
# How many functions are there in the script and their function code line numbers (v0.61.3):

```
% grep -ce\^function bin/chatgpt.sh
126

% sed -n '/^function /,/^}/ p ' ~/bin/chatgpt.sh | test.sh | SUM
Sum     : 2477 lines in functions
Min     : 1 line
Max     : 473 lines
Average : 21 lines per func
Median  : 7 lines per func
Values  : 118+8 functions (one-liner functions not computed)
```
-->

<!--
## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=mountaineerbr/shellChatGPT&type=Date)](https://star-history.com/#mountaineerbr/shellChatGPT&Date)
-->
