For a pure-IPv4 or pure-IPv6 VPS, run the one-click script directly:
bash <(wget -qO- https://gitlab.com/rwkgyg/x-ui-yg/raw/main/install.sh 2> /dev/null)
---
This repository is tested on Python 3.7+, openai 0.25+.
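Since the repository is tested only against Python 3.7+ and openai 0.25+, a quick environment check before installing can save debugging time. The helper below is a minimal sketch (the function name is illustrative, not part of the repository):

```python
import sys

def check_python(min_version=(3, 7)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info >= min_version

if __name__ == "__main__":
    # Fail fast if the interpreter is too old for the tested configuration.
    assert check_python(), "Python 3.7+ required"
    print("Python version OK")
```

A matching `pip install "openai>=0.25"` then pins the tested library range.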
As you can see on the site built from these notes with MDBook, [here](https://thomashighbaugh.github.io/gpt_notes/), these are my notes on all things relating to LLMs, generative AI, and machine learning, with the emphasis (for the time being at least) on the GPT family of LLMs made available by OpenAI, such as ChatGPT and GPT-3. Because keeping the notes in Markdown format makes them trivial to publish with MDBook, GitHub Pages, and GitHub Actions, they are available for browsing by the general public. The resources section may be of particular interest: it collects useful links I have come across along the way, plus a number of prompt templates, arranged by subject and stored in code blocks, that anyone is free to take and use to their heart's content (attribution would be nice, but I would settle for a star on the repo :wink:).
Repository to track my trials and errors with perplexity.ai collections
# DecryptPrompt
Fractalize Prompt is a tool that uses a dynamic, fractal network of agents instead of a
[](https://bolt.diy)
[](https://github.com/codespaces/new?hide_repo_select=true&machine=basicLinux32gb&repo=725257907&ref=main&devcontainer_path=.devcontainer%2Fdevcontainer.json&geo=UsEast)
This fork of Bolt.new (oTToDev) allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
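Choosing the LLM per prompt boils down to a provider-to-model dispatch. The sketch below illustrates the idea in Python with made-up names and model IDs; it is not oTToDev's actual API, which routes through the Vercel AI SDK in TypeScript:

```python
from typing import Optional

# Illustrative defaults only; the real fork reads these from its provider config.
DEFAULT_MODELS = {
    "openai": "gpt-4o-mini",
    "anthropic": "claude-3-5-sonnet-latest",
    "ollama": "llama3",
    "groq": "llama-3.1-70b-versatile",
}

def resolve_model(provider: str, override: Optional[str] = None) -> str:
    """Pick the model to use for a single prompt, honoring a per-prompt override."""
    if provider not in DEFAULT_MODELS:
        raise ValueError(f"unknown provider: {provider}")
    return override or DEFAULT_MODELS[provider]
```

Extending to a new provider then means adding one entry to the table plus the SDK adapter that knows how to call it.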
I've authored an e-book called **"The Art of ChatGPT Prompting: A Guide to
**Status:** Pre‑alpha | **Architecture:** Agentic Build Server Leveraging Open & Proprietary LLMs
_Unmute video for voice-over_
This is the repository associated with the Master's Degree Project, [Automated Reproducible Malware Analysis: A Standardized Testbed for Prompt-Driven LLMs](https://www.diva-portal.org/smash/get/diva2:1973561/FULLTEXT01.pdf) (120 ECTS) in Informatics with a Specialization in Privacy, Information and Cyber Security, created during the Spring term 2025. The entire project is released under the Apache 2.0 license and is free to use in further research or other types of development. Although citing the original degree project is not explicitly required, a citation is greatly appreciated if the material is used.
<p align="center">