See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
/node_modules
public-hoist-pattern[]=@aws-sdk/client-s3
v24.6.0
packages/shared/prisma/generated
Langfuse is an **open source LLM engineering** platform for developing, monitoring, evaluating, and debugging AI applications. See the README for more details.
skip = .git,*.pdf,*.svg,package-lock.json,*.prisma,pnpm-lock.yaml
We take the security of our software products seriously, which includes not only the code base but also the scanners provided within. If you have found any issues that might have security implications, please send a report to [[email protected]].
site_author: Protect AI, Inc.
__pycache__/
MD004: false # Unordered list style
- repo: https://github.com/pre-commit/pre-commit-hooks
:tada: Thanks for taking the time to contribute! :tada:
LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
[*]
Welcome and thank you for your interest in contributing to Guardrails! We appreciate all contributions, big or small, from bug fixes to new features. Before diving in, let's go through some guidelines to make the process smoother for everyone.
Guardrails docs are served as a Docusaurus site. The docs are compiled from various sources.
<img src="https://raw.githubusercontent.com/guardrails-ai/guardrails/main/docs/dist/img/Guardrails-ai-logo-for-dark-bg.svg#gh-dark-mode-only" alt="Guardrails AI Logo" width="600px">
- "guardrails/version.py"
*.pyc
*__pycache__*
- repo: https://github.com/astral-sh/ruff-pre-commit
NVIDIA is dedicated to the security and trust of our software products and services, including all source code repositories managed through our organization.
All notable changes to the Colang language and runtime will be documented in this file.