Thanks for checking out AgentOps. We're building tools to help developers like you make AI agents that actually work reliably. If you've ever tried to build an agent system, you know the pain - they're a nightmare to debug, impossible to monitor, and when something goes wrong... good luck figuring out why.
We created AgentOps to solve these headaches, and we'd love your help making it even better. Our SDK hooks into all the major Python frameworks (AG2, CrewAI, LangChain) and LLM providers (OpenAI, Anthropic, Cohere, etc.) to give you visibility into what your agents are actually doing.
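To make that concrete, here is roughly what instrumenting an application looks like. Treat this as a minimal sketch (the model name and call shape are illustrative), and check the docs for the current quickstart:

```python
# Minimal sketch: initialize AgentOps, then make an instrumented LLM call.
# Assumes AGENTOPS_API_KEY and OPENAI_API_KEY are set in the environment;
# the model name below is illustrative.
import agentops
from openai import OpenAI

agentops.init()  # starts a session and instruments supported providers

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```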
There are tons of ways to contribute, and we genuinely appreciate all of them.
Even if you're not ready to contribute code, we'd love to hear your thoughts. Drop into our Discord, open an issue, or start a discussion. We're building this for developers like you, so your input matters.
Fork and Clone: First, fork the repository by clicking the 'Fork' button in the top right of the AgentOps repository. This creates your own copy of the repository where you can make changes.
Then clone your fork:
```bash
git clone https://github.com/YOUR_USERNAME/agentops.git
cd agentops
```
Add the upstream repository to stay in sync:
```bash
git remote add upstream https://github.com/AgentOps-AI/agentops.git
git fetch upstream
```
Before starting work on a new feature:
```bash
git checkout main
git pull upstream main
git checkout -b feature/your-feature-name
```
Install Dependencies:
```bash
pip install -e .
```
Set Up Pre-commit Hooks:
```bash
pre-commit install
```
Environment Variables: Create a `.env` file:

```bash
AGENTOPS_API_KEY=your_api_key
OPENAI_API_KEY=your_openai_key        # For testing
ANTHROPIC_API_KEY=your_anthropic_key  # For testing
# Other keys...
```
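If you'd rather load these values in code (for example when running examples or tests locally), python-dotenv is one option. This is just a convenience sketch, not something the SDK requires:

```python
# Optional convenience: load .env values before running examples or tests.
# Assumes python-dotenv is installed (pip install python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file into the process environment

for key in ("AGENTOPS_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    if not os.getenv(key):
        print(f"Warning: {key} is not set")
```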
Virtual Environment: We recommend using `poetry` or `venv`:

```bash
python -m venv venv
source venv/bin/activate   # Unix
.\venv\Scripts\activate    # Windows
```
Pre-commit Setup: We use pre-commit hooks to automatically format and lint code. Set them up with:
```bash
pip install pre-commit
pre-commit install
```
That's it! The hooks will run automatically when you commit. To manually check all files:
```bash
pre-commit run --all-files
```
We use a comprehensive testing stack to ensure code quality and reliability. Our testing framework includes pytest and several specialized testing tools.
Install all testing dependencies:
```bash
pip install -e ".[dev]"
```
We use the following testing packages:
- `pytest==7.4.0`: Core testing framework
- `pytest-depends`: Manage test dependencies
- `pytest-asyncio`: Test async code
- `pytest-vcr`: Record and replay HTTP interactions
- `pytest-mock`: Mocking functionality
- `pyfakefs`: Mock filesystem operations
- `requests_mock==1.11.0`: Mock HTTP requests

We use tox to automate and standardize testing.
Run tox:
```bash
tox
```
This will create isolated virtual environments and run the test suite in each environment defined in the tox configuration.
Run All Tests:
```bash
tox
```
Run Specific Test File:
```bash
pytest tests/llms/test_anthropic.py -v
```
Run with Coverage:
```bash
coverage run -m pytest
coverage report
```
Test Structure:
```python
import pytest
from pytest_mock import MockerFixture
from unittest.mock import Mock, patch


@pytest.mark.asyncio  # For async tests
async def test_async_function():
    # Test implementation
    ...


@pytest.mark.depends(on=['test_prerequisite'])  # Declare test dependencies
def test_dependent_function():
    # Test implementation
    ...
```
Recording HTTP Interactions:
```python
@pytest.mark.vcr()  # Records HTTP interactions
def test_api_call():
    response = client.make_request()
    assert response.status_code == 200
```
Mocking Filesystem:
```python
import os


def test_file_operations(fs):  # fs fixture provided by pyfakefs
    fs.create_file('/fake/file.txt', contents='test')
    assert os.path.exists('/fake/file.txt')
```
Mocking HTTP Requests:
```python
def test_http_client(requests_mock):
    requests_mock.get('http://api.example.com', json={'key': 'value'})
    response = make_request()
    assert response.json()['key'] == 'value'
```
Test Categories:
Fixtures: Create reusable test fixtures in `conftest.py`:

```python
import pytest
from unittest.mock import Mock


@pytest.fixture
def mock_llm_client():
    client = Mock()
    client.chat.completions.create.return_value = Mock()
    return client
```
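For illustration, a test that consumes this fixture could look like the following; the asserted behavior is hypothetical and only demonstrates the pattern:

```python
from unittest.mock import Mock


def test_completion_uses_mock_client(mock_llm_client):
    # The fixture injects a mocked client, so no real API call is made.
    mock_llm_client.chat.completions.create.return_value = Mock(id="msg_123")
    response = mock_llm_client.chat.completions.create(
        model="test-model",
        messages=[{"role": "user", "content": "hi"}],
    )
    assert response.id == "msg_123"
    mock_llm_client.chat.completions.create.assert_called_once()
```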
- Test Data: `tests/data/`
- VCR Cassettes: `tests/cassettes/`

We use Jupyter notebooks as integration tests for LLM providers.
- Notebook Tests: `examples/` directory
- Test Workflow: The `test-notebooks.yml` workflow:
```yaml
name: Test Notebooks
on:
  pull_request:
    paths:
      - "agentops/**"
      - "examples/**"
      - "tests/**"
```
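The workflow's exact steps aren't reproduced here, but the idea is that each notebook is executed headlessly and the job fails if any cell raises. A hedged sketch of that execution step, using nbformat and nbclient (the real runner may differ, and the notebook path is a placeholder):

```python
# Sketch: execute an example notebook and fail if any cell errors.
# Assumes nbformat and nbclient are installed; the path below is a placeholder.
import nbformat
from nbclient import NotebookClient


def run_notebook(path: str) -> None:
    nb = nbformat.read(path, as_version=4)
    NotebookClient(nb, timeout=600).execute()  # raises on any cell error


if __name__ == "__main__":
    run_notebook("examples/provider_name/example.ipynb")
```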
Provider Coverage: Each provider should have notebooks demonstrating:
Adding Provider Tests:

- Create notebooks in `examples/provider_name/`
- Add to `exclude_notebooks` in the workflow if manual testing is needed

The `agentops/llms/` directory contains provider implementations. Each provider must:
Inherit from BaseProvider:
```python
@singleton
class NewProvider(BaseProvider):
    def __init__(self, client):
        super().__init__(client)
        self._provider_name = "ProviderName"
```
Implement Required Methods:
- `handle_response()`: Process LLM responses
- `override()`: Patch the provider's methods
- `undo_override()`: Restore original methods

Handle Events: Track LLM events and error events (see the example below).
Example Implementation Structure:
```python
def handle_response(self, response, kwargs, init_timestamp, session=None):
    llm_event = LLMEvent(init_timestamp=init_timestamp, params=kwargs)
    try:
        # Process response
        llm_event.returns = response.model_dump()
        llm_event.prompt = kwargs["messages"]
        # ... additional processing
        self._safe_record(session, llm_event)
    except Exception as e:
        self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
```
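The `override()` / `undo_override()` pair is essentially a save-patch-restore (monkey-patching) pattern: keep a reference to the original client method, swap in a wrapper that timestamps the call and forwards the result to `handle_response()`, and put the original back on teardown. Below is a self-contained sketch of just that pattern, using a stand-in client rather than a real provider SDK:

```python
# Illustrative only: FakeClient stands in for a real provider SDK client,
# and the wrapper prints instead of recording an LLMEvent.
from datetime import datetime, timezone


class FakeClient:
    def create(self, **kwargs):
        return {"text": "hello"}


class PatchingSketch:
    def override(self):
        self._original_create = FakeClient.create
        provider = self

        def patched_create(client_self, **kwargs):
            init_timestamp = datetime.now(timezone.utc).isoformat()
            response = provider._original_create(client_self, **kwargs)
            # A real provider would call handle_response(response, kwargs,
            # init_timestamp, session=...) here to record the event.
            print(f"recorded call at {init_timestamp}: {response}")
            return response

        FakeClient.create = patched_create

    def undo_override(self):
        # Restore the original method so the client is left untouched.
        FakeClient.create = self._original_create


sketch = PatchingSketch()
sketch.override()
FakeClient().create(prompt="hi")  # goes through the patched wrapper
sketch.undo_override()
```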
Formatting:
Documentation:
Error Handling:
Branch Naming:
- `feature/description`
- `fix/description`
- `docs/description`

Commit Messages:
PR Requirements:
Review Process:
Types of Documentation:
Documentation Location:
- `docs/`
- `examples/`

Documentation Style:
We encourage active community participation and are here to help!
GitHub Issues & Discussions:
Discord Community:
Contact Form:
By contributing to AgentOps, you agree that your contributions will be licensed under the MIT License.