Nano Banana Pro
Agent skill for nano-banana-pro
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This is an AI-enhanced CPU performance optimization suite for Intel i7-1240P systems with 32GB DDR4-5200 memory. The codebase implements hardware-specific optimizations, cloud GPU integration, machine learning-based workload prediction, and advanced AI coding assistance with Model Context Protocol (MCP) integration for enhanced context windows.
```bash
# Main launcher with all tools
python performance_tools_launcher.py

# Individual components
python src/optimization/ai_coding_optimizer.py        # AI workload optimization
python src/dashboard/unified_performance_dashboard.py # GUI dashboard
python src/dashboard/realtime_dashboard_server.py     # WebSocket server (port 8765)

# New AI coding optimization components
python src/optimization/profile_manager.py    # Profile management system
python src/optimization/mcp_manager.py        # MCP server management
python src/optimization/mcp_context_server.py # Custom MCP context server
```
```bash
# Test individual modules
python src/optimization/hybrid_cpu_optimizer.py   # Test hybrid CPU detection
python src/monitoring/memory_bandwidth_monitor.py # Test memory bandwidth
python src/monitoring/gpu_acceleration_monitor.py # Test GPU/QuickSync
python src/analysis/ml_workload_predictor.py      # Test ML predictions

# Verify async operations
python src/core/async_operations.py # Test async framework
```
```bash
# Install dependencies
pip install -r requirements.txt

# Main config in cpu_monitor_config.json
# RunPod API key in .env (RUNPOD_API_KEY)
# ML models stored in ml_models/ directory

# AI Coding Optimization Setup
# Master config: ai_coding_master_config.json
# MCP servers config: mcp_servers_config.json
# IDE optimizations: ide_optimizations_config.json
# Profiles: profiles_config.json
# Complete guide: AI_CODING_OPTIMIZATION_GUIDE.md

# Hybrid development setup with RunPod
# SSH connection: ssh [email protected] -i ~/.ssh/id_ed25519
# Use Cursor Remote-SSH for direct cloud GPU development
```
The codebase follows a modular architecture with specialized components:
Shared Utilities Layer (`shared_utils.py`)

Hardware Optimization Stack (`src/optimization/`, `src/monitoring/`)

- `src/optimization/hybrid_cpu_optimizer.py`: Intel Thread Director integration, P/E core management
- `src/monitoring/memory_bandwidth_monitor.py`: DDR4-5200 bandwidth tracking, channel utilization
- `src/monitoring/gpu_acceleration_monitor.py`: Intel QuickSync, GPU memory, AI framework GPU usage
- `src/optimization/intel_xtu_integration.py`: Voltage control, power limits, thermal management

AI/Cloud Integration Layer (`src/cloud/`)

- `src/cloud/runpod_gpu_integration.py`: Cloud GPU offloading, job queue management
- `src/cloud/workload_distributor.py`: Local vs. cloud decision engine
- `src/cloud/cost_tracker.py`: Budget management, ROI calculations
- `src/cloud/runpod_template_manager.py`: Docker templates for cloud execution

Intelligence Layer (`src/analysis/`, `src/optimization/`)

- `src/analysis/ml_workload_predictor.py`: Scikit-learn models for workload forecasting
- `src/optimization/ai_coding_optimizer.py`: Main optimization engine, profile management
- `src/optimization/profile_manager.py`: Environment profile management with auto-switching

AI Coding Enhancement Layer (`src/optimization/`)

- `src/optimization/mcp_manager.py`: Model Context Protocol server orchestration
- `src/optimization/mcp_context_server.py`: Custom MCP server for project context

User Interface Layer (`src/dashboard/`)

- `src/dashboard/unified_performance_dashboard.py`: Tkinter GUI with matplotlib charts
- `src/dashboard/realtime_dashboard_server.py`: WebSocket server with aiohttp
- `performance_tools_launcher.py`: CLI menu system

Event-Driven Architecture: The system uses threading and async patterns throughout:

- `src/core/async_operations.py` for non-blocking events

Factory Pattern: Workload detection and optimization profiles:
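The factory idea above can be sketched as follows. The class and profile names here are illustrative assumptions, not the repository's actual types:

```python
from dataclasses import dataclass

@dataclass
class OptimizationProfile:
    name: str
    p_core_bias: float  # fraction of work biased toward P-cores (illustrative)

# Hypothetical registry mapping a detected workload type to a profile.
_PROFILES = {
    "ai_training": OptimizationProfile("ai_intensive", p_core_bias=0.9),
    "background":  OptimizationProfile("balanced",     p_core_bias=0.5),
}

def profile_for(workload: str) -> OptimizationProfile:
    """Factory: return the profile registered for a workload type,
    falling back to a balanced default."""
    return _PROFILES.get(workload, _PROFILES["background"])
```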
Data Pipeline: Metrics flow through the system:
```
Hardware → SystemMetrics → Collectors → Analyzers → Optimizers → Actions
                                            ↓
                                      ML Predictor → Recommendations
```
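The pipeline's shape can be sketched in miniature. Field names, thresholds, and action names below are assumptions for illustration; the real `SystemMetrics` and stage interfaces live in the repository:

```python
from dataclasses import dataclass

@dataclass
class SystemMetrics:
    cpu_percent: float
    memory_percent: float

def collect() -> SystemMetrics:
    # A real collector would read psutil / hardware counters here.
    return SystemMetrics(cpu_percent=85.0, memory_percent=40.0)

def analyze(m: SystemMetrics) -> list[str]:
    """Turn raw metrics into findings (thresholds are illustrative)."""
    findings = []
    if m.cpu_percent > 80:
        findings.append("cpu_saturated")
    if m.memory_percent > 90:
        findings.append("memory_pressure")
    return findings

def optimize(findings: list[str]) -> list[str]:
    """Map findings to actions; action names are hypothetical."""
    actions = {"cpu_saturated": "offload_to_cloud_gpu",
               "memory_pressure": "trim_caches"}
    return [actions[f] for f in findings]
```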
External Services:
Platform-Specific:
When adding new features:
When debugging performance issues:
- RunPod API key is stored in the `.env` file as `RUNPOD_API_KEY`

For proper initialization:
The `src/dashboard/realtime_dashboard_server.py` server runs on port 8765 and provides:

- `/` - HTML dashboard
- `/ws` - WebSocket endpoint for metrics
- `/api/metrics` - REST endpoint for current metrics
- `/api/optimize` - Trigger optimization

ML models are stored in the `ml_models/` directory:

- `predictor.train_models()` after data collection

Local System (Intel i7-1240P):
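The dashboard's `/api/metrics` REST endpoint can be queried with a small client sketch like this; the response field names (`cpu_percent`, `memory_percent`) are assumptions about the server's JSON payload:

```python
import json
from urllib.request import urlopen

DASHBOARD = "http://localhost:8765"

def fetch_metrics(base: str = DASHBOARD) -> dict:
    """GET /api/metrics from the realtime dashboard server."""
    with urlopen(f"{base}/api/metrics", timeout=5) as resp:
        return json.load(resp)

def summarize(metrics: dict) -> str:
    """One-line summary; key names are assumed, not confirmed."""
    cpu = metrics.get("cpu_percent", "n/a")
    mem = metrics.get("memory_percent", "n/a")
    return f"CPU {cpu}% | RAM {mem}%"
```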
Cloud GPU (RunPod RTX 4090):
Integration Points:
- `~/.ssh/config` for seamless connection

The system now supports MCP servers for enhanced AI assistant context:
MCP Server Management (`src/optimization/mcp_manager.py`):

Available MCP Servers (configured in `mcp_servers_config.json`):
- `filesystem`: Project file access
- `git`: Repository history and branch information
- `database`: System metrics and performance data
- `web`: External documentation and research
- `custom_context`: Project-specific AI coding context
- `code_analysis`: Code structure analysis
- `performance_metrics`: Real-time system status
- `runpod_integration`: Cloud GPU status and management

Context Window Expansion:
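For orientation, `mcp_servers_config.json` presumably maps the server names above to launch settings. A hypothetical fragment (key names are assumptions; check the actual file in the repository):

```json
{
  "servers": {
    "filesystem": {
      "enabled": true,
      "root": "."
    },
    "custom_context": {
      "enabled": true,
      "module": "src.optimization.mcp_context_server"
    }
  }
}
```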
Profile System (`src/optimization/profile_manager.py`, `profiles_config.json`):
Profile Features:
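One of the profile features is auto-switching. As a hedged illustration (the actual logic in `src/optimization/profile_manager.py` may differ), a profile could be picked from the names of running processes:

```python
# Illustrative trigger sets; real triggers would come from profiles_config.json.
PROFILE_TRIGGERS = {
    "ai_intensive": {"ollama", "cursor", "pytorch"},
    "balanced": set(),
}

def pick_profile(running: set[str]) -> str:
    """Return the first profile whose trigger set intersects the
    running process names; fall back to 'balanced'."""
    for profile, triggers in PROFILE_TRIGGERS.items():
        if triggers & running:
            return profile
    return "balanced"
```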
Expanded Process Patterns (in `shared_utils.py`):
Detection Categories:
- `language_models`: LLaMA, Mistral, CodeLLaMA, WizardCoder, StarCoder
- `inference_engines`: Production inference servers and optimized runtimes
- `vector_databases`: Semantic search and RAG systems
- `model_training`: Training and fine-tuning frameworks

Supported IDEs (`ide_optimizations_config.json`):
Global Optimizations:
Intel i7-1240P Specific Optimizations:
Thermal Management:
Configuration Files Hierarchy:
- `ai_coding_master_config.json` - Master configuration and feature flags
- `cpu_monitor_config.json` - Core system optimization settings
- `profiles_config.json` - Environment-specific profiles
- `mcp_servers_config.json` - MCP server configuration
- `ide_optimizations_config.json` - IDE-specific optimizations

AI Coding Setup Commands:
```bash
# Apply AI intensive profile for heavy development
python -c "from src.optimization.profile_manager import switch_to_profile; switch_to_profile('ai_intensive')"

# Start MCP servers for enhanced context
python -c "from src.optimization.mcp_manager import start_mcp_system; import asyncio; asyncio.run(start_mcp_system())"

# Get AI coding recommendations
python -c "from src.optimization.ai_coding_optimizer import AICodingOptimizer; optimizer = AICodingOptimizer(); print(optimizer.get_optimization_recommendations())"
```
This enhanced architecture provides a comprehensive AI coding platform with intelligent resource management, expanded context windows, and hardware-optimized performance for the Intel i7-1240P system.