This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This is a Python-based game automation bot for Lineage that uses computer vision (OpenCV, YOLO) to detect targets and PyAutoGUI to perform automated actions. The bot captures game window screenshots, detects objects using trained models, and automates character movement and combat.
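To make that flow concrete, here is a minimal sketch of the capture → detect → act loop. The class names `WindowCapture` and `YoloDetection`, their constructor arguments, and method signatures are assumptions based on the layout below, not the repository's exact API:

```python
# Illustrative capture -> detect -> act loop (class/method names are assumed,
# not necessarily identical to this repository's code).
import time
import pyautogui

from src.core.window_capture import WindowCapture   # assumed class name
from src.core.detection.yolo import YoloDetection   # import style used in this repo

def run_simple_loop():
    capture = WindowCapture("Lineage")               # attach to the game window by title
    detector = YoloDetection("models/yolo/chi2.pt")  # area-specific YOLO model

    while True:
        frame = capture.get_screenshot()             # grab the current game frame
        targets = detector.plot_bboxes(frame)        # [[x, y, w, h, name], ...]
        if targets:
            x, y, w, h, _name = targets[0]
            screen_x, screen_y = capture.get_screen_position((x + w // 2, y + h // 2))
            pyautogui.click(screen_x, screen_y)      # click the first detected target
        time.sleep(0.1)                              # throttle the loop
```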
```
lineage-bot/
├── src/
│   ├── core/                        # Reusable core components
│   │   ├── window_capture.py        # Win32 API window screenshot capture
│   │   ├── vision.py, vision_v2.py  # Detection rectangle to click point conversion
│   │   ├── detection/               # Object detection modules
│   │   │   ├── yolo.py              # YOLO v1 detection (threaded)
│   │   │   ├── yolo_v2.py           # YOLO v2 with enhanced result parsing
│   │   │   └── cascade.py           # Cascade classifier detection (legacy)
│   │   └── utils/
│   │       └── bot_utils.py         # Target sorting and positioning utilities
│   │
│   ├── bots/                        # Bot implementations organized by area
│   │   ├── chi2/                    # Chi2 cave bot
│   │   │   ├── bot.py               # Chi2Bot class with state machine
│   │   │   ├── mage.py              # Chi2 mage variant
│   │   │   └── runner.py            # Main entry point
│   │   ├── la3/                     # LA3 area bot
│   │   ├── sea3/                    # Sea3 area bot
│   │   └── dragon6/                 # Dragon6 area bot
│   │
│   └── login/                       # Login automation
│       └── auto_login.py
│
├── models/                          # All trained models
│   ├── yolo/                        # YOLO .pt model files
│   └── cascade/                     # Cascade classifier XML files
│
├── scripts/                         # Clean entry point scripts
│   ├── run_chi2.py
│   ├── run_la3.py
│   └── ...
│
├── tools/                           # Development and training tools
│   ├── capture.py                   # Screenshot capture utility
│   ├── train/                       # Model testing scripts
│   └── legacy/tutorial/             # Learning examples (archived)
│
└── bin/                             # Batch scripts (Windows)
```
Create a Python virtual environment:

```
python -m venv .venv
```

Activate the virtual environment:

```
# Windows
.\.venv\Scripts\activate

# macOS/Linux
source .venv/bin/activate
```

Install dependencies:

```
pip install -r requirements.txt
```
The project includes Windows batch scripts in `bin/` that handle environment setup and bot execution:

- `bin/chi2.bat` - Runs the Chi2 cave bot (requires the `LIN_BOT_PATH` environment variable)
- `bin/la3.bat` - Runs the LA3 bot
- `bin/sea3.bat` - Runs the Sea3 bot
- `bin/dragon6.bat` - Runs the Dragon6 bot

Each batch script:
- Uses the `LIN_BOT_PATH` environment variable

To run bots directly via `scripts/`:

```
# Run from project root
python scripts/run_chi2.py
python scripts/run_la3.py
python scripts/run_sea3.py
python scripts/run_dragon6.py
python scripts/run_login.py
```
Window Capture (`src/core/window_capture.py`):

- Captures game window screenshots via the Win32 API
- `get_screen_position()`: translates screenshot coordinates to screen coordinates
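A minimal sketch of what Win32-based window capture typically looks like with `pywin32` and NumPy; the actual `window_capture.py` may structure this differently, and the window title handling is an assumption:

```python
# Sketch of Win32 window capture with pywin32 + NumPy (assumed approach,
# not necessarily identical to src/core/window_capture.py).
import numpy as np
import win32con
import win32gui
import win32ui

def capture_window(window_title: str) -> np.ndarray:
    hwnd = win32gui.FindWindow(None, window_title)
    left, top, right, bottom = win32gui.GetWindowRect(hwnd)
    w, h = right - left, bottom - top

    wdc = win32gui.GetWindowDC(hwnd)
    dc = win32ui.CreateDCFromHandle(wdc)
    mem_dc = dc.CreateCompatibleDC()
    bmp = win32ui.CreateBitmap()
    bmp.CreateCompatibleBitmap(dc, w, h)
    mem_dc.SelectObject(bmp)
    mem_dc.BitBlt((0, 0), (w, h), dc, (0, 0), win32con.SRCCOPY)

    # Convert the raw BGRA buffer into an OpenCV-style BGR array
    buf = bmp.GetBitmapBits(True)
    img = np.frombuffer(buf, dtype=np.uint8).reshape((h, w, 4))[:, :, :3]

    # Release GDI resources to avoid handle leaks
    dc.DeleteDC()
    mem_dc.DeleteDC()
    win32gui.ReleaseDC(hwnd, wdc)
    win32gui.DeleteObject(bmp.GetHandle())
    return np.ascontiguousarray(img)
```

The same window rectangle is also what a `get_screen_position()` helper needs to translate capture coordinates back into screen coordinates.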
Detection Systems (`src/core/detection/`):

- `yolo.py`: YOLO-based object detection with `plot_bboxes()` returning `[x, y, w, h, name]`
- `yolo_v2.py`: Enhanced YOLO with `plot_result()` returning `[x, y, x2, y2, name, conf]`
- `cascade.py`: Cascade classifier-based detection (older approach, less accurate)
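For reference, a sketch of the kind of result parsing `plot_result()` describes, assuming the `ultralytics` package is the underlying YOLO implementation (the repository may wrap it differently):

```python
# Sketch of YOLO result parsing into [x, y, x2, y2, name, conf] lists,
# assuming the ultralytics package (an assumption, not confirmed by this repo).
from ultralytics import YOLO

class SimpleYoloDetector:
    def __init__(self, model_path: str, conf_threshold: float = 0.5):
        self.model = YOLO(model_path)
        self.conf_threshold = conf_threshold

    def plot_result(self, frame):
        """Return detections as [x, y, x2, y2, name, conf] lists."""
        results = self.model(frame, verbose=False)[0]
        detections = []
        for box in results.boxes:
            conf = float(box.conf[0])
            if conf < self.conf_threshold:
                continue
            x, y, x2, y2 = (int(v) for v in box.xyxy[0])
            name = results.names[int(box.cls[0])]
            detections.append([x, y, x2, y2, name, conf])
        return detections
```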
Vision (`src/core/vision.py`, `vision_v2.py`):

- `get_click_points()`: Returns a list of (x, y) tuples for clicking
- `draw_rectangles()`: Visualization for debugging
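A small sketch of the rectangle-to-click-point conversion these helpers perform, using the `[x, y, w, h, name]` detection format from the YOLO module above (function bodies are illustrative, not the repository's exact code):

```python
# Sketch of detection-rectangle -> click-point conversion and debug drawing
# (illustrative; the real vision.py / vision_v2.py may differ in detail).
import cv2

def get_click_points(rectangles):
    """Convert [x, y, w, h, ...] rectangles into their center (x, y) points."""
    return [(x + w // 2, y + h // 2) for x, y, w, h, *_ in rectangles]

def draw_rectangles(frame, rectangles, color=(0, 255, 0)):
    """Debug visualization: draw each detection rectangle on the frame."""
    for x, y, w, h, *_ in rectangles:
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
    return frame
```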
Bot State Machine (`src/bots/*/bot.py`):

- Each area bot (e.g., `Chi2Bot`) implements a state machine that drives detection, attacking, and movement

Runner Scripts (`src/bots/*/runner.py`):

- Each runner exposes a `main()` entry point function
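A runner might be wired up roughly like this; the `Chi2Bot` constructor arguments and `run()` method are assumptions:

```python
# Sketch of a runner's main() (what scripts/run_chi2.py would delegate to);
# constructor arguments and method names are assumptions.
from src.bots.chi2.bot import Chi2Bot
from src.core.detection.yolo import YoloDetection

DEBUG = True  # enables OpenCV visualization

def main():
    detector = YoloDetection('models/yolo/chi2.pt')  # relative path from project root
    bot = Chi2Bot(detector, debug=DEBUG)
    bot.run()

if __name__ == "__main__":
    main()
```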
Utilities (`src/core/utils/bot_utils.py`):

- `targets_ordered_by_distance()`: Filters and sorts targets by distance with inner/outer radius constraints
- `find_next_target()`: Finds the next non-ignored target from the sorted list
- `get_screen_position()`: Translates screenshot coordinates to screen coordinates
- `is_duplicated_target()`: Checks whether a target position has already been attacked
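A sketch of how these helpers fit together. Signatures are assumptions; in particular, the duplicate check here uses a simple pixel tolerance, whereas the notes later in this file describe the repository tracking summed coordinates (x + y + distance):

```python
# Illustrative versions of the bot_utils helpers (assumed signatures).
import math

def targets_ordered_by_distance(targets, center, inner_radius, outer_radius):
    """Keep (x, y) targets whose distance from center lies in [inner, outer], nearest first."""
    def dist(t):
        return math.hypot(t[0] - center[0], t[1] - center[1])
    return sorted((t for t in targets if inner_radius <= dist(t) <= outer_radius), key=dist)

def is_duplicated_target(target, ignore_positions, tolerance=20):
    """True if the target is within `tolerance` pixels of an already-attacked position."""
    return any(abs(target[0] - x) + abs(target[1] - y) <= tolerance for x, y in ignore_positions)

def find_next_target(sorted_targets, ignore_positions):
    """Return the first target not already ignored, or None."""
    for t in sorted_targets:
        if not is_duplicated_target(t, ignore_positions):
            return t
    return None
```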
Each bot class defines tunable constants:

- `INNER_IGNORE_RADIUS` / `OUTER_IGNORE_RADIUS`: Target detection range (typically 0-500 pixels)
- `SKILL_F7_DELAY`, `SKILL_F9_DELAY`: Cooldown timers for skills (seconds)
- `SKILL_MOVE_DELAY`: Time before triggering movement (typically 20-30s)
- `DETECTION_WAITING_THRESHOLD`: Timeout before moving when no targets are found (5-8s)
- `ATTACK_INTERVAL`: Duration of the attack action (1-1.5s)
- `ENABLE_F7`, `ENABLE_F9`, `ENABLE_MOVING`: Feature flags to enable or disable specific behaviors
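As an illustration of how these constants could drive a bot's loop (state transitions and method names here are assumptions, not the repository's exact logic):

```python
# Illustrative sketch: how the tunable constants might drive a bot's loop.
# State handling and method names are assumptions, not the repo's exact code.
import time

class ExampleAreaBot:
    INNER_IGNORE_RADIUS = 0
    OUTER_IGNORE_RADIUS = 500
    SKILL_MOVE_DELAY = 25
    DETECTION_WAITING_THRESHOLD = 6
    ATTACK_INTERVAL = 1.2
    ENABLE_MOVING = True

    def __init__(self):
        self.last_detect_time = time.time()

    def step(self, targets):
        now = time.time()
        if targets:
            self.last_detect_time = now
            self.attack(targets[0])      # engage the nearest target
            time.sleep(self.ATTACK_INTERVAL)
        elif self.ENABLE_MOVING and now - self.last_detect_time > self.DETECTION_WAITING_THRESHOLD:
            self.move_to_new_spot()      # nothing found for too long -> relocate

    def attack(self, target):
        ...

    def move_to_new_spot(self):
        ...
```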
All major components run in separate threads with Lock-based synchronization:

- Shared state is protected with the `Lock.acquire()` / `Lock.release()` pattern
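A minimal example of this pattern as it might appear between the detection thread and the bot thread (the shared variable and function names are hypothetical):

```python
# Sketch of the Lock.acquire()/Lock.release() pattern for sharing detections
# between threads (variable names are hypothetical).
import threading

lock = threading.Lock()
latest_targets = []          # written by the detection thread, read by the bot thread

def publish_targets(targets):
    lock.acquire()
    try:
        latest_targets[:] = targets   # replace the shared list's contents under the lock
    finally:
        lock.release()

def read_targets():
    lock.acquire()
    try:
        return list(latest_targets)   # return a copy so callers work outside the lock
    finally:
        lock.release()
```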
Models are stored in the `models/` directory at the project root:

- YOLO models (`models/yolo/`): `.pt` files for different areas (`chi2.pt`, `la3-v3.pt`, `stairs2.pt`, etc.)
- Cascade models (`models/cascade/`): XML files for cascade classifiers (legacy)

Runner scripts reference models with relative paths from the project root:
`'models/yolo/chi2.pt'`
Adding a new area bot:

- Create the bot class under `src/bots/new_area/` (model it on an existing bot such as `chi2/bot.py`)
- Add a `runner.py` with proper imports
- Place the trained model in `models/yolo/`
- Add an entry-point script `scripts/run_new_area.py`
- Add a batch script `bin/new_area.bat`

Training and testing models:

- Capture screenshots with `tools/capture.py`
- Place the trained `.pt` file in `models/yolo/`
- Test detections with `tools/train/test_yolo_img.py`

Implementation and debugging notes:

- Set `DEBUG = True` in the runner script to enable OpenCV visualization (see the sketch after this list)
- The character is assumed to be at the window center (`window_w/2`, `window_h/2`); screenshot coordinates are translated to screen coordinates with `get_screen_position()`
- Attacked targets are stored in an `ignore_positions` list with summed coordinates (x + y + distance) to avoid re-clicking; the list is cleared on F5 press
- Timing state is tracked with timestamps (`last_detect_time`, `last_move_time`, `last_search_time`)
- Imports are absolute from the project root (e.g., `from src.core.detection.yolo import YoloDetection`)
- `tools/legacy/tutorial/` holds archived learning examples (a progression from basic template matching to a full bot)
- `tools/train/` holds model testing scripts
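For the `DEBUG = True` visualization mentioned above, the debug path typically looks something like this sketch (window name and detection format follow the conventions in this file; the exact code is an assumption):

```python
# Sketch of the DEBUG visualization path: draw detections and show them in an
# OpenCV window (illustrative; the real runner code may differ).
import cv2

DEBUG = True  # set in the runner script

def debug_show(frame, detections):
    """Draw [x, y, w, h, name] detections on a copy of the frame and display it."""
    if not DEBUG:
        return
    vis = frame.copy()
    for x, y, w, h, name, *rest in detections:
        cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(vis, str(name), (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("lineage-bot debug", vis)
    cv2.waitKey(1)   # required for the OpenCV window to refresh
```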