Markdown Converter
Agent skill for markdown-converter
**COUNT(\*) returns 0 for vector tables** - libSQL's vector extension has a quirk where `SELECT COUNT(*) FROM embeddings` returns 0. Always count a specific column instead:
```sql
-- WRONG: returns 0
SELECT COUNT(*) FROM embeddings

-- CORRECT: returns actual count
SELECT COUNT(chunk_id) FROM embeddings
```
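
For example, a minimal sketch of the same workaround from application code, assuming the `@libsql/client` package and the `embeddings`/`chunk_id` names used above:

```ts
import { createClient } from "@libsql/client";

// Hypothetical local database URL - point this at your actual libSQL instance.
const db = createClient({ url: "file:local.db" });

// COUNT(*) reports 0 on vector tables, so count a concrete column instead.
const result = await db.execute("SELECT COUNT(chunk_id) AS n FROM embeddings");
console.log(result.rows[0].n);
```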
**Vector index shadow tables are MASSIVE** - The `*_idx_shadow` tables store neighbor graphs for HNSW search. Each row averages ~100KB. For 500k embeddings, expect ~48GB just for the index.
```
┌──────────────────────────────────────────────────────────────────────┐
│ DB SIZE BREAKDOWN (500k chunks)                                      │
├──────────────────────────────────────────────────────────────────────┤
│ embeddings_idx_shadow   ~48GB (92%)   - HNSW neighbor graphs         │
│ embeddings              ~1.9GB (4%)   - 500k × 1024 dims × 4 bytes   │
│ chunks                  ~180MB (<1%)  - actual text content          │
│ chunks_fts              ~200MB        - full-text search index       │
└──────────────────────────────────────────────────────────────────────┘
```
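
A quick back-of-the-envelope check of those numbers (treating GB/MB as binary units and assuming ~100KB of neighbor-graph data per shadow row and float32 vector components, as stated above):

```ts
const CHUNKS = 500_000;
const DIMS = 1024;

// embeddings: one float32 vector per chunk, 4 bytes per dimension
const embeddingsBytes = CHUNKS * DIMS * 4;
console.log((embeddingsBytes / 1024 ** 3).toFixed(1)); // ~1.9 GB

// embeddings_idx_shadow: ~100KB of HNSW neighbor-graph data per row
const shadowBytes = CHUNKS * 100 * 1024;
console.log((shadowBytes / 1024 ** 3).toFixed(1)); // ~47.7 GB, i.e. the ~48GB above
```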
Potential optimizations:
- `compress_neighbors=float8` in index (already enabled)

Use the simple model string pattern with Vercel AI Gateway:
```ts
import { generateObject } from "ai";
import { z } from "zod";

const { object } = await generateObject({
  model: "anthropic/claude-haiku-4-5",
  schema: MyZodSchema,
  prompt: "...",
});
```
No provider setup needed - uses the `AI_GATEWAY_API_KEY` env var automatically.
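
`MyZodSchema` in the snippet above is whatever Zod schema describes the object you want back. A hypothetical example for an enrichment/tagging step might look like:

```ts
import { z } from "zod";

// Hypothetical schema - replace the fields with whatever your enrichment step returns.
const MyZodSchema = z.object({
  title: z.string(),
  tags: z.array(z.string()).max(5),
  summary: z.string(),
});
```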
- `src/services/LibSQLDatabase.ts` - Database layer with Effect
- `src/services/AutoTagger.ts` - LLM enrichment logic
- `src/services/TaxonomyService.ts` - SKOS concept management
- `src/cli.ts` - CLI commands
- `data/taxonomy.json` - Starter taxonomy seed data