# Markdown Converter
Agent skill for markdown-converter
This repository is a coding-interview practice workspace with a Tk/Ttk GUI for generating problem stubs, editing code, and running tests. Everything lives under `/Users/andrewzhao/Documents/coding_interview`.
## Key components

- `tools/gui.py`: Main desktop GUI. Provides the in-app editor (smart indent/backspace, syntax highlighting), topic browser, stats pane, streak-tracking overlay, test-runner integration, and a collapsible Copilot (local LLM via Ollama) with streaming chat and Markdown rendering.
- `tools/generate_entry.py`: Registry of problems (`ProblemSpec`). Generates stubs/tests and backs the GUI and CLI topic list.
- `workspace/`: Auto-created package where generated solutions live. Files here run via `python -m workspace.<module>`.
- `ml/`, `coding/`, `leetcode/`: Reference implementations and topic material used for canonical solutions/tests.
- `Makefile`: Shortcuts (`make` / `make gui`) to launch the GUI.

## Typical workflow

- Launch with `python -m tools` (or `python tools/gui.py`).
- Generate writes a stub into `workspace/` and opens it in the embedded editor.
- Save with Cmd/Ctrl+S or the Save button (a red dot signals unsaved changes).
- Run tests with the Run Tests button; output appears in the Output tab. Stats update automatically, the streak counter animates on consecutive passes, and a celebratory popup appears on success.
- Open the Copilot (Copilot ▶ in the header, or Cmd/Ctrl+Shift+C). Ask questions; it streams replies with Markdown (code fences, lists, tables, bold/italic). Context includes the current topic, your code, and the canonical implementation.
- Copy Review Prompt (always enabled) pushes a comparison prompt to the clipboard and shows a quick "Prompt copied" toast.
- View Canonical..
- Stats persist in `.practice_stats.json` in the repo root; session temp files are tracked in the GUI.
- To add a topic, edit `generate_entry.py` and, if needed, `gui.canonical_path_for_topic`.

## Agent guidelines

- Use `rg` for searching; shell commands should be invoked with `["bash","-lc", ...]` and workdir set (see the subprocess sketch after the Copilot section below).
- Use `apply_patch` for targeted changes; do not overwrite user content or undo unrelated modifications.
- Run generated solutions via `python -m workspace.<module>`.

## Copilot internals

- Talks to Ollama's chat endpoint (`/api/chat`) with streaming; keeps models warm via `keep_alive` and reuses KV via the returned context.
- The panel is built in `tools/gui.py` via `_build_copilot_panel` and toggled by `toggle_copilot`.
- `_assemble_copilot_context()` gathers topic, candidate code, and canonical content.
- `_ollama_chat_stream()` handles newline-delimited JSON and updates Tk widgets via `after()` (both patterns are sketched below).
- `_apply_markdown_to_range()` renders headings, bold/italic, inline/fenced code, lists, blockquotes, and Markdown tables.
- `_maybe_start_ollama()` spawns `ollama serve` if found; controlled by `PRACTICE_AUTOSTART_OLLAMA` and `--no-llm-serve`.
- `_maybe_prewarm_kv()` posts `num_predict: 0` with context at startup to reduce first-token latency; the captured context is reused on later turns.
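To make the streaming protocol concrete, here is a minimal sketch of consuming the `/api/chat` newline-delimited JSON stream with only the standard library. It illustrates the pattern, not the actual `_ollama_chat_stream()` implementation; the `keep_alive` value and the `chat_stream` helper name are assumptions.

```python
import json
import os
import urllib.request

BASE = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")

def chat_stream(messages, model="gpt-oss:20b"):
    """Yield reply chunks from Ollama's /api/chat NDJSON stream."""
    payload = {
        "model": model,
        "messages": messages,   # e.g. [{"role": "user", "content": "..."}]
        "stream": True,         # server sends one JSON object per line
        "keep_alive": "10m",    # assumed value: keeps the model warm between turns
    }
    req = urllib.request.Request(
        f"{BASE}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:       # newline-delimited JSON chunks
            chunk = json.loads(line)
            if chunk.get("done"):
                break
            yield chunk["message"]["content"]
```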
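Because the stream is consumed on a worker thread, widget updates must be marshaled back to the Tk main loop with `after()`. A sketch of that pattern, reusing `chat_stream` from above (the widget names are illustrative, not the GUI's):

```python
import threading
import tkinter as tk

root = tk.Tk()
out = tk.Text(root, wrap="word")
out.pack(fill="both", expand=True)

def append(token: str) -> None:
    # Runs on the Tk main loop: safe to touch widgets here.
    out.insert("end", token)
    out.see("end")

def worker() -> None:
    # Runs off the main thread: never touch Tk widgets directly here.
    # chat_stream is the helper sketched above.
    for token in chat_stream([{"role": "user", "content": "Explain big-O."}]):
        root.after(0, append, token)  # marshal each token to the UI thread

threading.Thread(target=worker, daemon=True).start()
root.mainloop()
```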
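The prewarm step can be approximated as below: per the public Ollama API, a chat request with an empty `messages` list loads the model without generating, and `num_predict: 0` caps generation at zero tokens. This is a sketch of the idea only; `_maybe_prewarm_kv()`'s exact payload, including how it captures and reuses `context`, lives in `tools/gui.py`.

```python
import json
import os
import urllib.request

BASE = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")

def prewarm(model: str = "gpt-oss:20b") -> None:
    """Load the model into memory without producing any tokens."""
    payload = {
        "model": model,
        "messages": [],                 # empty chat: load the model only
        "stream": False,
        "keep_alive": "10m",            # assumed value: keep it resident afterwards
        "options": {"num_predict": 0},  # generate zero tokens
    }
    req = urllib.request.Request(
        f"{BASE}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()  # blocks until the model is loaded
```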
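Finally, the shell-invocation convention from the agent guidelines (`["bash","-lc", ...]` with an explicit workdir) looks like this in Python; the `sh` helper and the example `rg` query are hypothetical:

```python
import subprocess

REPO = "/Users/andrewzhao/Documents/coding_interview"

def sh(cmd: str, workdir: str = REPO) -> str:
    """Hypothetical helper: run a command as ["bash", "-lc", ...] with workdir set."""
    result = subprocess.run(
        ["bash", "-lc", cmd],
        cwd=workdir,
        capture_output=True,
        text=True,
        check=True,   # raise on non-zero exit
    )
    return result.stdout

# Example: search the problem registry with rg.
# print(sh("rg -n 'ProblemSpec' tools/"))
```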
## Caveats/notes

- Default model is `gpt-oss:20b`; on small-RAM machines, advise switching to a 7B–14B model.
- Keep `_assemble_copilot_context()` and prewarm behavior in sync. Consider re-prewarming on topic changes.
- Tk widget updates from worker threads must go through `after()`.

## Commands

- `python -m tools`: launch GUI.
- `python -m tools --no-llm-serve`: launch GUI without auto-starting Ollama.
- `python tools/generate_entry.py --list`: list available topics.
- `python tools/generate_entry.py --topic <id>`: generate stub/tests without GUI.
- `python -m workspace.<module>`: run a generated solution (tests included in file).

## Debugging

- Event handlers live in `tools/gui.py` (search for `_on_*`).
- Stats logic lives in `StatsManager` in the same file.
- Scheduled UI updates run via `after`.
- Check that `self.text.bind(...)` matches handler logic.
- Toasts use `_show_prompt_copied`; if they stop appearing, confirm the `Toplevel` overlay is created (may fail on some WMs without display permissions).
- Copilot needs a running Ollama (`ollama serve`). The status pill shows "Starting Ollama…" / "Ollama ready". Set `OLLAMA_BASE_URL` if running on a non-default host/port (a minimal connectivity probe is sketched at the end of this file).
- Tune `keep_alive` if needed.

Keep this file updated when behaviors or workflows change so agents have immediate context.
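For the Ollama troubleshooting item above, a minimal connectivity probe. It assumes the standard `/api/tags` endpoint and the `OLLAMA_BASE_URL` convention; the `ollama_ready` name is illustrative.

```python
import json
import os
import urllib.error
import urllib.request

BASE = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")

def ollama_ready() -> bool:
    """Return True if an Ollama server answers on BASE."""
    try:
        with urllib.request.urlopen(f"{BASE}/api/tags", timeout=2) as resp:
            models = json.load(resp).get("models", [])
            print("Ollama ready; local models:", [m["name"] for m in models])
            return True
    except (urllib.error.URLError, OSError):
        print("Ollama not reachable at", BASE)
        return False

if __name__ == "__main__":
    ollama_ready()
```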