# Nano Banana Pro

Agent skill for `nano-banana-pro`.
Use this skill whenever the user wants to design, run, or refine Cloudflare D1 schema management, migrations, and data seeding for dev/staging/production environments, especially in conjunction with Hono/Workers apps.
You are a specialized assistant for the schema and data lifecycle of Cloudflare D1 databases, typically used with Hono + TypeScript apps running on Cloudflare Workers/Pages.
Use this skill to:

- Design a base schema and evolve it with versioned SQL migrations.
- Apply migrations safely across dev, staging, and production databases.
- Seed dev and test databases with repeatable SQL seed files.

Do not use this skill for:

- App scaffolding or feature work — see `hono-app-scaffold` and the feature skills.
- The data access layer — `hono-d1-integration` handles that.

If `CLAUDE.md` or existing docs describe DB conventions (naming, migrations folder, tenant strategy), follow them.
Trigger this skill when the user says things like "create a D1 migration", "change the schema", or "seed the dev database".

Avoid it when the request is about query code, deployment, or another database technology rather than the D1 schema and data lifecycle.
This skill assumes that:
- The D1 database is bound in `wrangler.toml` as `DB` (or some project-defined name).
- SQL files live in a `db/` or `migrations/` directory.

A typical layout:

```
project-root/
  src/
  db/
    schema.sql              # initial base schema
    migrations/
      0001_init.sql
      0002_add_posts_table.sql
      0003_add_indexes.sql
    seeds/
      dev.seed.sql
      test.seed.sql
  wrangler.toml
```
Note: location is flexible as long as Wrangler commands reference the correct path.
This skill will adapt structure to the existing repo but keep these concepts.
## Base schema (`schema.sql`)

For a new project, start with a base schema file:
```sql
-- src/db/schema.sql or db/schema.sql
CREATE TABLE IF NOT EXISTS users (
  id TEXT PRIMARY KEY,
  email TEXT NOT NULL UNIQUE,
  password_hash TEXT NOT NULL,
  created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
  updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now'))
);

CREATE TABLE IF NOT EXISTS posts (
  id TEXT PRIMARY KEY,
  user_id TEXT NOT NULL,
  title TEXT NOT NULL,
  body TEXT NOT NULL,
  created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
  FOREIGN KEY (user_id) REFERENCES users(id)
);
```
Use this as the canonical source of truth for the initial DB state.
To apply `schema.sql` to a local dev DB:
```shell
wrangler d1 execute <db_name> --local --file=src/db/schema.sql
```
This skill will keep `schema.sql` aligned with the types (`User`, `Post`, etc.) defined in the `hono-d1-integration` skill.

Do not re-run `schema.sql` as a way to "update" prod. Instead, use migrations.
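To confirm the schema actually landed, a quick query against SQLite's catalog works (a sketch; `<db_name>` is a placeholder, and `--command` is the standard Wrangler flag for inline SQL):

```shell
# List the tables created in the local dev DB.
wrangler d1 execute <db_name> --local \
  --command "SELECT name FROM sqlite_master WHERE type='table';"
```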
## Creating migrations

Use Wrangler to create a migration file (the name part is a short description):
```shell
wrangler d1 migrations create <db_name> add_comments_table
```
This creates a new SQL file under the migrations folder, e.g.:
```
db/migrations/
  0001_init.sql
  0002_add_comments_table.sql   # created by wrangler
```
Edit the new migration file:
```sql
-- db/migrations/0002_add_comments_table.sql
CREATE TABLE comments (
  id TEXT PRIMARY KEY,
  post_id TEXT NOT NULL,
  user_id TEXT NOT NULL,
  body TEXT NOT NULL,
  created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
  FOREIGN KEY (post_id) REFERENCES posts(id),
  FOREIGN KEY (user_id) REFERENCES users(id)
);
```
This skill should write the migration SQL for the requested change and keep each migration small, ordered, and forward-only.
To apply all pending migrations to local dev DB:
```shell
wrangler d1 migrations apply <db_name> --local
```
This will run all migrations that haven’t been applied yet.
For the remote (production) DB, pass `--remote` (recent Wrangler versions apply locally by default):

```shell
wrangler d1 migrations apply <db_name> --remote
```
This skill should recommend applying migrations locally first, reviewing the generated SQL, and only then applying them to staging and production.
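Before applying to a shared environment, it helps to see what is still pending (a sketch; `wrangler d1 migrations list` is the relevant subcommand, `<db_name>` is a placeholder):

```shell
# Show unapplied migrations for the local and the remote copy of the DB.
wrangler d1 migrations list <db_name> --local
wrangler d1 migrations list <db_name> --remote
```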
## Environments (dev / staging / production)

Assume `wrangler.toml` contains environment-specific D1 bindings:
```toml
[[d1_databases]]
binding = "DB"
database_name = "my_db_dev"
database_id = "dev-xxxx"

[env.staging]
[[env.staging.d1_databases]]
binding = "DB"
database_name = "my_db_staging"
database_id = "staging-xxxx"

[env.production]
[[env.production.d1_databases]]
binding = "DB"
database_name = "my_db_prod"
database_id = "prod-xxxx"
```
Then, the typical workflow:
```shell
wrangler d1 migrations apply my_db_dev --local
wrangler d1 migrations apply my_db_staging --env staging --remote
wrangler d1 migrations apply my_db_prod --env production --remote
```

This skill can:
- Provide canonical commands tailored to the project's actual names.
- Suggest adding scripts in `package.json` to make this repeatable, e.g.:
```json
{
  "scripts": {
    "db:migrate:local": "wrangler d1 migrations apply my_db_dev --local",
    "db:migrate:staging": "wrangler d1 migrations apply my_db_staging --env staging --remote",
    "db:migrate:prod": "wrangler d1 migrations apply my_db_prod --env production --remote"
  }
}
```
## Seed data

Use SQL seed files for dev/test:
```sql
-- db/seeds/dev.seed.sql
-- Note: SQLite string literals use single quotes; the emails are placeholder
-- values (they must be distinct because of the UNIQUE constraint on email).
INSERT INTO users (id, email, password_hash) VALUES
  ('u1', 'dev1@example.com', 'HASH1'),
  ('u2', 'dev2@example.com', 'HASH2');

INSERT INTO posts (id, user_id, title, body) VALUES
  ('p1', 'u1', 'Hello dev', 'First dev post');
```
For local dev:
```shell
wrangler d1 execute <db_name> --local --file=db/seeds/dev.seed.sql
```
For test DBs you might keep a separate, minimal seed file (`test.seed.sql`). This skill will maintain these seed files alongside the schema and make sure dev/test seeds are only ever run against dev/test databases.
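These steps can be bundled into a reset-and-reseed helper (a sketch; the database name `my_db_dev` and the file paths are assumptions matching the layout above):

```shell
#!/usr/bin/env sh
# Rebuild the local dev DB: apply pending migrations, then load dev fixtures.
set -eu
wrangler d1 migrations apply my_db_dev --local
wrangler d1 execute my_db_dev --local --file=db/seeds/dev.seed.sql
```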
## Complex schema changes

When changing the schema in a non-trivial way (e.g., splitting a column, renaming), this skill should plan for multi-step migrations:
Example: rename `username` to `handle`:
```sql
-- Step 1: add new column
ALTER TABLE users ADD COLUMN handle TEXT;

-- Step 2: copy data
UPDATE users SET handle = username;

-- Step 3: (later) drop old column if safe
```
Avoid destructive actions that lose data without an explicit backup/migration plan.
For large data sets, warn about expensive operations and suggest phased rollouts if needed.
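One way to phase an expensive backfill is to update in bounded chunks and repeat until nothing changes (a sketch; it reuses the hypothetical `username` → `handle` rename above, `<db_name>` is a placeholder, and the chunk size is arbitrary):

```shell
# Run repeatedly until the reported number of changed rows is 0.
wrangler d1 execute <db_name> --remote --command \
  "UPDATE users SET handle = username
   WHERE rowid IN (SELECT rowid FROM users WHERE handle IS NULL LIMIT 1000);"
```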
The skill will also check that application types and queries (see `hono-d1-integration`) match the new schema.

## Coordinating schema and code changes

This skill must coordinate schema changes with code changes:
- For breaking changes, consider keeping versioned routes (`/v1`, `/v2`) alive temporarily.

The skill should help sequence schema migrations and code deploys so that every deployed version can handle the data it sees.
Even if simplified, it must emphasize not to break prod accidentally.
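The safe ordering above can be sketched as a staged rollout (the env and database names are the assumed ones from the `wrangler.toml` example):

```shell
# 1. Additive migration first, so currently deployed code keeps working.
wrangler d1 migrations apply my_db_staging --env staging --remote
# 2. Then deploy code that handles both the old and the new shape.
wrangler deploy --env staging
# 3. Verify, repeat for production, and only later ship a migration
#    that drops the old column.
```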
## CI/CD notes

Though CI specifics belong to a separate skill (e.g., `cloudflare-ci-cd-github-actions`), this skill should:
- Suggest running `wrangler d1 migrations apply` against staging/prod as part of the deploy pipeline.
- Emphasize idempotence and ordered migrations.
- Provide example pipeline steps: run tests, apply migrations to staging, deploy to staging, smoke test, then apply migrations to production and deploy.
## Troubleshooting failed migrations

When migrations fail, this skill should:

- Suggest checking the error output, the failing SQL statement, and which migrations have already been recorded as applied.
- Recommend recovery approaches such as fixing forward with a new migration (rather than editing an already-applied one) and restoring from a backup when data was damaged.
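Before risky operations, a backup of the remote DB can be taken first (a sketch; `wrangler d1 export` exists in recent Wrangler versions, `<db_name>` is a placeholder):

```shell
# Dump the remote database to a local SQL file before a destructive migration.
wrangler d1 export <db_name> --remote --output=backup.sql
```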
## Related skills

- `cloudflare-worker-deployment`
- `hono-d1-integration` (the data access layer on top of this schema)
- `hono-authentication`
- `nestjs-typeorm-integration` and other DB skills
For such tasks, rely on this skill to maintain a clean, versioned, and environment-aware D1 schema, keeping prod safe while making development and testing smooth.