Nano Banana Pro
Agent skill for nano-banana-pro
Date: 2025-11-01
You are working in a repo called cf_ai_askmycampus, created from the Cloudflare vite-react template.
Task:
Create a basic chat UI in React:
A chat history area that shows messages in order.
An input box and a Send button.
Local state for `messages` (array of `{ role: "user" | "assistant", content: string }`).
On Send:
Append the user's message to local state immediately.
POST to `/api/chat` with `{ sessionId, message }` as JSON.
Read the JSON response `{ reply }` and append it as `{ role: "assistant", content: reply }`.
Generate or reuse a `sessionId` and store it in localStorage so it persists across page reloads:
On first load, if no sessionId exists, create a random UUID and store it.
Include `sessionId` in the POST body.
Add very simple styling (just flex/column, scrollable history div, nothing fancy).
Update the main App component (or create a Chat component and render it from App) so that when I run the dev server I see the chat box.
Also, append THIS EXACT prompt text and a short summary of what code you generated to a file called `PROMPTS.md` at the repo root. If PROMPTS.md doesn't exist yet, create it.
Notes:
Don't wire up the backend call logic yet beyond the `fetch("/api/chat")` call. The `/api/chat` Worker doesn't exist yet – that's the next step.
Keep the code clean and readable, TypeScript if the template is TS, JS if the template is JS.
Do not add any external component libraries.
Goal:
I want a minimal working chat UI that can talk to `/api/chat` once I implement the backend.
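Not part of the original prompt: a minimal sketch of how the `sessionId` persistence described above could be handled, assuming `crypto.randomUUID()` is available (it is in current browsers). The file name and storage key are illustrative only.

```ts
// getSessionId.ts - hypothetical helper, not the repo's actual code.
// Returns a stable per-browser session id, creating one on first load.
export function getSessionId(): string {
  const KEY = "sessionId"; // localStorage key chosen for illustration
  let id = localStorage.getItem(KEY);
  if (!id) {
    id = crypto.randomUUID(); // available in modern browsers on secure origins / localhost
    localStorage.setItem(KEY, id);
  }
  return id;
}
```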
Created a fully functional chat UI with the following components:
Chat.tsx - Main chat component that sends requests to the /api/chat endpoint with sessionId and message in the POST body
Chat.css - Simple, clean styling
App.tsx - Updated to render the Chat component instead of boilerplate content
index.css - Modified body and #root styles to support full-height chat layout
The implementation uses no external libraries, is fully TypeScript-typed, and is ready to connect to the backend /api/chat endpoint once it is implemented.
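The repo's actual Chat.tsx isn't reproduced here, but as a rough illustration of the behaviour described above, a component meeting the prompt could look something like the sketch below, reusing the hypothetical `getSessionId` helper from the earlier snippet and assuming the backend answers POST `/api/chat` with `{ reply }`.

```tsx
// Sketch only: not the repo's actual Chat.tsx.
import { useState } from "react";
import { getSessionId } from "./getSessionId";

type Message = { role: "user" | "assistant"; content: string };

export function Chat() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState("");

  async function send() {
    const text = input.trim();
    if (!text) return;
    setInput("");
    // Show the user's message immediately, before the backend responds.
    setMessages((prev) => [...prev, { role: "user", content: text }]);

    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ sessionId: getSessionId(), message: text }),
    });
    const { reply } = (await res.json()) as { reply: string };
    setMessages((prev) => [...prev, { role: "assistant", content: reply }]);
  }

  return (
    <div style={{ display: "flex", flexDirection: "column", height: "100%" }}>
      <div style={{ flex: 1, overflowY: "auto" }}>
        {messages.map((m, i) => (
          <p key={i}>
            <strong>{m.role}:</strong> {m.content}
          </p>
        ))}
      </div>
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={send}>Send</button>
    </div>
  );
}
```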
Date: 2025-11-01
You are editing the cf_ai_askmycampus project created from the Cloudflare vite-react template.
Goal:
Implement a Worker route `/api/chat` that will act as the coordination layer for the AI assistant.
Requirements:
Create (or update) the Worker entrypoint so it handles a POST request to `/api/chat`.
Parse JSON from the request body: `{ sessionId, message }`.
If either is missing, return 400.
Conversation memory:
We'll be using a KV namespace for memory. For now:
Assume we have a binding called `CHAT_HISTORY` (Cloudflare KV).
Key is the `sessionId`.
Value is a JSON string of an array of messages shaped like `{ role: "user" | "assistant", content: string }[]`.
On each request:
Get the history from KV (if missing, default to `[]`).
Append the new user message to the history.
Build a prompt string for the LLM from that history. For now just join messages like:
"user: ...\nassistant: ...\nuser: ...\n"
Call a placeholder `generateAssistantReply(prompt)` function (see #3).
Append the assistant reply to history.
Save the updated history back to KV under that same `sessionId`.
Stub the LLM call:
Implement `async function generateAssistantReply(prompt: string, env: Env): Promise<string>`.
For now, DO NOT call a real model. Just return something like:
"This is a placeholder AI response based on: " + prompt.slice(0, 200)
We will later replace this with a real Workers AI (Llama 3.3) call.
Response:
{ reply: <assistantReply> } so the frontend can render it.
Types / bindings:
Define an `Env` interface that includes:
CHAT_HISTORY: KVNamespace (for KV)
Make sure the fetch handler is `export default { fetch(request, env) { ... } }` or whatever pattern the template uses.
Use TypeScript if the Worker is TS, JS if the Worker is JS. Match the template.
Add/update `wrangler.toml`:
Add a KV binding stub so we remember to wire it up for real deploy:
```toml
[[kv_namespaces]]
binding = "CHAT_HISTORY"
id = "CHAT_HISTORY_DEV"
```
If the template already has wrangler.toml, modify it instead of creating a duplicate. If something similar already exists, extend it.
If wrangler.toml is managed differently in this template, follow that structure.
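Note: the summary further down says this template actually uses `wrangler.json` rather than `wrangler.toml`; in that case the relevant fragment (other fields omitted) would look roughly like the following, where the `id` is just a placeholder and not a real namespace id.

```json
{
  "kv_namespaces": [
    { "binding": "CHAT_HISTORY", "id": "CHAT_HISTORY_DEV" }
  ]
}
```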
Make sure CORS / headers are fine for local dev:
Return a Response with `content-type: application/json`.
You can assume same-origin fetch from the React dev server for now, so no need to overcomplicate CORS.
IMPORTANT FOR THE ASSIGNMENT:
Append THIS EXACT prompt and a short summary of the code you generated to `PROMPTS.md` at the repo root. If PROMPTS.md exists, append. If it doesn't, create it.
The summary in PROMPTS.md should mention:
Added /api/chat route
KV-based memory per sessionId
placeholder LLM function
updated wrangler.toml with KV binding
After this change, I should be able to:
run wrangler dev
send a POST to /api/chat with { sessionId, message }
get back { reply: "placeholder..." }
and see that memory is persisted per session in KV.
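A rough sketch of how these requirements could fit together is shown below. It is not the repo's actual worker/index.ts; it assumes the module Worker syntax and `@cloudflare/workers-types` for the `KVNamespace` type.

```ts
// Sketch only: not the repo's actual worker/index.ts.
interface Env {
  CHAT_HISTORY: KVNamespace;
}

type Message = { role: "user" | "assistant"; content: string };

// Placeholder until a real Workers AI (Llama 3.3) call replaces it.
async function generateAssistantReply(prompt: string, _env: Env): Promise<string> {
  return "This is a placeholder AI response based on: " + prompt.slice(0, 200);
}

// Join the history into "role: content" lines, as the prompt describes.
function buildPrompt(history: Message[]): string {
  return history.map((m) => `${m.role}: ${m.content}`).join("\n") + "\n";
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname !== "/api/chat" || request.method !== "POST") {
      return new Response("Not found", { status: 404 });
    }

    const body = (await request.json()) as { sessionId?: string; message?: string };
    if (!body.sessionId || !body.message) {
      return new Response(JSON.stringify({ error: "sessionId and message are required" }), {
        status: 400,
        headers: { "content-type": "application/json" },
      });
    }

    // Load this session's history from KV, defaulting to an empty array.
    const history =
      ((await env.CHAT_HISTORY.get(body.sessionId, "json")) as Message[] | null) ?? [];
    history.push({ role: "user", content: body.message });

    const reply = await generateAssistantReply(buildPrompt(history), env);
    history.push({ role: "assistant", content: reply });

    // Persist the updated conversation under the same sessionId key.
    await env.CHAT_HISTORY.put(body.sessionId, JSON.stringify(history));

    return new Response(JSON.stringify({ reply }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```

As a hypothetical smoke test once `wrangler dev` is running (8787 is wrangler's default local port, but confirm against your own output), a small script along these lines exercises the endpoint:

```ts
// Hypothetical smoke test; adjust the URL to wherever wrangler dev is listening.
const res = await fetch("http://localhost:8787/api/chat", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ sessionId: "test-session", message: "hello" }),
});
console.log(await res.json()); // expected: { reply: "This is a placeholder AI response based on: ..." }
```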
Implemented the `/api/chat` Worker endpoint with full conversation memory using Cloudflare KV:
worker/index.ts - Updated Worker with:
Env interface with CHAT_HISTORY: KVNamespace binding
Message interface for type safety (role: "user" | "assistant", content: string)
/api/chat route that calls the generateAssistantReply() function and returns { reply: assistantReply }
generateAssistantReply() placeholder function that returns a mock response with a prompt preview
buildPrompt() helper function to format conversation history as "role: content" lines
wrangler.json - Added KV namespace binding:
kv_namespaces array with CHAT_HISTORY binding
The implementation runs locally with wrangler dev or npm run dev.
Date: 2025-11-01
Create a concise README.md for this repo (cf_ai_askmycampus).
Include:
A short intro explaining this is an AI-powered chat assistant built on Cloudflare (Workers AI + KV + Pages) for the Cloudflare internship assignment.
A section explaining how it meets the 4 requirements (LLM, workflow, user input via chat, memory).
A "Run Locally" section with exact commands:
```
npm install -g wrangler
wrangler login
npm install
npx wrangler dev
```
(note: `npx wrangler dev` needs `npm run build`)