Load these files when working in specific contexts:
| File | When to Load |
|---|---|
| `.agents/code-review.md` | Reviewing others' MRs, giving feedback, maintainer reviews |
| `.agents/merge-requests.md` | Creating MRs, working on MRs, MR workflows |
| `.agents/database.md` | Database migrations, schema changes, DB-related code |
| `.agents/batched-background-migrations.md` | Creating or debugging BBMs, especially with EE extensions |
| `.agents/comments-writing.md` | Drafting comments for MRs, issues, discussions |
| `.agents/status-updates.md` | Writing weekly status updates for the priority epic |
| `.agents/epic-tracking.md` | Working on the Vulnerabilities Across Contexts epic |
| `.agents/api-tools.md` | Working with GitLab APIs, pagination, tool limitations |
```
/home/gregory/gitlab_projects/
├── AGENTS.md                     # This file
├── .agents/                      # Contextual guidelines (load as needed)
├── .opencode/                    # Working notes (issues, MRs, epics tracking)
├── gdk/                          # GitLab Development Kit
│   └── gitlab/                   # Main GitLab repository
├── gitlab-com-database-testing/  # Database testing/notifier project
└── opencode-gitlab-plugin/       # OpenCode GitLab plugin
```
- GitLab Username: ghavenga
- GitLab User ID: 11164960
- GitLab Instance: gitlab.com
- Primary Project: gitlab-org/gitlab (in `gdk/gitlab/`)
- Plugin Project: gitlab-org/editor-extensions/opencode-gitlab-plugin
- Maintainer Roles (gitlab-org/gitlab): Backend, Database
- Team: Security Infrastructure (group::security infrastructure)
- Note: "Threat Insights" is an old/deprecated label - use "group::security infrastructure" instead
- GitLab API access token is available in environment variables
- Use the `glab` CLI for GitLab API interactions
- Run `bundle install` if you encounter missing gem errors in the gitlab repo
- When blocked by missing system dependencies (e.g., clang, llvm, libpq-dev):
  - Identify the missing dependency from error messages
  - Ask me to install it - I can run `sudo apt install` commands
  - Once installed, retry the blocked operation
- To run commands that need GDK context (rails, bundle exec, etc.): `cd ~/gitlab_projects/gdk/gitlab && source ~/.zshrc 2>/dev/null; eval "$(mise activate zsh 2>/dev/null)" && <command>`
- To install missing tool versions (Ruby, Node, etc.): run `mise install` in the project directory to install the versions from `.tool-versions`
- Start GDK: `cd ~/gitlab_projects/gdk && source ~/.zshrc 2>/dev/null; eval "$(mise activate zsh 2>/dev/null)" && gdk start`
- Run tests: `cd ~/gitlab_projects/gdk/gitlab && source ~/.zshrc 2>/dev/null; eval "$(mise activate zsh 2>/dev/null)" && bundle exec rspec <spec_file>`
- Run migrations: `cd ~/gitlab_projects/gdk/gitlab && source ~/.zshrc 2>/dev/null; eval "$(mise activate zsh 2>/dev/null)" && bundle exec rails db:migrate`
- Rollback migrations: use `bundle exec rails db:migrate:down_all VERSION=<timestamp>` to roll back across all databases (main, ci, sec) at once. Do NOT use `db:migrate:down`, which only targets a single database.
- GDK URL when running: http://gdk.test:3000
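The repeated `cd … && eval "$(mise activate …)"` prefix in the commands above could be captured in a small wrapper. This is only a sketch: `gdk_run` and `GDK_DIR` are made-up names, not part of GDK, and bash is used here instead of the zsh activation shown above.

```shell
# Hypothetical helper (not part of GDK): run one command with the
# mise-managed environment activated, mirroring the pattern above.
# GDK_DIR can override the default checkout path.
gdk_run() {
  local dir="${GDK_DIR:-$HOME/gitlab_projects/gdk/gitlab}"
  cd "$dir" || return 1
  # Activates mise tool versions when mise is installed; a no-op otherwise.
  eval "$(mise activate bash 2>/dev/null)"
  "$@"
}

# Usage (illustrative only):
#   gdk_run bundle exec rspec spec/models/user_spec.rb
#   gdk_run bundle exec rails db:migrate
```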
When drafting ANY comment, reply, or written response for GitLab:
- Write naturally as a human would, not like a structured report
- NEVER use dashes for clause separation (no —, –, or - in the middle of sentences)
- Avoid bullet-point style in comments; use flowing prose with natural transitions
- Load `.agents/comments-writing.md` for full guidance when drafting comments
- NEVER comment on MRs/issues unless explicitly requested
- NEVER reply to discussion threads without explicit approval
- CRITICAL: When asked to draft a comment: draft → show for review → wait for explicit "yes"/"approved"/"post this" → ONLY THEN post. This applies to EVERY comment, even when processing multiple threads in sequence.
- Use `glab api` commands for read-only operations
- To ship an MR (set auto-merge when the pipeline succeeds): post `/ship` as an MR note via `glab mr note <iid> --repo <project> --message "/ship"`
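As a sketch, the `/ship` invocation could be wrapped in a tiny helper that prints the exact `glab` command for review before running it. The helper name and the MR iid below are hypothetical.

```shell
# Hypothetical helper: prints the glab command that posts the /ship quick
# action as an MR note, so it can be reviewed before execution.
ship_mr_cmd() {
  local iid="$1" project="$2"
  printf 'glab mr note %s --repo %s --message "/ship"\n' "$iid" "$project"
}

ship_mr_cmd 12345 gitlab-org/gitlab
# → glab mr note 12345 --repo gitlab-org/gitlab --message "/ship"
```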
ALWAYS provide context before drafting a reply:
- Show the age of the discussion (when it started, how long ago)
- Summarize the full thread history (who said what, any decisions made)
- Show the specific comment being replied to
- Note if the discussion appears to be already resolved or just needs acknowledgment
This prevents replying out of context to old/resolved discussions.
- NEVER claim something is "known" (e.g., "known flaky test") without evidence - search for issues/documentation first
- When analyzing failures, distinguish between:
- Fact: "The test X failed with error Y"
- Analysis: "Based on the code changes, this appears unrelated because..."
- Assumption: "This might be flaky, but I haven't found evidence of prior failures"
- If uncertain, say so explicitly rather than presenting assumptions as facts
- Before retrying failed jobs, explain your reasoning for why retrying might help
- Default branch: `master`
- Only create commits when explicitly requested
- Only push changes when explicitly requested
- NEVER push empty commits to trigger pipelines - use the `/run_pipeline` slash command instead
- LEFTHOOK failures: show me the failure and ask if I want to address it or skip with `LEFTHOOK=0`
- Exception: NEVER skip RuboCop - fix the issues or add disable directives
- Request reviewers BEFORE maintainers - reviewers approve first, then maintainer merges
- Danger bot suggests reviewers/maintainers per category (backend, database, frontend)
- For database MRs: request database reviewer first, then database maintainer after approval
BEFORE requesting a database review, we must prepare query plans ourselves:
- For any new or modified queries (SELECT, UPDATE, DELETE in migrations or BBMs):
  - Create a realistic version of the query with actual IDs from GitLab.com production data
  - Use database lab (postgres.ai) to get real table statistics and execution patterns
  - Present the queries to me so I can run them on Postgres.ai and get plan links
- How to prepare queries:
  - Replace placeholder values with real IDs (e.g., `project_id = 278964` for gitlab-org/gitlab)
  - Use realistic batch ranges that match actual data distribution
  - Include all WHERE conditions that will be used in production
- What to include in the MR description:
  - Link to the Postgres.ai query plan for each significant query
  - Execution time and rows affected
  - Index usage confirmation
- Why this matters:
  - Database reviewers are busy; sending an MR without query plans wastes their time
  - We should self-review query performance before requesting external review
  - Catching performance issues early prevents back-and-forth cycles
- Postgres.ai limitations:
  - We can only see execution plans, NOT actual data values
  - To get row counts: use `SELECT 1 FROM ... WHERE ...` and check "rows returned" in the plan
  - Cannot run queries that return actual production data (security restriction)
  - Use the plan's row estimates and actual rows to understand data distribution
- What matters in query plans (timing is NOT meaningful on clones):
  - Query structure: nested loops, hash joins, etc. appropriate for the data
  - Index usage: should use appropriate indexes, not sequential scans on large tables
  - Data read volume: should not read gigabytes of data per query
  - Row estimates vs actuals: large discrepancies indicate stale statistics
- How to present query plans in MR comments:
  - Post as line-specific comments on the code where the query executes
  - Include: the Postgres.ai link, execution plan, and buffer/data read stats
  - Explain why the query is acceptable (index used, reasonable data volume)
  - Note that timing is not meaningful on database lab clones
Example workflow:

```sql
-- Instead of: SELECT * FROM vulnerability_occurrences WHERE new_uuid IS NULL LIMIT 1000
-- Prepare:
SELECT id, report_type, primary_identifier_fingerprint, location_fingerprint,
       project_id, security_project_tracked_context_id
FROM vulnerability_occurrences
WHERE new_uuid IS NULL
  AND security_project_tracked_context_id IS NOT NULL
  AND id BETWEEN 100000000 AND 100001000
ORDER BY id;
```

Then I run this on Postgres.ai and add the plan link to the MR.
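To apply the row-count trick from the Postgres.ai limitations to this same query, a `SELECT 1` variant can be prepared. This is a sketch: the table and conditions come from the example above, the ID range is illustrative, and the function name is made up. It is emitted from a shell heredoc for easy copy-paste.

```shell
# Prints the row-count variant of the example query: SELECT 1 instead of
# real columns, since Postgres.ai cannot return production data values.
row_count_query() {
  cat <<'SQL'
SELECT 1
FROM vulnerability_occurrences
WHERE new_uuid IS NULL
  AND security_project_tracked_context_id IS NOT NULL
  AND id BETWEEN 100000000 AND 100001000;
SQL
}

row_count_query
```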
- ALWAYS show the list of files to be committed before creating a commit
- Wait for explicit approval of the file list before proceeding
Follow GitLab commit message guidelines.
Additional rules for this workspace:
- Use full URLs instead of short references (`#123` → full GitLab URL)
- Add changelog trailers when applicable: `Changelog: added|changed|fixed|...`
- Prefix: `ghavenga-<feature-name>` or `ghavenga-<issue-id>-<description>`
- When working on an MR, check for associated issue and verify assignment
- When an MR is merged, verify associated issue auto-closed or close manually
- Follow GitLab development guidelines
- Ruby: RuboCop configured; JavaScript/Vue: ESLint and Prettier configured
- Follow Rails conventions: always specify `dependent:` on associations
- RuboCop: NEVER skip RuboCop with `LEFTHOOK=0` - CI will fail. Add inline disable directives instead:

  ```ruby
  # rubocop:disable CopName -- reason for disabling
  code_here
  # rubocop:enable CopName
  ```
- Batched Background Migrations (BBMs): common RuboCop disables needed:
  - `Database/AvoidUsingConnectionExecute` - BBMs require direct SQL for bulk updates
  - `CodeReuse/ActiveRecord` - BBMs operate directly on batch relations
- EE module prepending: use `ClassName.prepend_mod` instead of `ClassName.prepend_mod_with('Full::Module::Path')`. The shorter form auto-discovers the EE module by convention and avoids `Layout/LineLength` violations.
Main, CI, and Sec databases share the same schema - tables exist on all three even if only used by one.
Always run migrations locally before pushing to verify they work: `cd ~/gitlab_projects/gdk/gitlab && source ~/.zshrc 2>/dev/null; eval "$(mise activate zsh 2>/dev/null)" && bundle exec rails db:migrate`
Rollback migrations: use `bundle exec rails db:migrate:down_all VERSION=<timestamp>` to roll back across all databases (main, ci, sec) at once. Do NOT use `db:migrate:down`, which only targets a single database.
Regenerating db/structure.sql:
- The `scripts/regenerate-schema` script ensures structure.sql matches what CI will generate
- Prerequisites: Branch must be rebased on master, local DB must have all migrations run
- Known issue: Script currently fails with "Database connection should not be called during initializers" error (see #586582)
- Known issue: Script produces table reordering in the output, making diffs noisy - under investigation
- Workaround for now: Manually edit db/structure.sql to add column, index, and FK in alphabetical order within their respective sections
- RSpec for Ruby, Jest for JavaScript
- Always run tests locally before pushing
- Use `gdk predictive` for larger changes on existing code
- When a reviewer makes a suggestion: Ask me how I'd like to respond before proceeding
- Options: create issue, implement in follow-up MR, implement now, acknowledge and defer
Last Retrieved: 2026-01-30 (refresh weekly)
| Milestone | ID | Start Date | End Date | Notes |
|---|---|---|---|---|
| 18.8 | 5948918 | 2025-12-13 | 2026-01-09 | Past |
| 18.9 | 5948920 | 2026-01-10 | 2026-02-13 | CURRENT |
| 18.10 | 5948922 | 2026-02-14 | 2026-03-13 | Next |
| 18.11 | 5948924 | 2026-03-14 | 2026-04-10 | Future |
To refresh: `curl -s --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "https://gitlab.com/api/v4/groups/9970/milestones?state=active&per_page=10" | jq '.[] | {id, title, start_date, due_date}'`
THE INDEX: `.opencode/INDEX.md` contains a quick-reference table of ALL tracked items (MRs, issues, epics) with links to their notes files.
MANDATORY WORKFLOW:
- BEFORE working on ANY MR, issue, or epic: check `.opencode/INDEX.md` to see if notes exist
- If notes exist: READ THE NOTES FILE FIRST to understand history, blockers, and context
- DURING work: Update the notes file with what you're doing, discoveries, and decisions
- AFTER work: Update the notes file with outcomes, new blockers, and next steps
- If no notes exist: CREATE a notes file using the template in INDEX.md
WHY THIS MATTERS:
- We repeatedly waste time rediscovering the same blockers (e.g., !211407 DB lock)
- Context from previous sessions is lost without notes
- The user expects continuity between sessions
```
.opencode/
├── INDEX.md                  # **START HERE** - Quick lookup of all tracked items
├── working-notes.md          # Top-level tracker (quick reference, session notes)
└── <namespace>/<project>/    # Notes organized by GitLab project path
    ├── mrs/<mr_iid>.md       # Per-MR notes with history
    ├── issues/<issue_iid>.md # Per-issue notes with history
    └── epics/<epic_iid>.md   # Per-epic notes with history
```
Update notes IMMEDIATELY when:
- You discover a blocker (like DB schema locks, conflicts, etc.)
- A status changes (review submitted, merged, blocked, etc.)
- You make a decision or recommendation
- You complete any action on an item
- You learn something important about context or history
DO NOT:
- Wait until end of session to update notes
- Assume you'll remember details later
- Skip creating notes for items you work on
Next steps:
- Clearly label "next steps" as requiring your approval
- Do NOT phrase next steps as if they will be automatically executed
- When the user provides a list of tasks: create an opencode todo list AND record it in `.opencode/working-notes.md`
1. Stale todos (>7 days old) - top priority
2. Review requests from others
3. General pings/mentions
4. Priority project work (Vulnerabilities Across Contexts)
5. Everything else
Within each priority category, process todos in FIFO order (first in, first out) based on creation date.
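Because creation dates are in ISO 8601 form, FIFO within a bucket amounts to a plain lexicographic sort on the date. A sketch with made-up todo data:

```shell
# Hypothetical todo lines: "created_date priority title".
# ISO 8601 dates sort chronologically under lexicographic ordering,
# so plain `sort` yields FIFO order within a single priority bucket.
printf '%s\n' \
  '2026-01-22 P1 review-request-B' \
  '2026-01-15 P1 review-request-A' \
  '2026-01-28 P1 review-request-C' | sort
# → 2026-01-15 (oldest) first, 2026-01-28 (newest) last
```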
CRITICAL: Never auto-dismiss FYI todos without user confirmation.
When processing todos:
- For todos requiring action (review requests, direct questions):
  - Present the context and what action is needed
  - Wait for direction on how to proceed
- For FYI todos (mentions for awareness, progress updates, automated reports):
  - Present a summary of what the todo is about
  - Explain WHY it appears to be FYI-only (e.g., "You were CC'd but the question is directed at someone else")
  - Wait for explicit confirmation before dismissing
  - Example: "This appears to be an FYI mention. @dpisek is asking @bwill a question and CC'd you. Dismiss this todo?"
- For automated triage reports:
  - Briefly note which flagged MRs/issues relate to me
  - Ask if I want to take action on any before dismissing
DO NOT dismiss todos silently or assume I don't need to see something. The todo system exists to keep me informed.
When requested to do a "full refresh", gather comprehensive state across all concerns.
IMPORTANT: Track progress in `.opencode/refresh-progress.md` so we can resume if interrupted.
- Before starting: update `refresh-progress.md` with the start time and set status to "In progress"
- During refresh: Check off each phase as it completes
- If interrupted: Next session can read the checklist and resume from unchecked items
- After completion: Record completion time and key discoveries
Build the Consolidated Todo List in `.opencode/working-notes.md` with ALL items from:
- GitLab todos (pending)
- My MRs needing action (pipeline failures, review feedback, conflicts)
- Watched entities needing attention
- Issues assigned to me
- Conversations requiring response
Categorization:
| Priority | Category | Examples |
|---|---|---|
| P0 | Stale (>7 days) | Old todos, abandoned MRs |
| P1 | Review requests | Others asking for my review |
| P2 | Questions/Mentions | Direct questions needing response |
| P3 | My MRs blocked | Pipeline failures, conflicts, review blockers |
| P4 | Priority epic work | &3430 items |
| P5 | Everything else | Low priority, watching |
For each item, record:
- Item reference (MR/Issue/Todo ID)
- Type (review, question, pipeline, blocker, etc.)
- Brief context
- Current status (waiting, needs action, blocked, etc.)
- Age if relevant
Provide:
- Summary stats: X todos, Y open MRs, Z assigned issues
- Changes since last refresh: New items, status changes, resolved items
- Urgent items: Anything needing immediate attention (P0-P2)
- Priority epic status: High-level state of &3430 and blockers
- Conversations to consider: Discretionary participation opportunities
- Consolidated todo list: Full prioritized list from working-notes.md
- Recommended next actions: Top 3-5 items requiring approval
- GitLab has extensive CI/CD pipelines - be patient with pipeline results
- Danger bot warnings are often non-blocking
- EE changes require `ee: true` in changelog entries
- This repository is very large - use targeted searches