70bda8 Claude (Dev) 2026-03-13 01:47:16
[mcp] Port phase gates to wiki

# Wikibot.io Phase Gates — Phases 0–4

This document defines exit criteria and validation procedures for each phase boundary. The human reviews these before giving the go-ahead to proceed.

---

## How phase gates work

1. Phase manager completes all tasks in the phase
2. Manager writes `Dev/Phase N Summary` to the wiki with results
3. Human reviews the summary and this document's exit criteria
4. Human performs the validation steps below
5. Go/no-go decision — if go, the next phase's manager can start

---
## Phase 0 Gate: Proof of Concept

### Exit criteria

| Criterion | Target | How to verify |
|-----------|--------|---------------|
| EFS warm read latency | < 500ms | Benchmark results in `Dev/Phase 0 — EFS Benchmarks` |
| EFS warm write latency | < 1s | Benchmark results |
| Lambda cold start (VPC + EFS) | < 5s | Benchmark results |
| Concurrent reads | No errors | Benchmark results (3+ simultaneous) |
| Concurrent writes | Serialized correctly | Benchmark results (5 simultaneous, git locking) |
| MCP OAuth via WorkOS | Working | Claude.ai connects and calls echo tool |
| Git library decision | Documented | Decision in wiki or PR description |
| Apple provider sub | Status known | Documented (verified or flagged as unavailable) |
| Billing alarm | Active | Check AWS Budgets console |
### Validation steps

1. **Review benchmarks.** Read the `Dev/Phase 0 — EFS Benchmarks` wiki note. Are the numbers acceptable? Do they leave headroom for the full Otterwiki app (which will be heavier than the PoC)?

2. **Test MCP yourself.** Open Claude.ai, connect to the PoC MCP endpoint. Call the echo tool. Verify the OAuth flow is smooth enough for end users.

3. **Check costs.** Review the AWS bill or Cost Explorer. Is the dev stack cost in line with expectations (~$0.50/mo baseline)?

4. **Review decisions.** Read the git library decision (gitpython vs. dulwich). Does the rationale make sense?

5. **Check the Pulumi state.** Run `pulumi stack` to verify the dev stack is clean and all resources are tracked.
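The "serialized correctly" target for concurrent writes can be rehearsed locally before reading the benchmark results. A sketch of the serialization idea on a scratch repo, using `flock` on a lockfile; the PoC's actual git locking mechanism may differ:

```shell
# Five writers race to commit; each takes an exclusive lock first,
# so commits serialize instead of clobbering each other's index.
tmp=$(mktemp -d) && cd "$tmp" && git init -q repo && cd repo
for i in 1 2 3 4 5; do
  (
    flock 9                      # block until we hold the lock
    echo "edit $i" >> page.md
    git add page.md
    git -c user.name=writer -c user.email=writer@example.com commit -qm "write $i"
  ) 9>"$tmp/wiki.lock" &
done
wait
git rev-list --count HEAD        # one commit per writer if serialization worked
```

If the lock is working, all five commits land; without it, concurrent `git commit` calls tend to fail on `index.lock` contention.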
### Known risks to evaluate

- **VPC cold starts:** If > 5s, consider Provisioned Concurrency (~$10-15/mo for 1 warm instance). Is the cost acceptable?
- **EFS latency variance:** NFS latency can spike under load. Are the P95 numbers acceptable, not just averages?
- **WorkOS quirks:** Any unexpected behavior in the OAuth flow? Token lifetimes? Refresh behavior?
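For the latency-variance risk, a quick way to see why averages mislead is to compute nearest-rank P95 from the raw samples. A sketch, assuming the benchmark can dump one latency in ms per line; the numbers below are made up:

```shell
# Made-up samples: nine fast reads and one 480ms spike.
printf '%s\n' 120 95 110 480 130 105 90 140 100 115 > latencies.txt

# Nearest-rank P95: sort ascending, take the ceil(0.95 * N)-th value.
sort -n latencies.txt | awk '
  { a[NR] = $1; total += $1 }
  END {
    idx = int(NR * 0.95); if (idx < NR * 0.95) idx++
    printf "mean: %.1fms  p95: %dms\n", total / NR, a[idx]
  }'
```

Here the mean (148.5ms) clears the 500ms read target comfortably while the P95 (480ms) barely does; that gap is exactly what the risk item is asking about.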
### Go/no-go decision

- **Go** if all targets met and no surprising cost or latency issues
- **No-go** if EFS latency is unacceptable → evaluate Fly.io fallback (PRD section: Alternatives Considered)
- **No-go** if MCP OAuth doesn't work → investigate alternative auth providers or debug WorkOS integration

---
## Phase 1 Gate: Single-User Serverless Wiki

### Exit criteria

| Criterion | Target | How to verify |
|-----------|--------|---------------|
| Web UI works | Pages load, edit, save | Browse `dev.wikibot.io` |
| REST API works | All endpoints respond correctly | Run integration tests |
| MCP works | All 12 tools functional | Connect Claude.ai or Claude Code, exercise tools |
| Semantic search works | Returns relevant results | Search for a concept, verify results |
| Git history | Correct authorship per write path | Check git log on EFS |
| Routing + TLS | All endpoints on custom domain with valid cert | Browser + curl |
| Architecture decision | Same vs. separate Lambda for MCP | Documented with rationale |
### Validation steps

1. **Browse the wiki.** Go to `dev.wikibot.io`. Create a page with WikiLinks. Verify the web UI is responsive and functional.

2. **Test the API.** Run the integration test suite or manually curl a few endpoints:

```
curl -H "Authorization: Bearer $KEY" https://dev.wikibot.io/api/v1/pages
curl -H "Authorization: Bearer $KEY" "https://dev.wikibot.io/api/v1/search?q=test"
```
3. **Test MCP.** Connect Claude.ai to `dev.wikibot.io/mcp`. Exercise read_note, write_note, search_notes, semantic_search. Verify results are correct.

4. **Check semantic search quality.** Write a few test pages, then search for concepts using different wording. Are the results relevant?

5. **Check git authorship.** Pages created via web UI should show one author; pages via API/MCP should show the configured API author. Verify in git log.

6. **Performance sanity check.** Is the web UI snappy enough? Do API calls return in < 1s? Is MCP responsive?
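The authorship check in step 5 can be rehearsed on a throwaway repo. A sketch; the author names and page files below are illustrative, not the real configured identities or EFS layout:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q wiki && cd wiki

# A page written via the web UI should carry the logged-in user's identity...
echo "# Home" > Home.md
git add Home.md
git -c user.name="Alice" -c user.email="alice@example.com" commit -qm "web: create Home"

# ...while API/MCP writes should carry the configured API author.
echo "# Notes" > Notes.md
git add Notes.md
git -c user.name="wikibot-api" -c user.email="api@wikibot.io" commit -qm "api: create Notes"

# Two distinct authors, one per write path:
git log --format='%an <%ae>  %s'
```

On the real wiki, run the same `git log --format` on the EFS repo and confirm each write path maps to exactly one author identity.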
### Known risks to evaluate

- **Mangum compatibility:** Any Flask features that don't work under Mangum? (Sessions, file uploads, streaming responses)
- **FAISS index persistence:** Does the index survive Lambda recycling? Is it loaded fast enough on cold start?
- **Lambda package size:** Is the deployment package under 250MB unzipped (zip deploys) or 10GB (container images)? If too large, container images may be needed.
### Go/no-go decision

- **Go** if all features work and performance is acceptable
- **Partial go** if semantic search has issues → defer to Phase 5 (it's a premium feature anyway)
- **No-go** if core wiki functionality is broken or too slow

---
## Phase 2 Gate: Multi-Tenancy and Auth

### Exit criteria

| Criterion | Target | How to verify |
|-----------|--------|---------------|
| Multi-user auth | Two users can log in independently | Test with two accounts |
| Wiki isolation | User A cannot access User B's private wiki | ACL enforcement test |
| Management API | All endpoints work | Integration tests |
| ACL enforcement | All roles enforced correctly | E2E test (P2-10) |
| Public wikis | Anonymous read access works | Test without auth |
| CLI tool | All commands work | Run each command |
| Bootstrap template | New wikis initialized correctly | Create wiki, inspect pages |
| Admin panel hiding | Disabled sections hidden and return 404 | Browse admin as owner |
| PROXY_HEADER auth | All permission levels work | Test each role |
| Username handling | Validation, uniqueness, reserved names | Attempt invalid usernames |
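The username row above is easy to spot-check offline. A sketch of the kind of validation involved; the regex and the reserved-name list here are assumptions for illustration, not the real rules:

```shell
# Hypothetical rules: lowercase alphanumerics plus hyphens, 3-32 chars,
# no leading/trailing hyphen, and not a reserved name.
RESERVED="admin api www mcp wiki"

valid_username() {
  case " $RESERVED " in *" $1 "*) return 1 ;; esac
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{1,30}[a-z0-9]$'
}

for name in alice admin Bob-1 a my-wiki -dash; do
  if valid_username "$name"; then echo "ok:   $name"; else echo "deny: $name"; fi
done
```

Whatever the real rules are, the gate check is the same shape: feed in known-bad names (reserved, too short, bad characters) and confirm each is rejected.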
### Validation steps

1. **Create two test accounts.** Sign up as two different users (two Google accounts or Google + GitHub).

2. **Test isolation.** User A creates a private wiki. Log in as User B. Verify User B cannot see, list, or access User A's wiki via web UI, API, or MCP.

3. **Test ACLs.** User A grants User B editor access. Verify User B can now read and write. User A revokes. Verify 403.

4. **Test public wiki.** User A makes a wiki public. Open in incognito (no auth). Verify read-only access.

5. **Test the CLI.** Run through the full CLI workflow:

```
wiki create my-wiki "My Wiki"
wiki list
wiki token my-wiki
wiki grant my-wiki friend@example.com editor
wiki revoke my-wiki friend@example.com
wiki delete my-wiki
```
6. **Inspect bootstrap template.** Create a new wiki, then read the Home page and Wiki Usage Guide. Are they clear and complete?

7. **Test admin panel.** Log in as wiki owner, go to admin panel. Verify only Application Preferences, Sidebar Preferences, and Content and Editing are visible. Verify disabled routes return 404.

8. **Test tier limits.** As a free user, try to create a second wiki. Try to add a 4th collaborator. Verify clear error messages.
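Steps 2-3 boil down to a role-to-permission matrix. A tiny executable sketch of that matrix; the `editor` role matches the CLI's `grant` command above, but the exact permission mapping (and the `owner`/`viewer`/`none` roles) is an assumption:

```shell
# can <role> <action> -> exit 0 if allowed, 1 if denied.
can() {
  case "$1:$2" in
    owner:*) return 0 ;;                      # owners can do everything
    editor:read|editor:write) return 0 ;;     # editors read and write, nothing else
    viewer:read) return 0 ;;                  # viewers are read-only
    *) return 1 ;;                            # everything else (incl. anonymous) denied
  esac
}

for check in owner:delete editor:write editor:delete viewer:write none:read; do
  role=${check%%:*}; action=${check#*:}
  if can "$role" "$action"; then echo "allow $check"; else echo "deny  $check"; fi
done
```

The E2E test (P2-10) is effectively this table exercised over HTTP: every role/action pair should land on the expected allow or 403.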
### Known risks to evaluate

- **DynamoDB latency:** ACL checks add a DynamoDB read to every request. Is the added latency acceptable? Should we cache in Lambda memory?
- **WorkOS token lifetimes:** How long do MCP OAuth tokens last before refresh? Does Claude.ai handle refresh correctly?
- **Username squatting:** Not a launch concern, but are the reserved name checks in place?
### Go/no-go decision

- **Go** if multi-tenancy works correctly and securely
- **No-go** if tenant isolation has any gaps — this is a security requirement, not a feature

---
## Phase 3 Gate: Frontend

### Exit criteria

| Criterion | Target | How to verify |
|-----------|--------|---------------|
| SPA loads | Dashboard accessible after login | Browser test |
| Auth flow | Login, logout, token refresh | Test each flow |
| Wiki CRUD | Create, view, delete wiki from UI | Browser test |
| Collaborator management | Invite, change role, revoke from UI | Browser test |
| MCP instructions | Correct, copyable command | Verify command works |
| Public wiki toggle | Works from settings UI | Toggle and verify |
| Static hosting | SPA served via CloudFront | Check response headers |
| Mobile responsive | Usable on phone | Browser dev tools or real device |
### Validation steps

1. **Full user journey.** In a fresh incognito window:
   - Visit `wikibot.io` → see landing/login
   - Log in with Google → land on dashboard
   - Create a wiki → see token and instructions
   - Copy `claude mcp add` command → paste in terminal → verify MCP connects
   - Go to wiki settings → invite a collaborator, toggle public
   - Log out → verify redirected to login

2. **Test on mobile.** Open `wikibot.io` on a phone. Is the dashboard usable? Can you create a wiki?

3. **Check error states.** What happens when the API is down? When you enter an invalid wiki slug? When you try to create a wiki and hit the tier limit?

4. **Performance.** How fast does the SPA load? Is the bundle size reasonable (< 500KB gzipped)?
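The bundle-size question in step 4 can be answered directly from the build output. A sketch; `dist/app.js` is a stand-in path for whatever the build emits, and the fake bundle is just repeated bytes, so the compression ratio here is illustrative only:

```shell
# Stand-in for the real built bundle (900KB of 'a' characters).
mkdir -p dist
head -c 900000 /dev/zero | tr '\0' 'a' > dist/app.js

# Measure the size the browser would actually download.
gz=$(gzip -c dist/app.js | wc -c)
echo "gzipped: ${gz} bytes"
if [ "$gz" -lt 512000 ]; then echo "within budget (< 500KB)"; else echo "over budget"; fi
```

On the real bundle, run the `gzip -c ... | wc -c` line against each emitted asset; CloudFront serves compressed responses, so this is the number that matters for the < 500KB target.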
### Known risks to evaluate

- **Framework choice:** Does the chosen framework (React or Svelte) feel right? Any regrets?
- **Auth UX:** Is the login flow smooth? Any confusing redirects or error messages?
- **MCP instructions clarity:** Would a new user understand how to connect? Test with fresh eyes.
### Go/no-go decision

- **Go** if the user journey works end-to-end and the UX is acceptable
- **Go with issues** if cosmetic issues remain — they can be fixed post-launch
- **No-go** if auth flow or core wiki management is broken

---
## Phase 4 Gate: Launch Readiness

### Exit criteria

| Criterion | Target | How to verify |
|-----------|--------|---------------|
| Git clone | `git clone` works for authorized users | Test from command line |
| Git auth | Bearer token authentication works | Clone with token |
| WAF active | Rate limiting and OWASP rules | Check WAF console, test rate limit |
| Monitoring | Dashboard shows traffic, alarms configured | CloudWatch console |
| Backups | EFS backup running, DynamoDB PITR active | AWS Backup console, DynamoDB settings |
| Backup restore | Tested at least once | Restore test documented |
| Landing page | Loads, explains product, has CTA | Browser test |
| Docs | Getting started, MCP setup documented | Read through docs |
### Validation steps

1. **Test git clone.** Create a wiki, write a few pages via MCP, then:

```
git clone https://token:<bearer>@<user>.wikibot.io/<wiki>.git
```

Verify the repo contains the expected pages.
2. **Test rate limiting.** Hit an endpoint rapidly (> 100 requests/minute). Verify WAF blocks with 429 or 403. Verify normal usage is not affected.

3. **Review monitoring.** Look at the CloudWatch dashboard. Does it show the traffic from your testing? Are all panels populated?

4. **Test backup restore.** Restore an EFS backup snapshot. Verify the git repo is intact. This can be a throwaway test — restore to a new filesystem, mount it, inspect, delete.

5. **Review landing page.** Read it as if you've never heard of wikibot.io. Does it explain the product clearly? Does the CTA lead to signup?

6. **Read the docs.** Follow the "Getting Started" guide from scratch. Does it work?

7. **Security review.** Check:
   - No sensitive data in public repos or frontend bundle
   - API keys rotated from development values
   - WAF rules active
   - No open security group rules
   - All endpoints require auth (except public wikis and landing page)

8. **Cost review.** Check the AWS bill. Is the total in line with projections? Any unexpected charges?
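Part of the security review in step 7 (no sensitive data in the frontend bundle) can be scripted. A sketch that plants a fake AWS access key id and shows the grep catching it; the patterns and the `dist/` path are assumptions to tune for your actual secrets:

```shell
# Fake bundle with a planted secret (AKIAIOSFODNN7EXAMPLE is AWS's
# documented example access key id, safe to use in tests).
mkdir -p dist
cat > dist/app.js <<'EOF'
const api = "/api/v1";
const key = "AKIAIOSFODNN7EXAMPLE"; // planted leak for the demo
EOF

# AWS access key ids and PEM private keys are the usual offenders.
if grep -rnE 'AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----' dist/; then
  echo "FAIL: possible secret in bundle"
else
  echo "OK: no obvious secrets"
fi
```

Running the same grep over the real build output (and any public repos) gives a quick, repeatable pass on the first checklist bullet; it does not replace rotating the dev-era keys.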
### Final checklist before launch

- [ ] All Phase 0–4 exit criteria met
- [ ] No critical bugs in the issue tracker
- [ ] Backup restore tested successfully
- [ ] Monitoring alarms tested (at least one alarm fired and notified)
- [ ] Landing page and docs reviewed
- [ ] DNS configured for production domain (`wikibot.io`)
- [ ] Production Pulumi stack deployed (separate from dev)
- [ ] Production secrets rotated from dev values
- [ ] WAF active on production
- [ ] WorkOS configured for production domain
### Go/no-go decision

- **Go** if all checklist items pass
- **Soft launch** if minor issues remain — launch to a small group, fix in production
- **No-go** if security issues, data loss risks, or fundamental UX problems remain