Properties
category: reference
tags: [tasks, emergent]
last_updated: 2026-03-14
confidence: high
How to read this document
- Emergent tasks arise during development but don't belong to a specific phase
- They may block, inform, or optimize work in any phase
- Tasks are numbered E-{sequence}
- Priority indicates urgency relative to current phase work
Emergent Tasks
E-1: Re-Instrument Cold Start INIT Phase
Priority: High — blocks architectural decisions about cold start mitigation
Discovered during: Phase 2 (CDN caching design discussion)
Relates to: Dev/Phase_0_EFS_Benchmarks, Design/Platform_Overview
Context: Phase 0 benchmarks measured cold starts at ~3,400ms and attributed ~2,400ms to "VPC ENI attach." However, AWS Hyperplane ENI (shipped 2019) pre-creates network interfaces at function creation time, not at invocation time. Current documentation and third-party benchmarks consistently report VPC overhead under 50–100ms for properly configured functions. The 2,400ms attribution is almost certainly incorrect.
The actual cold start time is likely dominated by Python package initialization: loading dulwich, Flask/Otterwiki, Mangum, aws-xray-sdk, and all transitive dependencies from a 39MB deployment package. Possibly also EFS mount negotiation (NFS/TLS handshake to mount target). Without accurate instrumentation, any cold start mitigation strategy (provisioned concurrency, architecture changes, package optimization) is a guess.
Status: COMPLETE — see Dev/E-1_Cold_Start_Benchmarks for results. VPC overhead confirmed negligible (~80-90ms). `import otterwiki.server` is 79% of cold start (~3.5s). Architectural mitigation designed in Design/Lambda_Library_Mode.
Task: Re-run cold start benchmarks with fine-grained tracing of the INIT phase. Break down time spent in:
- VPC/ENI setup (should be negligible with Hyperplane)
- EFS mount negotiation
- Python runtime startup
- Module imports (dulwich, Flask, Mangum, Otterwiki, aws-xray-sdk)
- Application initialization (framework setup, config loading)
Use X-Ray subsegments or manual timing around import blocks and init steps. Compare with a minimal VPC Lambda (no EFS, no heavy imports) as a control.
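The manual-timing variant can be sketched as a small helper around `importlib`. The segment names below are illustrative, and two stdlib modules stand in for the heavy imports listed above (in the real handler these would be `dulwich`, `flask`, `mangum`, and `otterwiki.server`):

```python
# Minimal sketch of manual INIT-phase timing. Segment names and the stdlib
# stand-in modules are illustrative, not the real instrumentation.
import importlib
import time

init_segments_ms: dict[str, float] = {}

def timed_import(segment: str, module_name: str):
    """Import a module and record how long it took, in milliseconds."""
    t0 = time.perf_counter()
    module = importlib.import_module(module_name)
    init_segments_ms[segment] = (time.perf_counter() - t0) * 1000.0
    return module

# Stand-ins for the heavy imports named above.
timed_import("import_sqlite3", "sqlite3")
timed_import("import_json", "json")

# Emit one structured log line; CloudWatch Logs Insights can aggregate these
# across cold starts to produce the per-segment breakdown.
print({"init_segments_ms": {k: round(v, 2) for k, v in init_segments_ms.items()},
       "total_ms": round(sum(init_segments_ms.values()), 2)})
```

Because this runs at module top level, the segments land in the reported INIT duration, which is what makes the per-import attribution possible.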
Deliverables:
- Updated Dev/Phase_0_EFS_Benchmarks with corrected INIT breakdown
- Identification of top 2–3 contributors to cold start latency
- Recommendation: package optimization, lazy imports, memory tuning, or architectural change
Acceptance criteria:
- INIT phase broken into at least 4 measured segments
- Each segment's contribution to total cold start quantified (ms and %)
- Control Lambda (minimal VPC, no EFS) measured for baseline comparison
- Benchmark page updated with corrected attribution
E-2: CDN Caching Layer Design
Priority: Medium — improves page load UX independent of cold start fix
Discovered during: Phase 2 (page responsiveness discussion)
Relates to: Design/Platform_Overview, Design/Operations
Context: Wiki pages are written infrequently (during Claude sessions via MCP) and read much more often (browsing, reference). CloudFront is already in the architecture for static SPA hosting but is not used to cache wiki page content. Adding a caching layer for page reads would reduce most page loads from ~270ms (warm Lambda) to ~10–50ms (edge serve), and reduce origin load.
Status: Design complete — see Design/CDN_Read_Path. Option A (Thin Assembly Lambda) recommended. Implementation queued as Tasks/E-2_CDN_Read_Path.
Design decisions needed:
Cache freshness strategy: Short TTL (30–60s) on page HTML with Cache-Control headers from the origin. No invalidation API calls under normal operation — pages self-expire. Static assets (CSS, JS, fonts) use content-hashed filenames with long TTLs (1 year). Invalidation reserved for exceptional cases (page deletion, privacy). This avoids the invalidation cost problem: at scale (e.g. 1,000 active wikis × 5 writes/day), path-based invalidation would exceed the 1,000/month free tier and cost ~$220/month, growing linearly with write volume.
Auth-aware caching for private wikis: Three options evaluated:
- CloudFront signed cookies — most CloudFront-native; set after OAuth login, scoped to user's subdomain. CloudFront validates signature at edge before serving cached content. Signed cookie attributes are excluded from cache key, so all authenticated users of the same wiki share cached pages.
- CloudFront Functions with JWT validation — lightweight JS function on viewer-request validates JWT at edge using built-in crypto module + KeyValueStore for public key. Sub-millisecond execution, no extra cost. Works well for RS256 if public key verification fits within execution constraints.
- Lambda@Edge — most powerful, can do full OIDC flows, but heavier and more expensive. Overkill for token validation on cached content.
Recommended approach: CloudFront Functions (option 2) for auth validation + short-TTL cache for page content. Needs validation that RS256 signature verification runs within CloudFront Functions execution limits.
MCP calls are not cached: they are POST requests on a separate path pattern and always pass through to Lambda.
Deliverables:
- Design page: Design/CDN_Caching
- CloudFront Functions prototype for JWT validation (validate RS256 fits within execution constraints)
- Estimate of cache hit ratio for typical wiki usage patterns
Acceptance criteria:
- Design page documents cache strategy, auth approach, TTL rationale, and cost model
- CloudFront Functions JWT validation tested (RS256 performance confirmed or HS256 fallback documented)
- Cache behavior configuration specified for page content vs. static assets vs. MCP vs. API routes
E-3: Client-Side Encryption / Zero-Knowledge Storage
Priority: Medium — not a launch blocker, but important for trust and the privacy story
Discovered during: Landing page copywriting (2026-03-14)
Status: Design spike complete — see Design/E-3_Encryption_Spike. Recommendation: EFS encryption at rest + IAM audit logging for launch; per-user KMS deferred until storage model changes.
Context: The current privacy claim is "your wiki is private by default" — but the operator (us) can still read the data at rest on EFS. For a product whose pitch is "memory for your agents," the data is inherently sensitive: it's the user's working notes, research, plans, and whatever their agents are writing on their behalf.
Ideally, wiki content would be encrypted client-side so that even the platform operator cannot read it. This is a hard problem for a wiki with a web UI and MCP access — both need to decrypt content to render/search it — but worth investigating.
Areas to explore:
- Client-side encryption with key derived from user credential (e.g. HKDF from OAuth token or user-supplied passphrase)
- Impact on web UI rendering (decrypt in browser via Web Crypto API?)
- Impact on MCP access (agent would need the key — how does that work?)
- Impact on semantic search (can't embed encrypted text — is search a premium-only feature anyway?)
- Impact on git clone (encrypted blobs in repo, decrypt locally?)
- Precedents: Standard Notes, Proton Drive, age-encrypted git repos
- Partial approaches: encrypt at rest with per-user KMS keys (operator can't casually surveil, but AWS access would still allow it)
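As a concreteness check on the key-derivation bullet, HKDF (RFC 5869) is small enough to sketch with the stdlib alone. The `salt`/`info` values in the usage line are hypothetical placeholders, and a real design would layer an AEAD cipher on top of the derived key:

```python
# Minimal RFC 5869 HKDF-SHA256, stdlib only. The ikm/salt/info values used
# at the bottom are illustrative, not the product's actual scheme.
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Derive `length` bytes of key material from input keying material."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract step
    okm, block = b"", b""
    for counter in range((length + 31) // 32):           # expand step
        block = hmac.new(prk, block + info + bytes([counter + 1]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]

# Hypothetical per-wiki content key derived from a user-supplied secret.
content_key = hkdf_sha256(b"user-passphrase-or-token",
                          salt=b"per-wiki-random-salt",
                          info=b"wiki/content-encryption/v1")
```

The `info` label namespaces the derivation, so the same credential can yield independent keys for content, search indexes, etc.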
Deliverables:
- Design spike: what's feasible, what's the UX impact, what are the tradeoffs
- Recommendation: full zero-knowledge, per-user KMS, or "encrypt at rest and be honest about the limits"
Acceptance criteria:
- At least three approaches evaluated with pros/cons
- UX impact documented for web UI, MCP, git clone, and search
- Recommendation with rationale
E-4: Lambda Library Mode Implementation
Priority: Medium — reduces write-path cold start from ~4.5s to ~2.6s
Discovered during: Cold start deep dive (2026-03-14)
Depends on: Dev/E-1_Cold_Start_Benchmarks (complete), Design/Lambda_Library_Mode
Relates to: Design/CDN_Read_Path, Design/Platform_Overview
Context:
The E-1 benchmarks identified `import otterwiki.server` as 79% of cold start (~3.5s). The Lambda Library Mode design (Design/Lambda_Library_Mode) proposes replacing `otterwiki.server` with a lazy-loading drop-in via `sys.modules` injection, plus upstream contributions to defer heavy imports in `views.py` and `wiki.py`.
Task:
Fork work (`lambda_server.py`):
- Write `lambda_server.py` that exports `app`, `db`, `storage` (LocalProxy), `app_renderer` (LocalProxy), `mail` (LocalProxy), `githttpserver` (LocalProxy), model re-exports, template filters, Jinja globals
- Inject via `sys.modules['otterwiki.server']` before any otterwiki imports in `lambda_init.py`
- Add an `@app.before_request` hook for deferred init (`db.create_all`, config, plugins, multi-tenant middleware)
- Verify all existing E2E tests pass against the replacement module
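The injection pattern can be sketched as follows. The lazy-proxy class is a simplified stand-in for werkzeug's `LocalProxy`, and `json.dumps` is a placeholder target for heavy otterwiki objects like `storage` or `app_renderer`:

```python
# Sketch of sys.modules injection with a lazy proxy. The stub stands in for
# lambda_server.py; the proxy target (json.dumps) is a placeholder.
import importlib
import sys
import types

class LazyProxy:
    """Defers the real import until an attribute is first touched."""
    def __init__(self, module_name: str, attr: str):
        self._module_name, self._attr = module_name, attr
    def resolve(self):
        return getattr(importlib.import_module(self._module_name), self._attr)
    def __getattr__(self, name):
        return getattr(self.resolve(), name)

# Build and register the replacement BEFORE anything imports otterwiki.server.
stub = types.ModuleType("otterwiki.server")
stub.storage = LazyProxy("json", "dumps")  # placeholder for a heavy object
sys.modules["otterwiki.server"] = stub

# Any later `import otterwiki.server` now resolves to the stub; nothing heavy
# is imported until stub.storage is actually used.
```

Because Python consults `sys.modules` before running module code, the 3.5s of `otterwiki.server` init is skipped entirely at import time and paid lazily (or in the `before_request` hook) instead.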
Upstream PRs:
5. Lazy imports in `views.py` — move heavy imports into route handler function bodies
6. Lazy imports in `wiki.py` — `PIL.Image`, `feedgen` to function level
7. Extract plugin entrypoint scan to an explicit `init_plugins()` function
8. Remove duplicate `render = OtterwikiRenderer()` in `renderer.py:632`
Deliverables:
- `lambda_server.py` in wikibot-io repo
- Updated `lambda_init.py` using injection pattern
- 4 upstream PRs to otterwiki (items 5-8)
- Updated cold start benchmarks showing improvement
Acceptance criteria:
- Lambda INIT < 1,600ms without upstream PRs
- Lambda INIT < 800ms with upstream PRs accepted
- All existing E2E tests pass
- First-request latency < 2,000ms (deferred init)
- Warm request latency unchanged
E-5: Retain .pyc Files in Lambda Package
Priority: Low — estimated 200-400ms cold start reduction, low effort
Discovered during: Cold start mitigation discussion (2026-03-14)
Relates to: Dev/E-1_Cold_Start_Benchmarks
Context:
The Lambda build script (`app/otterwiki/build.sh`) strips all `.pyc` files and `__pycache__` directories to reduce package size. However, Lambda's package filesystem is read-only — Python cannot cache compiled bytecode at runtime. Every cold start recompiles every `.py` file it imports. Retaining pre-compiled `.pyc` files (or better, running `python -m compileall` during the build) skips the compilation step on cold start.
Estimated savings: 200-400ms. The bulk of import time is executing module-level code and loading C extensions, not bytecode compilation, so the improvement is modest. But the change is a one-liner with zero risk.
Task:
- Remove the `find "$PACKAGE_DIR" -name "*.pyc" -delete` line from `build.sh`
- Remove the `find "$PACKAGE_DIR" -type d -name "__pycache__" -exec rm -rf {} +` line
- Add `python -m compileall -q "$PACKAGE_DIR"` after stripping to pre-compile all `.py` files
- Measure package size delta
- Re-run cold start benchmark to measure actual improvement
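The compileall step can also be sketched in Python. A temp directory with one module stands in for the real `$PACKAGE_DIR`; `UNCHECKED_HASH` invalidation makes the runtime accept the `.pyc` without comparing source mtimes, which suits an immutable deployment package:

```python
# Build-time bytecode precompilation sketch. A temp directory stands in for
# the real package dir. UNCHECKED_HASH means the runtime trusts the .pyc
# without an mtime check — appropriate for a read-only Lambda package whose
# file timestamps may not survive zipping.
import compileall
import pathlib
import py_compile
import tempfile

package_dir = pathlib.Path(tempfile.mkdtemp())            # stand-in build dir
(package_dir / "example.py").write_text("VALUE = 42\n")   # stand-in module

ok = compileall.compile_dir(
    str(package_dir),
    quiet=1,
    invalidation_mode=py_compile.PycInvalidationMode.UNCHECKED_HASH,
)
pyc_files = sorted((package_dir / "__pycache__").glob("*.pyc"))
```

One caveat worth keeping in mind: `.pyc` files are tagged per interpreter version, so the build must run the same Python minor version as the Lambda runtime or the cached bytecode is silently ignored.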
Acceptance criteria:
- `.pyc` files retained in Lambda package
- Cold start improvement measured and documented
- Package size increase documented (expected: ~10-20MB)
E-6: Lambda Warming Ping
Priority: Low — stopgap measure, not a long-term solution
Discovered during: Cold start mitigation discussion (2026-03-14)
Relates to: Dev/E-1_Cold_Start_Benchmarks, Design/CDN_Read_Path
Context: An EventBridge rule invoking the otterwiki Lambda every 5 minutes keeps one execution environment warm, eliminating cold starts for the common case (single user browsing). Cost is effectively $0/month (8,760 invocations/month at 500ms × 512MB = 2,190 GB-seconds, well within the 400K GB-s free tier).
Limitations: only keeps 1 instance warm. Concurrent requests beyond 1 still cold-start. Does not solve the problem at scale — just masks it for low-traffic scenarios.
This is a band-aid, not a solution. The CDN read path (Tasks/E-2_CDN_Read_Path_ClientSide) is the proper fix. This task exists as a fallback if the CDN work is delayed.
Task:
- Add EventBridge rule in `infra/__main__.py`: invoke the otterwiki Lambda every 5 minutes
- Add Lambda permission for EventBridge to invoke the function
- The Lambda handler already handles non-API-Gateway events gracefully (returns early)
- Verify warm state with benchmark
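The handler-side guard referenced above can be sketched like this. The field checked follows the standard EventBridge scheduled-event shape (`"source": "aws.events"`); the real handler's early return may look different:

```python
# Sketch of an early-return guard for EventBridge warming pings. Scheduled
# events carry source == "aws.events"; API Gateway events do not, so real
# requests fall through to the normal handler path.
def handler(event, context):
    if isinstance(event, dict) and event.get("source") == "aws.events":
        # Warming ping: keep the execution environment alive, do no real work.
        return {"warmed": True}
    # ... normal API Gateway / Mangum dispatch would go here ...
    return {"statusCode": 200}
```

Returning before any Flask or EFS work keeps each ping's billed duration (and the GB-seconds math above) minimal.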
Acceptance criteria:
- EventBridge rule triggers Lambda every 5 minutes
- Lambda stays warm between invocations (no cold start on next real request)
- Cost: $0/month additional