# Logging Implementation Plan - KISS & YAGNI

## Executive Summary
This document outlines a pragmatic logging strategy for Asset360 v3 that follows KISS (Keep It Simple, Stupid) and YAGNI (You Aren't Gonna Need It) principles. The plan addresses the current mismatch between documentation and implementation while avoiding over-engineering.
## Current State Analysis

### What We Have

- Simple console-based structured logger in `shared/logger.ts`
- Documentation referencing Pino (but not implemented)
- Mixed logging patterns: some files use the structured logger, many still use `console.log`
- Working implementation that's Cloudflare Workers compatible
### The Problem

- Documentation-Implementation Gap: Docs mention Pino, but we use a simple console logger
- Inconsistent Usage: Not all code uses the structured logger
- Over-Specification: Documentation suggests features we don't need yet
## KISS & YAGNI Principles Applied

### What We Actually Need (YAGNI)

✅ Keep:
- Structured JSON logging for parsing/querying
- Context support (service name, user ID, etc.)
- Basic log levels (info, warn, error)
- Cloudflare Workers compatibility
- Zero external dependencies for logging

❌ Don't Need (Yet):
- Pino library (adds complexity and bundle size)
- Trace/debug/fatal levels (3 levels are sufficient)
- Log rotation (handled by Cloudflare)
- Log aggregation libraries (Cloudflare handles this)
- Complex configuration systems
- Performance benchmarking (current logger is fast enough)
### Keep It Simple (KISS)

The current `shared/logger.ts` implementation already covers our needs:

- Uses native `console` methods (no dependencies)
- Outputs structured JSON
- Supports context via child loggers
- Works in both Workers and browser
- ~160 lines of simple, understandable code
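For readers who haven't opened the file, the core idea can be condensed to a few lines. This is an illustrative sketch only - names and shapes are assumptions, and the real `shared/logger.ts` is the source of truth:

```typescript
// Condensed sketch of the idea - the real shared/logger.ts is the source
// of truth; names and shapes here are illustrative.
type LogContext = Record<string, unknown>;
type LogLevel = "info" | "warn" | "error";

interface Logger {
  info(message: string, data?: LogContext): void;
  warn(message: string, data?: LogContext): void;
  error(message: string, data?: LogContext): void;
  child(extra: LogContext): Logger;
}

function createLogger(context: LogContext = {}): Logger {
  const emit = (level: LogLevel, message: string, data: LogContext = {}) => {
    // One JSON line per entry: level + timestamp + message + merged context.
    console[level](
      JSON.stringify({
        level,
        timestamp: new Date().toISOString(),
        message,
        ...context,
        ...data,
      }),
    );
  };
  return {
    info: (message, data) => emit("info", message, data),
    warn: (message, data) => emit("warn", message, data),
    error: (message, data) => emit("error", message, data),
    // Child loggers simply merge extra context onto the parent's.
    child: (extra) => createLogger({ ...context, ...extra }),
  };
}

const logger = createLogger({ service: "fund" });
logger.info("Fund created", { fundId: "f_123" });
```

Because everything goes through native `console` methods, the same code runs unchanged in Workers and in the browser.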
## Implementation Plan

### Phase 1: Documentation Alignment (Immediate)

Goal: Make documentation match reality

Tasks:

- ✅ Update `docs/docs/guidelines/logging.md`:
  - Remove Pino references
  - Document our simple console-based approach
  - Explain why this is the right choice (KISS/YAGNI)
  - Keep all existing examples (they're good)
- ✅ Add this plan document to the repo root as `logging_plan_kiss_yagni.md`
Rationale: Fix the confusion between what docs say and what code does
### Phase 2: Gradual Migration (Low Priority)

Goal: Replace `console.log` with the structured logger where it adds value

Approach: Opportunistic migration, not big-bang

- When touching a file for other reasons, migrate its logging
- Focus on services and use-cases first
- Leave test files alone (`console.log` is fine for tests)
- Don't create a separate migration ticket
Priority Order:

1. High Value: Services (`worker/services/*/service.ts`)
   - Business logic errors are the most important to track
   - Context (service name, IDs) is valuable
2. Medium Value: Use cases (`worker/use-cases/`)
   - Workflow orchestration benefits from structured logs
   - Already have some logger usage
3. Low Value: Infrastructure code
   - Migrate as needed
4. Skip: Test files
   - `console.log` is perfectly fine for test output
   - Structured logging adds no value here
Migration Pattern (3 steps):

```typescript
// 1. Add the import at the top of the file
import { createLogger } from "@shared/logger";

const logger = createLogger({ service: "my-service" });

// 2. Replace console.log with logger
// Before:
console.log(`Processing fund ${fundId}`);

// After:
logger.info("Processing fund", { fundId });

// 3. That's it. No more steps.
```
### Phase 3: Add Helpers (Only If Needed)

Trigger: If we see repeated patterns 3+ times

Potential Additions (add only when proven necessary):

- Helper for logging errors with stack traces
- Helper for timing operations
- Utility for sanitizing sensitive data
Implementation Rule: Add helpers only after seeing the pattern repeated, not speculatively
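If the error-with-stack pattern does earn a helper, it can stay tiny. A sketch under that assumption - `logError` and its field names are hypothetical, and the `Logger` type below merely stands in for whatever `shared/logger.ts` exports:

```typescript
// Hypothetical helper sketch - not part of the current codebase.
// Logger stands in for the shape exported by shared/logger.ts.
type Logger = { error(message: string, data?: Record<string, unknown>): void };

// Normalize an unknown thrown value into message + stack fields.
function logError(
  logger: Logger,
  message: string,
  error: unknown,
  data: Record<string, unknown> = {},
): void {
  const err = error instanceof Error ? error : new Error(String(error));
  logger.error(message, { ...data, error: err.message, stack: err.stack });
}

// Usage with a console-backed logger:
const consoleLogger: Logger = {
  error: (message, data) =>
    console.error(JSON.stringify({ level: "error", message, ...data })),
};
logError(consoleLogger, "Fund creation failed", new Error("duplicate code"), { code: "FND-1" });
```

The point of waiting for the third occurrence is that the helper's shape (which fields, which levels) should be dictated by real call sites, not guessed up front.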
## What We're NOT Doing

To maintain KISS/YAGNI, we explicitly avoid:

- ❌ No Pino Migration: Current logger is sufficient
- ❌ No Log Levels Beyond 3: info/warn/error covers all needs
- ❌ No Custom Transport Layer: Cloudflare handles log shipping
- ❌ No Log Rotation: Cloudflare manages this
- ❌ No Sampling/Throttling: Not needed at current scale
- ❌ No Correlation IDs: tRPC context already provides request context
- ❌ No Performance Metrics: Cloudflare Analytics provides this
- ❌ No Log Aggregation Library: Use Cloudflare Logs/Logpush
- ❌ No Configuration System: Environment is enough (production logs are always on)
- ❌ No Big-Bang Migration: Migrate opportunistically
## Success Criteria

### Short Term (1-2 weeks)

- Documentation accurately describes the current implementation
- Key business services use structured logging
- No new code uses `console.log` in services
### Long Term (Organic)

- Most services use structured logger (achieved through normal development)
- Test files still use console.log (this is fine!)
- No logging-related bugs or confusion
## Decision Log

### Why Not Pino?

- Bundle Size: Pino adds ~50KB minified
- Complexity: Transports, pretty-printing, child loggers with bindings
- Over-Engineering: We don't need 90% of Pino's features
- Edge Compatibility: More dependencies = more edge case issues
- Current Logger: Already does what we need
### Why Not More Log Levels?

- Trace/Debug: Use `info` with detailed context instead
- Fatal: Use `error`; Cloudflare will restart the worker anyway
- Three levels are sufficient: more adds cognitive overhead
### Why Not Big-Bang Migration?

- Risk: Large refactors are risky
- Value: `console.log` isn't broken; structured logging is just better
- Team Velocity: Don't pause feature work for a logging migration
- YAGNI: Migrate when touching files anyway
## Testing

The logger has comprehensive test coverage in `worker/__tests__/logger/logger.test.ts`:
- 36 test cases covering all logger functionality
- Core functionality: Logger creation, child loggers, browserLogger alias
- Log levels: All three levels (info, warn, error) tested
- Output format: JSON structure, timestamps, level fields validated
- Call patterns: All supported patterns (message-only, message+data, data+message, data-only)
- Context inheritance: Parent-child context, overrides, nested loggers
- Data types: Strings, numbers, booleans, null, arrays, nested objects
- Real-world patterns: Service logging, use-case logging, error handling
- Edge cases: Empty objects, special characters, long messages, undefined values
- Performance: Rapid successive calls, efficient child creation
All tests use mocked console methods to avoid actual output during testing.
### Running Logger Tests

```bash
# Run all logger tests
pnpm vitest run worker/__tests__/logger/logger.test.ts

# Run in watch mode
pnpm vitest watch worker/__tests__/logger/logger.test.ts

# Run with coverage
pnpm vitest run --coverage worker/__tests__/logger/logger.test.ts
```
## Usage Examples

### Service Logging

```typescript
import { createLogger } from "@shared/logger";

const logger = createLogger({ service: "fund" });

export class FundService {
  async createFund(data: CreateFundRequest) {
    logger.info("Creating fund", {
      organizationId: data.organizationId,
      code: data.code,
    });

    try {
      const fund = await this.repository.create(data);
      logger.info("Fund created", { fundId: fund.id });
      return fund;
    } catch (error) {
      logger.error("Fund creation failed", {
        code: data.code,
        error: error instanceof Error ? error.message : String(error),
      });
      throw error;
    }
  }
}
```
### Use Case Logging

```typescript
import { createLogger } from "@shared/logger";

const logger = createLogger({ useCase: "setup-fund" });

export class SetupFundUseCase {
  async execute(input: SetupFundInput) {
    logger.info("Starting fund setup", {
      organizationId: input.organizationId,
    });

    // Orchestration steps producing `result`...

    logger.info("Fund setup complete", {
      fundId: result.fund.id,
      bankAccountCount: result.bankAccounts.length,
    });
  }
}
```
### What NOT to Do

```typescript
// ❌ Don't log in tight loops
for (const item of items) {
  logger.info("Processing item", { item }); // Too verbose!
}

// ✅ Log a summary instead
logger.info("Processing items", { count: items.length });

// ❌ Don't log sensitive data
logger.info("User login", { password: user.password }); // NEVER!

// ✅ Log safe identifiers
logger.info("User login", { userId: user.id, email: user.email });
```
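If the sensitive-data rule ever needs enforcement rather than discipline, it would fall under the Phase 3 sanitizing utility. A hypothetical sketch of what that could look like - the function, its name, and the key list are all illustrative, not existing code:

```typescript
// Hypothetical sketch - a sanitizing utility would only be added once the
// need is proven (Phase 3 rule). Key names are illustrative.
const SENSITIVE_KEYS = new Set(["password", "token", "secret", "apiKey"]);

function redact(data: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(data).map(([key, value]) =>
      SENSITIVE_KEYS.has(key) ? [key, "[REDACTED]"] : [key, value],
    ),
  );
}

// logger.info("User login", redact({ userId: "u_1", password: "hunter2" }));
// logs { userId: "u_1", password: "[REDACTED]" }
```

Until the pattern actually recurs, reviewing log statements by hand is enough - adding this speculatively would violate the plan's own YAGNI rule.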
## Monitoring & Observability

### Current Capabilities

- Cloudflare Dashboard: View real-time logs
- Logpush: Ship logs to external services (S3, R2, etc.)
- Analytics: Built-in metrics and insights
- Tail Workers: Real-time log streaming with `wrangler tail`

### What This Gives Us

- All logs are JSON-structured and easily queryable
- Context (service, IDs) makes debugging straightforward
- No additional infrastructure needed
- Works seamlessly with Cloudflare's ecosystem
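Because each entry is a single JSON object, ad-hoc querying needs nothing beyond standard tools. A sketch, assuming logs have been exported as newline-delimited JSON (for example via Logpush) into a local file; the file name and field names are illustrative and match the logger's context keys:

```shell
# Keep only error-level lines emitted by the fund service.
# logs.jsonl is a hypothetical export of newline-delimited JSON log entries.
jq -c 'select(.level == "error" and .service == "fund")' logs.jsonl
```

The same `select` filter works on any context field the logger attaches (service, fundId, userId, etc.), which is exactly the payoff of structured output over free-form `console.log` strings.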
### Future Enhancements (Only If Needed)

If we actually need more observability (prove it first!):
- Set up Logpush to S3/R2 for long-term storage
- Use external log aggregation (DataDog, Grafana, etc.)
- Add custom error tracking (Sentry)
But wait until we have a concrete need - YAGNI!
## Maintenance

### Regular Reviews

- Scheduled reviews: none - this is a solved problem
- Review only if: pain points emerge or new requirements surface
### Red Flags (Reasons to Revisit)

- Can't debug production issues with current logs
- Log queries take too long (unlikely with structured JSON)
- Team frequently confused about logging patterns
- Cloudflare introduces breaking changes (very unlikely)
Until then: Current approach is sufficient
## Conclusion

Our simple console-based structured logger is exactly what we need. It's:
- ✅ Simple to understand
- ✅ Zero dependencies
- ✅ Cloudflare Workers compatible
- ✅ Produces structured, queryable logs
- ✅ Fast and lightweight
The plan is:

1. Fix documentation to match reality
2. Migrate opportunistically (not urgently)
3. Add nothing unless proven necessary
This is KISS & YAGNI in action.