Structured Logging Standards

Guidelines for using structured logging throughout the application.

Overview

We use a simple console-based structured logger, which provides:

  • Zero external dependencies (Cloudflare Workers compatible)
  • Structured JSON output for easy parsing and querying
  • Three essential log levels (info, warn, error)
  • Contextual information via child loggers
  • Native integration with Cloudflare logging infrastructure

Why not Pino? Following KISS & YAGNI principles, we use a lightweight custom logger that does exactly what we need without added complexity or bundle size. See logging_plan_kiss_yagni.md for the full rationale.
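To make the shape concrete, a minimal logger with these properties can be sketched as follows. This is an illustrative sketch only, not the actual @shared/logger implementation, which may differ in details:

```typescript
// Sketch of a minimal console-based structured logger with three levels
// and child-logger context. Assumed shape only; see @shared/logger for
// the real implementation.
type LogContext = Record<string, unknown>;
type LogLevel = "info" | "warn" | "error";

function emit(level: LogLevel, base: LogContext, message: string, context: LogContext = {}) {
  // One JSON object per line: easy for Cloudflare's log pipeline to parse.
  const entry = {
    level,
    message,
    timestamp: new Date().toISOString(),
    ...base, // child-logger context (e.g. { service: "auth" })
    ...context, // per-call context
  };
  console[level](JSON.stringify(entry));
}

export function createLogger(base: LogContext = {}) {
  return {
    info: (message: string, context?: LogContext) => emit("info", base, message, context),
    warn: (message: string, context?: LogContext) => emit("warn", base, message, context),
    error: (message: string, context?: LogContext) => emit("error", base, message, context),
  };
}

export const logger = createLogger();
```

Because everything goes through `console`, Cloudflare Workers captures the output with no extra transport code.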

Logger Utilities

Backend/Worker Logger

For server-side code (Cloudflare Workers, services, use cases):

import { logger, createLogger } from "@shared/logger";

// Use default logger for general logging
logger.info("User authenticated", { userId: 123, email: "[email protected]" });

// Create a child logger with service context
const serviceLogger = createLogger({ service: "auth", version: "1.0" });
serviceLogger.info("Session validated", { sessionId: "abc123" });

Frontend/Browser Logger

For client-side code (React components):

import { browserLogger } from "@shared/logger";

browserLogger.info("Component mounted", { componentName: "Dashboard" });
browserLogger.error("API call failed", {
  endpoint: "/api/users",
  error: err.message,
});

Log Levels

Our simple logger provides three essential log levels:

  • info: General informational messages about application flow
  • warn: Warning messages for potentially harmful situations
  • error: Error messages for serious problems

Note: We intentionally limit to three levels following YAGNI principles:

  • trace/debug: Use info with detailed context instead
  • fatal: Use error - Cloudflare Workers handle process crashes automatically

This keeps the mental model simple while covering all practical needs.
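For example, detail that might otherwise go to a trace/debug level simply travels in the context object of an info call (field names here are hypothetical):

```typescript
// Hypothetical sketch: fine-grained detail lives in the context object
// of an info call rather than in a separate debug level.
const detail = { step: "portfolio-valuation", positions: 42, elapsedMs: 17 };
const line = JSON.stringify({ level: "info", message: "EOD step completed", ...detail });
console.info(line);
```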

Best Practices

1. Use Structured Data

Good: Include contextual information as objects

logger.info("Fund created", {
  fundId: 123,
  name: "Growth Fund",
  organizationId: 456,
});

Bad: Use string concatenation

console.log(`Fund ${fundId} created: ${name}`);

2. Create Service-Specific Loggers

Create child loggers for each service or domain:

// In service/auth/service.ts
import { createLogger } from "@shared/logger";
const logger = createLogger({ service: "auth" });

// In use-cases/fund/setup-fund.ts
import { createLogger } from "@shared/logger";
const logger = createLogger({ useCase: "setup-fund" });

3. Log Errors Properly

Always include error details in structured format:

try {
  await someOperation();
} catch (error) {
  logger.error("Operation failed", {
    operation: "someOperation",
    error: error instanceof Error ? error.message : String(error),
    stack: error instanceof Error ? error.stack : undefined,
  });
}

4. Avoid Sensitive Data

Never log passwords, tokens, or other sensitive information:

// ❌ Bad
logger.info("User logged in", { password: user.password });

// ✅ Good
logger.info("User logged in", { userId: user.id, email: user.email });

5. Use Consistent Field Names

Use consistent field names across the application:

  • userId, fundId, organizationId for IDs
  • error for error messages
  • count for counts
  • status for status codes
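For instance, a hypothetical sync operation using these shared field names might log:

```typescript
// Hypothetical example: one entry using the shared field-name conventions.
const entry = {
  level: "info",
  message: "Funds synced",
  organizationId: 456, // always organizationId, never orgId or org_id
  count: 12, // number of funds processed
  status: 200, // upstream response status code
};
console.info(JSON.stringify(entry));
```

Consistent names make cross-service queries (e.g. "all logs for organizationId 456") trivial.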

6. Avoid Over-Logging

Don't log excessively: it creates noise and drives up cost.

Bad: Logging in tight loops

for (const item of items) {
  logger.info("Processing item", { itemId: item.id }); // Too verbose!
}

Good: Log summaries

logger.info("Processing items", { count: items.length });
// Process items...
logger.info("Items processed", {
  count: items.length,
  successCount: results.filter((r) => r.success).length,
});

Examples by Scenario

Service Operations

import { createLogger } from "@shared/logger";
const logger = createLogger({ service: "portfolio" });

export class PortfolioService {
  async createPortfolio(data: CreatePortfolioRequest) {
    logger.info("Creating portfolio", {
      fundId: data.fundId,
      type: data.type,
    });

    try {
      const portfolio = await this.repository.create(data);
      logger.info("Portfolio created successfully", {
        portfolioId: portfolio.id,
        fundId: data.fundId,
      });
      return portfolio;
    } catch (error) {
      logger.error("Failed to create portfolio", {
        fundId: data.fundId,
        error: error instanceof Error ? error.message : String(error),
      });
      throw error;
    }
  }
}

Use Case Orchestration

import { createLogger } from "@shared/logger";
const logger = createLogger({ useCase: "run-eod" });

export class RunEODUseCase {
  async execute(fundId: number, date: Date) {
    const startTime = Date.now();
    logger.info("Starting EOD process", {
      fundId,
      date: date.toISOString(),
    });

    // Process steps... (note: no debug level — use info with context)
    logger.info("Step completed", {
      fundId,
      step: "portfolio-valuation",
    });

    logger.info("EOD process completed", {
      fundId,
      duration: Date.now() - startTime,
    });
  }
}

Frontend Components

import { browserLogger } from "@shared/logger";

export function DashboardComponent() {
  useEffect(() => {
    browserLogger.info("Dashboard mounted", { userId: user.id });

    fetchData()
      .then(() => {
        browserLogger.info("Data loaded successfully");
      })
      .catch((error) => {
        browserLogger.error("Failed to load data", {
          error: error.message,
        });
      });
  }, []);
}

Testing

In tests, you can check that appropriate log messages are generated:

import { describe, it, expect, vi } from "vitest";
import { logger } from "@shared/logger";

describe("MyService", () => {
  it("should log errors when operation fails", async () => {
    const logSpy = vi.spyOn(logger, "error");

    await expect(service.failingOperation()).rejects.toThrow();

    expect(logSpy).toHaveBeenCalledWith(
      expect.stringContaining("Operation failed"),
      expect.objectContaining({ error: expect.any(String) }),
    );
  });
});

Environment Configuration

The logger outputs to console automatically. Cloudflare Workers captures all console output as structured logs.

Local Development

All logs are visible in the terminal where you run pnpm dev.

Production

Logs are automatically captured by Cloudflare and available via:

  • Cloudflare Dashboard: Real-time logs viewer
  • Logpush: Export logs to S3, R2, or other destinations
  • Wrangler CLI: wrangler tail for real-time streaming

No additional configuration needed - it just works!

Migration from console.log

Strategy: Opportunistic migration (not urgent)

When modifying a file for other reasons, consider upgrading console logging:

  1. console.log() → logger.info() (or skip migration in tests)
  2. console.warn() → logger.warn()
  3. console.error() → logger.error()
  4. Extract variables into structured context objects
  5. Remove string concatenation/interpolation

Important:

  • ✅ Migrate services and use cases when touching them
  • ✅ Keep console.log in test files (perfectly fine for tests!)
  • ❌ Don't create dedicated migration tickets
  • ❌ Don't pause feature work for logging updates

Example migration:

// Before
console.log(`Processing fund ${fundId} on ${date}`);
console.error("Failed to process:", error);

// After (in services/use-cases only)
logger.info("Processing fund", { fundId, date: date.toISOString() });
logger.error("Processing failed", {
  fundId,
  error: error instanceof Error ? error.message : String(error),
});

// Tests: Keep console.log!
console.log("Test output:", result); // This is fine in tests

Resources