The backend application is the foundation of our new codebase, written in TypeScript and built on Deepkit as the core framework.

🧪 Testing

Backend testing is performed through integration tests, which verify the system’s behavior as a whole rather than in isolation. This approach helps catch issues that unit tests might miss, such as problems in data flow, service boundaries, and real-world edge cases. We use a real database in our tests to ensure backward compatibility between the legacy Python backend models and the new backend’s database entities. This gives us confidence that changes to the new backend won’t break existing data expectations or legacy functionality.
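Concretely, an integration test drives the service through its persistence boundary and asserts on what actually comes back. The sketch below is illustrative: the repository name and methods are hypothetical, and the real suite runs against a real database rather than the in-memory stand-in used here to keep the example self-contained.

```typescript
// Hypothetical repository boundary; in the real suite this is backed by an
// actual database so legacy Python models and new entities stay compatible.
interface NumberRepository {
  save(msisdn: string): Promise<string>; // returns the persisted row's id
  findById(id: string): Promise<string | undefined>;
}

// In-memory stand-in used only to keep this sketch runnable.
function inMemoryRepository(): NumberRepository {
  const rows = new Map<string, string>();
  let nextId = 0;
  return {
    async save(msisdn) {
      const id = String(++nextId);
      rows.set(id, msisdn);
      return id;
    },
    async findById(id) {
      return rows.get(id);
    },
  };
}

// The integration-style check: write through the boundary, then read it back.
async function roundTrips(repo: NumberRepository, msisdn: string): Promise<boolean> {
  const id = await repo.save(msisdn);
  return (await repo.findById(id)) === msisdn;
}
```

Testing the whole round trip, rather than mocking the repository, is what catches mismatches between the entity definitions and the actual schema.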

🚀 Deployment

We deploy our backend as an Azure Container App to take advantage of built‑in scaling, networking, and zero‑downtime rollouts.

⚙️ CI/CD with GitHub Actions

Our deployment pipeline is powered by GitHub Actions. Whenever changes are pushed to the staging branch—whether in the application code or its dependencies—GitHub Actions automatically:
  1. Builds the Docker image
  2. Pushes it to the container registry
  3. Deploys it to Azure Container Apps
  4. Monitors the rollout using readiness probes
This ensures continuous integration and delivery with zero manual steps.

🔄 Zero‑Downtime Deployments

  • Blue/Green-style rollouts
    When you push a new container revision, Azure spins it up in parallel with the existing one.
  • Traffic switchover
    The old revision continues to serve all incoming requests until the new revision passes its readiness probe and is marked healthy. Only then is traffic shifted over seamlessly.

⚠️ Handling Failed Deployments

Even with automated CI/CD, a deployment can fail for many reasons:
  • Database connectivity issues
  • Missing or invalid environment variables
  • Container image pull errors
  • Application startup/runtime errors
Until the new revision is healthy, the old revision remains live—so end users won’t see downtime.

🔍 Readiness Probe

An HTTP readiness probe ensures that only a fully initialized container receives traffic. The probe targets the /health-check/readiness endpoint, implemented in the HealthCheckController, and Azure uses it to verify that a new revision is ready before routing traffic to it.
Azure will retry probing; only once the probe succeeds will it mark the revision healthy.
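The probe's contract reduces to a small decision: report ready (HTTP 200) only once every dependency the app needs is reachable, and not-ready (HTTP 503) otherwise, so Azure keeps the revision out of rotation. This is a sketch of that logic with hypothetical names, not the actual HealthCheckController implementation.

```typescript
// A dependency the app must be able to reach before serving traffic
// (e.g. the database). Names here are illustrative.
type DependencyCheck = { name: string; ok: () => Promise<boolean> };

// Returns 200 only when every dependency reports healthy; otherwise 503,
// which Azure treats as "not ready" and keeps probing.
async function readinessStatus(checks: DependencyCheck[]): Promise<number> {
  const results = await Promise.all(checks.map((c) => c.ok().catch(() => false)));
  return results.every(Boolean) ? 200 : 503;
}
```

Returning 503 (rather than throwing) matters: the probe needs a clean HTTP response so Azure can distinguish "not ready yet" from a crashed container.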

📊 Monitoring & Logs

When diagnosing deployments, you'll look at:
  1. Status

| State | Meaning | Action |
| --- | --- | --- |
| Running + Healthy | ✅ Success — rollout complete | Exit with success |
| Failed / ActivationFailed | ❌ Deployment error | Print logs, exit with error |
| Unhealthy | ⚠️ Failed readiness probe or runtime issue | Print logs, exit with error |
| Processing / Provisioning | ⏳ Waiting for container readiness | Continue polling |

  2. System Logs
    • Azure Container Apps platform logs (e.g., deployment events, scaling decisions)
  3. Application Logs
    • The service's standard output / console logs for errors or stack traces
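The status table above is effectively a small decision function that the deployment step polls in a loop. The sketch below models it with illustrative state strings; the exact values returned by the Azure API may differ.

```typescript
type RolloutAction = 'succeed' | 'fail' | 'poll';

// Maps a revision's state (per the status table) to the pipeline's next step.
// State names are illustrative, not the exact Azure API values.
function rolloutAction(state: string, healthy: boolean): RolloutAction {
  if (state === 'Failed' || state === 'ActivationFailed') return 'fail';
  // A running revision that never turns healthy failed its readiness probe.
  if (state === 'Running') return healthy ? 'succeed' : 'fail';
  return 'poll'; // Processing / Provisioning: keep waiting
}
```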

🛠 Example: Failed Deployment in GitHub Actions

In a failed workflow run you'll observe:
  • Readiness probe errors logged in system events
  • Console stack traces in application logs
Make sure to cross‑reference both system and application logs to pinpoint root causes.

🗄️ Database

The ORM uses a custom naming strategy to convert camelCase field names in our entities to snake_case column names in the database tables. This helps maintain consistency and readability in both code and database schemas.
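The naming strategy itself is wired into the ORM configuration; the conversion it applies is equivalent to this standalone function:

```typescript
// camelCase entity field → snake_case database column, e.g. createdAt → created_at.
function camelToSnake(field: string): string {
  return field.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase();
}
```

So an entity field named phoneNumberId maps to the column phone_number_id without any per-field annotation.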

🧬 Schema Migrations

To ensure consistent schema migrations across both the new backend and legacy backend projects—and to preserve backward compatibility—we will not use the built-in migration tools provided by individual ORMs. Instead, all schema changes will be defined and applied through a shared migrations project. This centralized approach serves as the single source of truth and guarantees alignment between systems.

🧱 Entities

Entities represent the core domain objects and encapsulate business logic. These entities are isomorphic, meaning they can be used on both the backend and frontend, maintaining a single source of truth across our entire codebase. Avoid placing domain logic inside service layers; it belongs within the entities themselves. When defining entity fields:
  • Use clear, descriptive, and consistent naming conventions. All field names must follow the camelCase convention, which is the standard in TypeScript.
  • Rely on the ORM’s naming strategy to map those fields to snake_case columns automatically.

🧾 Custom Column Mapping

For cases where the database column name does not follow the default convention (e.g., UUID primary keys or legacy column names), annotate the field with DatabaseField to explicitly specify the column name:
import { DatabaseField, entity, PrimaryKey, UUID, uuid } from '@deepkit/type';

@entity.collection('external_plan_countries')
class ExternalPlanCountry {
  readonly id: UUID &
    PrimaryKey &
    DatabaseField<{ name: 'external_plan_uuid' }> = uuid();
}
In this example:
  • The entity class is mapped to the table external_plan_countries.
  • The entity field id is mapped to the column external_plan_uuid.

🔗 Relationships

TODO

📡 API Communication

Our backend exposes two communication mechanisms: RPC (for internal services) and HTTP/OpenAPI (for external customers).

🤝 RPC

Uses the @telgea/api library to share controller interfaces between frontend and backend.

🔐 Authentication

Requires an Auth0 access token derived from a portal user’s session. See Access Token Guide for details on obtaining a valid token.

🧩 Controllers

Controller interfaces are located in the @telgea/api library.

Interface Definition

Controller interfaces are declared using Deepkit's ControllerSymbol and TypeScript interfaces. Below is an example of a controller interface for managing phone numbers:
libs/api/src/controllers/number.ts
import { ControllerSymbol } from '@deepkit/rpc';

// Declare the controller symbol
export const NumberControllerInterface = ControllerSymbol('Number');

// Define request and response DTOs
export interface CreateNumberRequest {}

export interface CreateNumberResponse {}

// Define the controller interface
export interface NumberControllerInterface {
  create(request: CreateNumberRequest): Promise<CreateNumberResponse>;
}
Actions can accept any number of parameters; they are not restricted to a single object parameter.
Implementation Example

The controller implementation resides in the backend service and adheres to the shared interface.
apps/backend/src/number/number-rpc.controller.ts
import { rpc } from '@deepkit/rpc';
import {
  CreateNumberRequest,
  CreateNumberResponse,
  NumberControllerInterface,
} from '@telgea/api';

// Bind the implementation to the controller symbol
@rpc.controller(NumberControllerInterface)
export class NumberRpcController implements NumberControllerInterface {
  // Define the RPC action
  @rpc.action()
  async create(request: CreateNumberRequest): Promise<CreateNumberResponse> {
    // Implementation goes here
    return {};
  }
}

🌐 HTTP

Exposes endpoints for customer integrations via OpenAPI.

🔐 Authentication

Uses a per-customer API key (UUID). Currently, each company is issued a single API key at creation.

Future Improvements:
  • Enhance API key management with issuance, revocation, refresh, rate limiting, and fine-grained permissions.
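The current check itself is simple: resolve the incoming key to a company or reject the request. The sketch below illustrates that flow; the lookup function, error shapes, and types are assumptions, not the actual middleware.

```typescript
type Company = { id: string; name: string };

// Resolve an incoming API key to a company; reject missing or unknown keys.
// In the real implementation the lookup would query the database.
function authenticateApiKey(
  apiKey: string | undefined,
  lookup: (key: string) => Company | undefined,
): Company {
  if (!apiKey) throw new Error('Missing API key');
  const company = lookup(apiKey);
  if (!company) throw new Error('Invalid API key');
  return company;
}
```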

⚙️ Environment Variables

Configuration is loaded and validated using Deepkit. The following environment variables are expected:
| Variable | Description | Required |
| --- | --- | --- |
| PORT | Port the application listens on. | ❌ (default: 3001) |
| JWT_AUDIENCE | Expected audience claim (aud) in incoming JWTs. Used to validate token authenticity. | ✅ |
| JWT_ISSUER | Expected issuer claim (iss) in incoming JWTs. Must match the identity provider. | ✅ |
| JWT_JWKS_URI | URI to fetch the JSON Web Key Set (JWKS) for verifying JWT signatures. | ✅ |
| DATABASE_URL | Connection string to the target database. | ✅ |
Environment variables for local development can be found in 1Password.
These should be defined in the project’s .env file.
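In Deepkit, configuration is a typed class validated at application startup; the plain function below mirrors the same rules (PORT optional with a default of 3001, the remaining variables required) without the framework, purely as an illustration of the validation behavior.

```typescript
interface AppConfig {
  port: number;
  jwtAudience: string;
  jwtIssuer: string;
  jwtJwksUri: string;
  databaseUrl: string;
}

// Mirrors the variable table: PORT falls back to 3001, everything else must be set.
function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const required = (name: string): string => {
    const value = env[name];
    if (!value) throw new Error(`Missing required environment variable: ${name}`);
    return value;
  };
  return {
    port: env.PORT ? Number(env.PORT) : 3001,
    jwtAudience: required('JWT_AUDIENCE'),
    jwtIssuer: required('JWT_ISSUER'),
    jwtJwksUri: required('JWT_JWKS_URI'),
    databaseUrl: required('DATABASE_URL'),
  };
}
```

Failing fast on a missing variable at startup, rather than at first use, keeps a misconfigured revision from ever passing its readiness probe.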