Infrastructure & Availability

Rankability is built on modern cloud infrastructure with defined recovery objectives, automated backups, and resilient architecture designed to keep your data safe and the platform available.

Hosting

  • Application hosting: Cloud-based with automated deployment and scaling
  • Database: PostgreSQL with automated backups and point-in-time recovery
  • Cache layer: Distributed cache for rate limiting and session data, with automatic in-memory fallback
  • Object storage: Cloud object storage with cross-region replication and versioning enabled
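
The automatic in-memory fallback for the cache layer can be sketched as follows. This is an illustrative sketch only: the class names (`InMemoryFallbackCache`, `DownCache`) are hypothetical, and the real client is an internal implementation detail.

```python
class InMemoryFallbackCache:
    """Wraps a primary (distributed) cache; when the primary is unreachable,
    reads and writes are served from a local in-memory dict so that rate
    limiting keeps working in degraded mode."""

    def __init__(self, primary):
        self.primary = primary   # e.g. a distributed cache client (assumption)
        self.local = {}          # ephemeral in-memory fallback store
        self.degraded = False

    def incr(self, key):
        try:
            value = self.primary.incr(key)
            self.degraded = False
            return value
        except ConnectionError:
            # Primary unreachable: fall back to in-memory counters.
            self.degraded = True
            self.local[key] = self.local.get(key, 0) + 1
            return self.local[key]


class DownCache:
    """Stand-in primary that is always unavailable (for illustration)."""
    def incr(self, key):
        raise ConnectionError("cache layer unreachable")


cache = InMemoryFallbackCache(DownCache())
print(cache.incr("rate:user:42"))  # 1 (served from the in-memory fallback)
print(cache.degraded)              # True
```

Because the fallback store is process-local, stricter rate limits apply in degraded mode; no data is lost, since cache contents are ephemeral by design.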

Recovery Objectives

We define and maintain clear targets for data loss and downtime:

Component              Recovery Point (max data loss)         Recovery Time (max downtime)
Application            0 (code in Git)                        1 hour
Database               1 hour                                 2 hours
Object storage         Near-zero (cross-region replication)   1 hour
Cache layer            N/A (ephemeral, rebuilt on recovery)   30 minutes
Full service recovery  N/A                                    4 hours

Backups

Database

  • Continuous write-ahead log (WAL) archiving for point-in-time recovery
  • Hourly automated snapshots
  • Daily full backups stored in a separate region
  • Weekly extended-retention backups (90 days)
  • Monthly archival backups (1 year)
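
A monitoring check for the schedule above might verify that the newest snapshot is fresh enough to meet the database recovery point objective. This is a minimal sketch under stated assumptions (the function name and timestamp handling are illustrative; the actual tooling is internal):

```python
from datetime import datetime, timedelta, timezone

# Max tolerated data loss for the database, per the recovery objectives table.
DATABASE_RPO = timedelta(hours=1)

def backup_within_rpo(last_snapshot_at, now=None, rpo=DATABASE_RPO):
    """Return True if the most recent snapshot is recent enough to meet the RPO."""
    now = now or datetime.now(timezone.utc)
    return (now - last_snapshot_at) <= rpo

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(backup_within_rpo(datetime(2024, 1, 1, 11, 30, tzinfo=timezone.utc), now))  # True
print(backup_within_rpo(datetime(2024, 1, 1, 10, 30, tzinfo=timezone.utc), now))  # False
```

Continuous WAL archiving narrows the effective recovery point well below the hourly snapshot interval, since individual transactions can be replayed up to the moment of failure.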

Object Storage

  • Cross-region replication on production buckets
  • Versioning enabled to protect against accidental deletion

Application Code

  • All source code in Git with full history
  • Multiple distributed clones across the team
  • Database migration scripts versioned alongside code

Resilience

The platform is designed to degrade gracefully rather than fail completely:

  • Cache layer unavailable: Rate limiting falls back to in-memory mode with stricter limits. No data loss — cache data is ephemeral.
  • AI provider unavailable: If one AI provider is down, requests can be routed to an alternative. If all providers are down, AI features are temporarily disabled with clear user messaging.
  • Payment processing unavailable: Existing subscriptions continue based on cached plan data. Stripe queues webhook events and delivers them on recovery.
  • Authentication provider unavailable: All auth-dependent features are unavailable (fail-closed design). Service resumes automatically when the provider recovers.
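
The AI-provider failover described above can be sketched as an ordered fallback chain. This is a hypothetical illustration (the function names and provider list are assumptions, not the production routing logic):

```python
class ProviderDown(Exception):
    """Raised when an AI provider is unreachable or erroring."""
    pass

def complete(providers, prompt):
    """Try each provider in order; return (provider_name, result) from the
    first that succeeds. If every provider is down, return None so the
    caller can disable AI features and show clear user messaging."""
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderDown:
            continue  # route the request to the next provider
    return None  # all providers down: feature temporarily disabled

def down(prompt):
    raise ProviderDown("provider outage")

def up(prompt):
    return "ok: " + prompt

print(complete([("primary", down), ("backup", up)], "hello"))
# ('backup', 'ok: hello')
print(complete([("primary", down), ("backup", down)], "hello"))
# None
```

The same fail-open pattern applies to payments (cached plan data), while authentication deliberately fails closed, as noted above.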

Disaster Recovery

We maintain documented recovery procedures for seven specific disaster scenarios:

  1. Database failure (instance crash or data corruption)
  2. Application deployment failure
  3. Cache layer failure
  4. Authentication provider (Clerk) outage
  5. AI provider outage
  6. Payment processing (Stripe) outage
  7. Complete infrastructure loss

Each scenario has step-by-step recovery procedures, estimated recovery times, and defined responsibilities.

Secret Rotation

Credentials and secrets are rotated on a defined schedule:

Secret Type             Rotation Frequency
Database credentials    Annually, or immediately on compromise
Session secrets         Annually
Internal admin secrets  Quarterly
Third-party API keys    Per vendor recommendation, or annually
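
A scheduled job could flag secrets that are past their rotation window using the intervals above. This is an illustrative sketch (the mapping keys and function name are hypothetical, not internal identifiers):

```python
from datetime import date, timedelta

# Rotation intervals from the schedule above (approximated in days).
ROTATION_INTERVAL = {
    "database_credentials": timedelta(days=365),
    "session_secrets": timedelta(days=365),
    "internal_admin_secrets": timedelta(days=90),
}

def rotation_overdue(secret_type, last_rotated, today):
    """Return True if a secret is past its scheduled rotation date."""
    return today - last_rotated > ROTATION_INTERVAL[secret_type]

today = date(2024, 5, 1)
print(rotation_overdue("internal_admin_secrets", date(2024, 1, 1), today))  # True
print(rotation_overdue("session_secrets", date(2024, 1, 1), today))         # False
```

Compromise-triggered rotation happens immediately regardless of the schedule; the calendar intervals are an upper bound, not a trigger.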

Uptime

We monitor application health continuously and respond to incidents per our defined severity levels and escalation procedures.

For security inquiries or to request our SOC 2 report, contact [email protected].