Shane Hoban

MSc. Computer Science (UCD) · Full Stack Developer

Crosshaven, Cork, Ireland

Building hobnb: a private-first booking app

20 February 2026
ai-assisted · tanstack start · better auth · drizzle orm · sqlite · coolify · nixpacks · testing · migrations

I built hobnb as a lightweight, private-first booking app for trusted people. The product goal was straightforward booking UX, but the engineering goal was reliability: I wanted a system that could survive normal mistakes, redeploys, and schema changes without drama.

Why this project mattered

I was not trying to build a generic booking SaaS. I was solving a narrower problem:

  • trusted users
  • clear booking approvals
  • no double-booking surprises
  • predictable operation on modest infrastructure

That narrow scope helped me make pragmatic architecture decisions and focus on safety where it actually matters.

Product and UX priorities

At a product level, hobnb is centered around:

  • Booking flow: choose dates, add notes, submit
  • Approval model: admins can approve or decline requests; trusted users can be granted instant-book
  • Visibility: notifications, status-aware UI, and calendar-based clarity

The first account auto-promotes to super-admin, and subsequent users require approval. That keeps onboarding simple while preserving admin control.
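The auto-promotion rule is simple to state: if the account being created is the first one, it becomes super-admin; otherwise it starts unapproved. A minimal Python/SQLite sketch of that idea (the table and column names here are hypothetical, not hobnb's actual schema, and the real app implements this in TypeScript):

```python
import sqlite3

def register_user(conn: sqlite3.Connection, email: str) -> str:
    """Create a user. The very first account becomes super-admin;
    later accounts start as 'pending' until an admin approves them.
    Schema names are illustrative, not hobnb's real schema."""
    (count,) = conn.execute("SELECT COUNT(*) FROM users").fetchone()
    role = "super-admin" if count == 0 else "pending"
    conn.execute("INSERT INTO users (email, role) VALUES (?, ?)", (email, role))
    conn.commit()
    return role

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, role TEXT)")
first = register_user(conn, "owner@example.com")
second = register_user(conn, "guest@example.com")
```

Doing the count and insert in one place keeps the invariant easy to audit: there is exactly one code path that decides roles at signup.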

Stack and architecture choices

hobnb uses:

  • TanStack Start + React 19 for route-driven full-stack app structure
  • Better Auth for account/session workflows
  • Drizzle ORM + SQLite for fast, simple persistence
  • Tailwind v4 + shadcn/ui for predictable UI composition
  • Vitest + TypeScript + Biome for quality gates

I deliberately avoided unnecessary moving parts. SQLite is enough for this use case when combined with strict operational discipline.
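Part of that operational discipline is how the SQLite connection itself is configured. The exact pragmas hobnb sets are not shown here, but WAL mode, enforced foreign keys, and a busy timeout are typical choices for a single-node app, sketched below in Python (the project itself reaches SQLite through Drizzle):

```python
import sqlite3

def open_db(path: str) -> sqlite3.Connection:
    """Open SQLite with conservative settings: WAL so reads stay safe
    during writes, enforced foreign keys, and a busy timeout instead of
    immediate lock errors. Hypothetical defaults, not hobnb's verbatim config."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA foreign_keys=ON")
    conn.execute("PRAGMA busy_timeout=5000")
    return conn

conn = open_db(":memory:")
(fk,) = conn.execute("PRAGMA foreign_keys").fetchone()
```

Centralizing these settings in one `open_db`-style helper means every code path inherits the same guarantees.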

Deployment model (Coolify + Nixpacks)

Deployment is intentionally explicit in nixpacks.toml:

  • install dependencies
  • build the app
  • prune runtime dependencies
  • start with a migration-safe command
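The real nixpacks.toml is not reproduced here, but a file with that shape could look roughly like the following. The phase layout follows Nixpacks conventions; the install/build/prune commands are assumptions, while the start command is the one documented below:

```toml
# Hypothetical sketch of the phases described above,
# not hobnb's verbatim nixpacks.toml.
[phases.install]
cmds = ["pnpm install --frozen-lockfile"]

[phases.build]
cmds = ["pnpm build"]

[phases.prune]
cmds = ["pnpm prune --prod"]

[start]
cmd = "pnpm db:deploy-migrate && node .output/server/index.mjs"
```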

Runtime startup command:

pnpm db:deploy-migrate && node .output/server/index.mjs

Critical operational rule: persistent storage must be mounted at /app/data and DB_PATH must point to /app/data/sqlite.db, otherwise data will be lost on redeploy.
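A rule like that is easy to enforce in code rather than in docs alone. Whether or not hobnb ships such a guard, a fail-fast startup check is cheap insurance; a hypothetical sketch:

```python
# Hypothetical startup guard; hobnb's actual startup code is not shown here.

def assert_persistent_db_path(db_path: str) -> None:
    """Refuse to start if DB_PATH is not on the persistent mount.
    Without this, a redeploy silently starts from an empty database."""
    if not db_path.startswith("/app/data/"):
        raise SystemExit(f"DB_PATH={db_path!r} is not under /app/data; refusing to start")

assert_persistent_db_path("/app/data/sqlite.db")  # passes silently

try:
    assert_persistent_db_path("/tmp/sqlite.db")   # ephemeral path: should fail
    rejected = False
except SystemExit:
    rejected = True
```

Failing at boot turns a data-loss incident into a deploy error, which is the cheaest place to catch it.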

Migration safety scripts and process

This is the part I care most about.

1. Safe migration wrapper

bash scripts/safe-migrate.sh is the canonical migration path. It enforces:

  1. pre-migration integrity check (check-db-integrity.js)
  2. backup creation (backup-db.sh)
  3. migration apply
  4. post-migration integrity check

If the pre-migration check fails, the script exits before touching the database. If the post-migration check fails, it flags the failure and stops for manual intervention.
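The real wrapper is a bash script, but the check-backup-migrate-recheck order is language-independent. A Python sketch of the same control flow (the `apply_migrations` callable stands in for the actual Drizzle migration step, and the inline file copy stands in for delegating to the backup script):

```python
import shutil
import sqlite3
import tempfile
import time
from pathlib import Path

def integrity_ok(db: Path) -> bool:
    conn = sqlite3.connect(db)
    (result,) = conn.execute("PRAGMA integrity_check").fetchone()
    conn.close()
    return result == "ok"

def safe_migrate(db: Path, backup_dir: Path, apply_migrations) -> None:
    """Mirror the four-step order above: check, back up, migrate, re-check.
    Sketch only; the real path is bash scripts/safe-migrate.sh."""
    if not integrity_ok(db):
        raise SystemExit("pre-migration integrity check failed; aborting")
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d_%H%M%S")
    shutil.copy2(db, backup_dir / f"sqlite_backup_{stamp}.db")
    apply_migrations(db)
    if not integrity_ok(db):
        raise SystemExit("post-migration integrity check failed; restore from backup")

# Demo: create a tiny DB, then "migrate" it by adding a column.
tmp = Path(tempfile.mkdtemp())
db = tmp / "sqlite.db"
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE bookings (id INTEGER PRIMARY KEY, start_date TEXT)")
conn.commit()
conn.close()

def add_note_column(path: Path) -> None:
    c = sqlite3.connect(path)
    c.execute("ALTER TABLE bookings ADD COLUMN note TEXT")
    c.commit()
    c.close()

safe_migrate(db, tmp / "backups", add_note_column)
backup_files = list((tmp / "backups").glob("sqlite_backup_*.db"))
cols = [row[1] for row in sqlite3.connect(db).execute("PRAGMA table_info(bookings)")]
```

The point of the wrapper is that skipping a step requires deliberately bypassing the script, not merely forgetting.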

2. Backup flow

bash scripts/backup-db.sh creates timestamped backups and keeps retention bounded with MAX_BACKUPS.

Key details:

  • uses SQLite backup API via scripts/sqlite-backup.js
  • supports WAL-safe backup behavior
  • verifies backup file exists and has non-zero size
  • prunes old backups automatically
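Python's standard library exposes the same SQLite online backup API that scripts/sqlite-backup.js uses through its driver, so the whole flow above can be sketched in a few lines (filenames and the retention constant mirror the conventions described in this post; this is not the project's actual script):

```python
import sqlite3
import tempfile
import time
from pathlib import Path

MAX_BACKUPS = 5  # retention bound, mirroring the MAX_BACKUPS setting above

def backup_db(db_path: Path, backup_dir: Path) -> Path:
    """Timestamped, WAL-safe backup via SQLite's online backup API,
    then verify the file is non-empty and prune the oldest backups."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / f"sqlite_backup_{time.strftime('%Y%m%d_%H%M%S')}.db"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest)
    src.backup(dst)  # consistent even while the app holds the DB open
    src.close()
    dst.close()
    if dest.stat().st_size == 0:
        raise SystemExit("backup verification failed: empty file")
    backups = sorted(backup_dir.glob("sqlite_backup_*.db"))
    for old in backups[:-MAX_BACKUPS]:
        old.unlink()  # keep only the newest MAX_BACKUPS files
    return dest

# Demo: back up a small WAL-mode database and read the copy back.
tmp = Path(tempfile.mkdtemp())
db = tmp / "sqlite.db"
conn = sqlite3.connect(db)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE bookings (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO bookings DEFAULT VALUES")
conn.commit()
dest = backup_db(db, tmp / "backups")
conn.close()
(count,) = sqlite3.connect(dest).execute("SELECT COUNT(*) FROM bookings").fetchone()
```

Using the backup API rather than `cp` matters in WAL mode: a raw copy can miss pages still sitting in the -wal file.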

3. Integrity checks

node scripts/check-db-integrity.js and pnpm db:check validate DB health and policy expectations, including role constraints.

This is where I encoded operational invariants so mistakes show up early.
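The shape of such a check is worth showing: combine SQLite's own integrity check with app-level invariants. A Python sketch of the idea behind check-db-integrity.js (schema and message strings are illustrative, not the real script's output):

```python
import sqlite3

def check_db(conn: sqlite3.Connection) -> list[str]:
    """Combine SQLite's integrity check with app-level invariants such as
    'at least one super-admin exists'. Sketch only; schema names assumed."""
    problems: list[str] = []
    (result,) = conn.execute("PRAGMA integrity_check").fetchone()
    if result != "ok":
        problems.append(f"integrity_check: {result}")
    (admins,) = conn.execute(
        "SELECT COUNT(*) FROM users WHERE role = 'super-admin'"
    ).fetchone()
    if admins == 0:
        problems.append("no super-admin account: lockout risk")
    elif admins == 1:
        problems.append("warning: only one super-admin (low redundancy)")
    return problems

# Demo: a healthy DB with a single super-admin still earns a redundancy warning.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, role TEXT)")
conn.execute("INSERT INTO users (role) VALUES ('super-admin')")
problems = check_db(conn)
```

Returning a list of findings rather than exiting on the first one means a single run surfaces every violated invariant.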

4. Deploy-time migration behavior

bash scripts/deploy-migrate.sh is run on container start through db:deploy-migrate.

Behavior differs by state:

  • fresh deploy (no DB): apply migrations, seed default listing
  • existing DB: integrity check -> backup -> apply migrations -> integrity check -> seed default listing

This means deploy-time schema evolution is automatic but guarded.
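The state-dependent branching above can be sketched with the concrete steps abstracted away as callables (stand-ins for the Drizzle migration, seeding, integrity-check, and backup scripts; only the ordering is the point):

```python
import tempfile
from pathlib import Path

def deploy_migrate(db_path: Path, migrate, seed, check, backup) -> str:
    """Sketch of deploy-migrate.sh's branching, not the script itself."""
    if not db_path.exists():
        migrate(db_path)  # fresh deploy: nothing to protect yet
        seed(db_path)
        return "fresh"
    check(db_path)        # existing DB: guard before touching it
    backup(db_path)
    migrate(db_path)
    check(db_path)
    seed(db_path)
    return "existing"

def step(log: list, name: str):
    """Stub step that just records its name, so we can observe ordering."""
    return lambda path: log.append(name)

calls_fresh: list = []
fresh_db = Path(tempfile.mkdtemp()) / "sqlite.db"  # does not exist yet
mode_fresh = deploy_migrate(
    fresh_db,
    step(calls_fresh, "migrate"), step(calls_fresh, "seed"),
    step(calls_fresh, "check"), step(calls_fresh, "backup"),
)

calls_existing: list = []
existing_db = Path(tempfile.mkdtemp()) / "sqlite.db"
existing_db.touch()  # simulate a DB from a previous deploy
mode_existing = deploy_migrate(
    existing_db,
    step(calls_existing, "migrate"), step(calls_existing, "seed"),
    step(calls_existing, "check"), step(calls_existing, "backup"),
)
```

Keeping the branch on "does the database file exist" makes first boot and every later redeploy share one entry point.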

Incident prevention I intentionally baked in

I documented and enforced several guardrails because data loss usually comes from routine operations, not exotic failures.

Foreign key and cascade awareness

The docs explicitly call out risky cascades (for example deleting users and related auth/booking data). The point is not to ban deletion, but to stop accidental destructive changes.

Super-admin redundancy checks

Integrity logic includes super-admin constraints and warns when redundancy is low. That prevents accidental lockout scenarios.

Migration + backup discipline

I treat "migration without backup" as a process failure. The scripts are designed so the safe path is the default path.

Smoke-test capability

db:smoke and related scripts make it easier to validate migration/seed/integrity behavior in a controlled way before relying on production deploys.

Incident response playbook

If something goes wrong in production, the intended process is simple:

  1. stop making changes
  2. run integrity check
  3. restore latest known-good backup if needed
  4. re-check integrity
  5. restart app

Typical restore sequence:

cp /app/data/backups/sqlite_backup_YYYYMMDD_HHMMSS.db /app/data/sqlite.db
pnpm db:check

The real win is not the restore command itself; it is having predictable backup artifacts and documented steps before you need them.

Testing and quality gates

Quality gates are intentionally boring and strict:

  • Biome lint/format checks
  • TypeScript type-checking
  • Vitest test runs
  • coverage gate in CI
  • migration smoke verification

That baseline keeps regressions and schema mistakes from silently reaching deploy.

What I would improve next

  • richer admin analytics (lead time, approval latency, cancellation trends)
  • user-level notification preferences
  • deeper audit timeline tooling for admin actions

The current foundation is stable enough that these can be layered in incrementally.

Takeaway

The most important part of hobnb is not a fancy feature. It is that the system is designed to be operated safely by default.

For small to medium apps, reliability usually comes from:

  • clear constraints
  • explicit scripts
  • predictable deploy flow
  • boring recovery process

That is the standard I tried to hit with hobnb.