The Gamification Data Model: How to Structure Streaks, Achievements, Points & Leaderboards

Author
Jason Louro
Co-Founder, Trophy

The individual technical challenges of building gamification features are well documented at this point. Streak logic requires timezone-aware calendar evaluation, not 24-hour arithmetic. Leaderboards at scale require Redis sorted sets, per-segment key management, and atomic period resets. Achievement backfill requires idempotent scripts against event history, not denormalized totals. These are solvable problems, and teams that understand them have a clear path through each one.

What's less documented is the data model problem that sits underneath all of them. Each feature, solved correctly in isolation, still leaves you with four separate systems — streak state in one table, achievement completions in another, points in a third, leaderboards driven by Redis — connected by glue code.

The moment you want an achievement that triggers when a user reaches a streak milestone, or a points boost that only applies to a specific user cohort, or a leaderboard that ranks by points balance rather than raw activity, you're writing across those systems. The integration points are where bugs live, where performance degrades, and where the features you didn't build upfront become expensive to add.

This post describes what a unified gamification data model looks like in practice, starting with the event layer that ties everything together, then the per-feature config and state models that Trophy has built and operated at scale.

The Event Layer: One Pipeline for Everything

The architectural decision that shapes everything else is where gamification logic lives relative to your event stream. In most custom implementations, features are added incrementally: you build streak logic that fires when a user logs activity, then add achievement checks to the same handler, then wire up points awards, then start a separate cron job for leaderboard updates. The result is a set of interconnected side effects on a single event that's easy to reason about when there are two features and increasingly brittle as the number grows.

Trophy's model inverts this. Every user interaction flows through a single metric event: a lesson completed, a workout logged, a task checked off. Trophy evaluates that event against all configured gamification features simultaneously and produces a unified response containing the state changes across all of them:

// One event — Trophy evaluates streaks, achievements, points, and leaderboards
const response = await trophy.metrics.event('lessons_completed', {
  user: { id: userId, tz: 'America/New_York' },
  value: 1,
});

// Everything that changed as a result of this single event
response.currentStreak    // streak state after this event
response.achievements     // any achievements unlocked by this event
response.points           // points awarded by this event, across all systems
response.leaderboards     // updated leaderboard positions after this event

The response isn't a polling result — it's the transactional outcome of evaluating one event against the full gamification configuration. Nothing is eventually consistent from the application's perspective: the streak has already been evaluated, the achievements have already been checked, the points have already been awarded, and the leaderboard positions have already been updated by the time the response returns.

This matters for the data model because it means every gamification record is traceable to the originating event. An achievement completion, a points award, a streak extension — each has a foreign key to the metric event that caused it. The event ledger is the canonical source of truth for everything downstream.

Streaks: Config and Period State

A streak in Trophy is not a counter. It's a series of period records, each with a start date, an end date, a length, and an outcome.

The config layer holds everything that determines how a streak is evaluated: frequency (daily, weekly, monthly), the metrics and thresholds that constitute a qualifying action, how multiple metrics are combined (ALL vs OR logic), freeze settings (initial grant, accumulation rate, maximum). This config is what makes it possible to change streak requirements without touching application code — adding a second qualifying metric, adjusting the threshold, switching from daily to weekly — all as config changes that take effect on the next event.

The period layer is where the history lives. Each streak period is a row:

streak_periods
  user_id
  period_start      -- local calendar date in user's timezone
  period_end        -- local calendar date in user's timezone
  length            -- streak count at close of this period
  outcome           -- extended | broken | frozen
  closed_at         -- UTC timestamp when the period was finalised

This is the record that makes streak restoration a data operation rather than a guess, that powers longest-streak badges without table scans, and that enables the streak history calendar view without storing anything extra.
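As a minimal sketch (the type and field names here are illustrative assumptions, not Trophy's actual schema), both "longest streak" and "current streak" fall out of a single pass over a user's period rows, with no scan of raw events:

```typescript
// Sketch only — names and shapes are illustrative, not Trophy's schema.
type PeriodOutcome = 'extended' | 'broken' | 'frozen';

interface StreakPeriod {
  periodStart: string;   // local calendar date, e.g. '2024-03-01'
  periodEnd: string;     // local calendar date
  length: number;        // streak count at close of this period
  outcome: PeriodOutcome;
}

// Longest streak = max `length` across closed periods.
function longestStreak(periods: StreakPeriod[]): number {
  return periods.reduce((max, p) => Math.max(max, p.length), 0);
}

// Current streak = length of the most recent period, unless it broke.
function currentStreak(periods: StreakPeriod[]): number {
  const last = periods[periods.length - 1];
  if (!last || last.outcome === 'broken') return 0;
  return last.length;
}
```

Because each row already carries its closing length, badge checks and history views read the period table directly rather than re-deriving state from events.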

The freeze ledger is a separate table where every grant and every consumption is a row:

streak_freeze_events
  user_id
  event_type        -- granted | consumed
  created_at        -- UTC timestamp
  reason            -- initial_grant | accumulation | manual

Freeze support isn't a feature you add to this model — it's a natural extension of modelling streak state as events rather than a counter. A freeze consumption is just a period outcome of frozen with a corresponding consumed row in the freeze ledger. The balance is always derivable by summing the ledger; there's no separate count to go stale.
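A sketch of that derivation (field names are assumptions for illustration): the balance is a fold over the ledger, so there is no stored counter to drift out of sync.

```typescript
// Sketch only — names are illustrative. The freeze balance is never stored;
// it is derived by folding over the grant/consumption ledger.
type FreezeEventType = 'granted' | 'consumed';

interface FreezeEvent {
  eventType: FreezeEventType;
  createdAt: string; // UTC timestamp
}

function freezeBalance(ledger: FreezeEvent[]): number {
  return ledger.reduce(
    (bal, e) => bal + (e.eventType === 'granted' ? 1 : -1),
    0,
  );
}
```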

The timezone handling described in Streak Timezone & DST Handling is baked into the period model at the schema level: period_start and period_end are local calendar dates, not UTC timestamps. DST transitions don't affect period boundaries because calendar dates don't have lengths in hours.
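One way to derive those local calendar dates from a UTC instant (a sketch, not necessarily how Trophy implements it) is to lean on the runtime's timezone database via Intl, which handles DST transitions for free:

```typescript
// Sketch — deriving a local calendar date from a UTC instant.
// 'en-CA' formats as YYYY-MM-DD; the timeZone option does the DST-aware work.
function localCalendarDate(utc: Date, timeZone: string): string {
  return new Intl.DateTimeFormat('en-CA', {
    timeZone,
    year: 'numeric',
    month: '2-digit',
    day: '2-digit',
  }).format(utc);
}
```

An event at 03:30 UTC on 10 March lands on 9 March for a New York user, and that calendar date is what the period row stores.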

Achievements: Config, Progress, and Completions

Achievement data separates into three distinct concerns.

The config layer defines what an achievement is and when it triggers. Trophy supports four trigger types — metric threshold, streak length, API call, and composite — and each config row references the relevant entity (a metric key, a streak frequency, prerequisite achievement IDs). Config rows also carry attribute filters: a subject:physics filter on a metric achievement means the achievement only fires for metric events from users with that attribute, without any application-layer branching.

Achievement status (inactive, active, locked, archived) is part of config, and the status transition is what drives automatic backdating. When Trophy moves an achievement from inactive to active, it evaluates every existing user's metric totals and streak history against the achievement condition and creates completion records for qualifying users in bulk — the backfill happens automatically, and the achievement.completed webhook is suppressed for backdated completions to prevent notification floods. The full mechanics of this are covered in How to Backfill Achievements for Existing Users.

The progress layer tracks how far a user is toward each metric achievement's threshold:

achievement_progress
  user_id
  achievement_id
  current_value     -- derived from the user's metric total
  threshold         -- copied from config at progress record creation
  pct_complete      -- precomputed, updated on each metric event

This is updated as a side effect of metric events, not computed on demand. A progress bar query is a single row lookup, not an aggregate against the full event history.
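A minimal sketch of that side effect (the row shape is an assumption mirroring the schema above, not Trophy's actual code): the pipeline bumps the current value and precomputes the percentage so reads stay a single-row lookup.

```typescript
// Sketch — illustrative shape of a progress row kept in step with metric totals.
interface AchievementProgress {
  currentValue: number;
  threshold: number;   // copied from config at row creation
  pctComplete: number; // precomputed so display queries are one row lookup
}

// Called from the event pipeline, not at display time.
function applyMetricEvent(
  p: AchievementProgress,
  value: number,
): AchievementProgress {
  const currentValue = p.currentValue + value;
  return {
    ...p,
    currentValue,
    pctComplete: Math.min(100, (currentValue / p.threshold) * 100),
  };
}
```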

The completion layer is an append-only ledger:

achievement_completions
  id
  user_id
  achievement_id
  completed_at
  trigger_event_id  -- the metric event or API call that caused this
  backdated         -- boolean

Completion records are never updated — they're facts. The trigger_event_id foreign key means every completion is traceable to the originating event, which makes it possible to audit why a user received an achievement, replay the evaluation for debugging, and correctly handle the idempotent re-runs described in the backfill post.

Rarity (the percentage of users who have earned each achievement) is maintained as a precomputed field on the config row, updated incrementally as completion records are created rather than computed as an expensive aggregate at display time.
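The incremental update is cheap enough to run on every completion write. A sketch under assumed names (not Trophy's internals):

```typescript
// Sketch — rarity maintained incrementally on the config row,
// instead of an aggregate over the completions table at read time.
interface AchievementRarity {
  completionCount: number;
  rarityPct: number;
}

function onCompletionCreated(
  cfg: AchievementRarity,
  totalUsers: number,
): AchievementRarity {
  const completionCount = cfg.completionCount + 1;
  return { completionCount, rarityPct: (completionCount / totalUsers) * 100 };
}
```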

Points: Ledger, Triggers, Boosts, and Levels

Points are the most compositionally complex feature because they sit at the intersection of every other feature. Points can be awarded when a metric threshold is reached, when a streak milestone is hit, when an achievement is completed, on a time schedule, or at user signup. The data model has to represent all of these trigger types cleanly and produce an auditable award history.

The system config layer defines each points currency: key, name, display badge, and optional cap. Trophy supports multiple independent systems per app — XP and gems, for example — and each system has its own trigger configuration and ledger. The multi-currency case isn't a special case in the schema; it's just multiple rows in the systems table.

The trigger config layer defines the rules for each system. Each trigger is a row with a type, a reference to the triggering entity (a metric key and threshold, a streak length, an achievement ID, a time interval), a points value, an active/inactive status, and optional user attribute filters. The attribute filters on triggers are what make cohort-specific points rates possible — premium users earn 2× points for lesson completions, free users earn 1× — without any application-layer branching.
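A sketch of the filter check (the names are illustrative assumptions): a trigger applies when every filter key matches the user's attributes, with a null filter meaning "everyone".

```typescript
// Sketch — trigger-level attribute filters evaluated against user attributes.
type Attributes = Record<string, string>;

interface PointsTrigger {
  points: number;
  attributeFilters: Attributes | null; // null = applies to all users
}

function triggerApplies(trigger: PointsTrigger, user: Attributes): boolean {
  if (!trigger.attributeFilters) return true;
  return Object.entries(trigger.attributeFilters).every(
    ([key, value]) => user[key] === value,
  );
}
```

Two trigger rows with different filters and different point values is all the "premium users earn 2×" case requires; the application never branches on plan.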

The award ledger is the canonical source of truth:

points_awards
  id
  user_id
  points_system_id
  amount            -- net points after boost multiplication
  base_amount       -- pre-boost amount, for audit
  boost_multiplier  -- the combined multiplier active at award time
  trigger_id        -- which trigger config row fired
  source_event_id   -- the originating metric event, achievement, etc.
  awarded_at

amount and base_amount are separate columns because boost audit matters. When a user queries their points history and sees an award of 20 points, the award record shows that the base amount was 10 and the boost multiplier was 2.0. You can always reconstruct what the user saw, which is important for support tickets and for understanding the impact of past boosts.

The boost layer is a config table with time windows and multiplier values:

points_boosts
  id
  points_system_id
  multiplier
  rounding_mode     -- floor | ceil | nearest
  starts_at
  ends_at
  user_attribute_filters  -- JSON; null means global
  status

When a points award is created, Trophy evaluates every active boost applicable to the user and multiplies them together. The combined multiplier is stored on the award record. Boost stacking (a 2× global boost and a 1.5× personal boost producing a 3× combined multiplier) is a consequence of the multiplicative evaluation rule, not special-case logic.
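The multiplicative rule and the per-boost rounding mode can be sketched as follows (function names are assumptions for illustration):

```typescript
// Sketch — the combined multiplier is the product of all active,
// applicable boosts; rounding is applied once, to the final amount.
type RoundingMode = 'floor' | 'ceil' | 'nearest';

function combinedMultiplier(multipliers: number[]): number {
  return multipliers.reduce((acc, m) => acc * m, 1);
}

function applyBoosts(
  baseAmount: number,
  multipliers: number[],
  mode: RoundingMode,
): number {
  const raw = baseAmount * combinedMultiplier(multipliers);
  if (mode === 'floor') return Math.floor(raw);
  if (mode === 'ceil') return Math.ceil(raw);
  return Math.round(raw);
}
```

Storing the combined multiplier on the award row is what lets a 10-point base with a 2× global and a 1.5× personal boost be reconstructed later as 10 × 3.0 = 30.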

Points boosts are the clearest example of a feature the data model either supports structurally or requires a redesign to add. A schema that stores points as a simple total plus a list of events has no natural place for "what multiplier was active when this award was created." Trophy's ledger carries this from the start.

The levels layer stores threshold configuration:

points_levels
  id
  points_system_id
  key
  name
  threshold
  badge_url

A user's current level is determined by finding the highest threshold their current balance clears. Trophy evaluates this on every points change and fires a points.level_changed webhook when the level transitions — the event payload contains both the previous level and the new level, so the application knows exactly what changed without querying current state separately.
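The evaluation itself is a one-liner over the config rows. A sketch with assumed names:

```typescript
// Sketch — current level is the highest threshold the balance clears.
interface Level {
  key: string;
  threshold: number;
}

function currentLevel(balance: number, levels: Level[]): Level | null {
  return levels
    .filter((l) => balance >= l.threshold)
    .reduce<Level | null>(
      (best, l) => (best === null || l.threshold > best.threshold ? l : best),
      null,
    );
}
```

Running this before and after each award, and comparing the two results, is all the `points.level_changed` detection needs.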

Leaderboards: Config, Segments, and Rank History

The technical complexity of leaderboard infrastructure — Redis sorted set management, segment key explosion, atomic period resets — is covered in Scaling App Leaderboards: Redis Architecture and Where Trophy Fits. The data model question is distinct: even if you solve the infrastructure, what do you store and how?

The config layer defines each leaderboard: ranking method (metric, points, or streak), period type (perpetual or repeating), participant limit, and breakdown attributes. Breakdown attributes are the config-level representation of segmentation — a city breakdown attribute means Trophy automatically maintains a separate leaderboard segment for every distinct city value in the user base. The segments are not explicitly provisioned; they emerge from the attribute values Trophy has seen.
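Deriving segment keys from a user's attributes is straightforward; a sketch (key format and names are assumptions, not Trophy's internal convention):

```typescript
// Sketch — segment keys derived from the user's attribute values.
// Segments are never provisioned; they emerge from the values seen.
function segmentKeys(
  breakdownAttrs: string[],
  user: Record<string, string>,
): string[] {
  return breakdownAttrs
    .filter((attr) => user[attr] !== undefined)
    .map((attr) => `${attr}:${user[attr]}`);
}
```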

The rankings layer is the live sorted state, maintained in Redis. Every leaderboard has one sorted set per active segment, updated as metric events arrive. The Redis infrastructure and the reasoning for it are detailed in the leaderboard scaling post — the schema point is that this is ephemeral optimized storage, not the record of what happened.

The rank history layer is where Trophy diverges sharply from a custom implementation:

leaderboard_rank_events
  id
  leaderboard_id
  segment_key         -- NULL for global, attribute:value for segments
  user_id
  previous_rank
  new_rank
  occurred_at

Every rank change for every participant, in every segment, is a row. This table is what makes leaderboard.rank_changed webhooks possible without polling: Trophy inserts a row, fires the webhook, and delivers the previous and new rank in the payload. It's also what makes "you were #3 in your city last week" queries possible without reconstructing history from raw events.
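The rows themselves come from diffing the rankings before and after an update. A sketch under assumed names:

```typescript
// Sketch — diffing ranks before and after an update yields the rank-event
// rows (and the webhook payloads), with null for new entrants.
interface RankEvent {
  userId: string;
  previousRank: number | null;
  newRank: number;
}

function diffRanks(
  before: Map<string, number>,
  after: Map<string, number>,
): RankEvent[] {
  const events: RankEvent[] = [];
  for (const [userId, newRank] of after) {
    const previousRank = before.get(userId) ?? null;
    if (previousRank !== newRank) {
      events.push({ userId, previousRank, newRank });
    }
  }
  return events;
}
```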

The period archive layer stores the final rankings when a repeating leaderboard period closes:

leaderboard_period_archives
  id
  leaderboard_id
  segment_key
  period_start
  period_end
  rankings          -- JSON array of {userId, rank, value}
  finalised_at

Period archives are the queryable history of every leaderboard run. The leaderboard finalisation process — evaluate all users across all timezones, wait for the last timezone to pass midnight, archive the final state, reset the live sorted set, fire leaderboard.finished — writes to this table atomically before clearing Redis. If you query a historical leaderboard run through the API, you're reading from this archive, not reconstructing from events.
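The snapshot step can be sketched as turning the live score map into the immutable `rankings` JSON before the live structure is cleared (names and the in-memory Map stand in for the Redis sorted set here):

```typescript
// Sketch — snapshotting live sorted state into an immutable archive row.
// A Map stands in for the Redis sorted set for illustration.
interface ArchivedRanking {
  userId: string;
  rank: number;
  value: number;
}

function snapshotRankings(scores: Map<string, number>): ArchivedRanking[] {
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1]) // highest score first
    .map(([userId, value], i) => ({ userId, rank: i + 1, value }));
}
```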

Cross-Feature References: Where Custom Models Break Down

The individual models above are each technically achievable in a custom implementation. The harder problem is the foreign keys between them.

A points trigger that fires on streak milestone completion references both the points system config and the streak config. A composite achievement references the completion records of its prerequisite achievements. A leaderboard ranked by points balance references the points award ledger to determine participant scores. A points.level_changed event needs to fire when a points award causes a threshold crossing, which means the award write and the level evaluation happen in the same transaction.

In Trophy, these cross-feature references are structural — the foreign keys exist in the schema, and the evaluation logic that traverses them runs inside the same transactional boundary as the originating event. In a custom implementation, the equivalent is glue code: explicit queries from the achievement handler to the streak state table, explicit calls from the points award function to the level check function, explicit webhook dispatches after each state change. Each piece of glue is a place where something can go wrong silently, run in the wrong order, or produce inconsistent state under concurrent updates.

The features that are hardest to add to a custom implementation later — streak freezes, points boosts, composite achievements, rank-change notifications — are all features that require new cross-feature references. Streak freezes need the freeze ledger to reference streak periods. Points boosts need the award record to reference the active boost config. Composite achievements need a prerequisite graph traversal at evaluation time. Rank-change notifications need the rank history table to be populated atomically with the sorted set update.

These aren't design oversights — they're features teams typically don't know they want until they've shipped without them. Trophy's schema carries them from the start because they were always part of the model, not added later.

FAQ

If Trophy stores all this state, what do I own in my own database?

Your application data — user records, content, purchases, the things that make your product what it is. Trophy stores the gamification state that's derived from the events you send. The two are linked by user ID. You don't need to replicate gamification state into your own schema — you read it from Trophy's API as needed, or receive it in webhook payloads as it changes.

How does Trophy handle high event volumes without the points and achievement checks becoming a bottleneck?

The evaluation pipeline is designed for throughput, not just correctness. Achievement progress updates are async; the metric event response includes achievement completions that have already crossed the threshold, but progress increments for achievements not yet complete are written asynchronously, after the response returns. Points awards, leaderboard updates, and streak evaluations are synchronous in the critical path because they affect the response the application receives. The distinction between what must be consistent on return and what can be eventually consistent is baked into the evaluation order.

Can I have one metric trigger achievements in one system and points in another simultaneously?

Yes. A single metric key can be referenced by multiple achievement configs and multiple points trigger configs, across multiple points systems. When an event arrives, Trophy evaluates all of them. The response contains the union of everything that changed — streak extension, achievement completions, points awards across all applicable systems, leaderboard position updates.

What happens if I add a new achievement or a new points trigger to users who already have existing history?

For metric and streak achievements, Trophy backdates automatically when you activate them — any user whose current totals meet the threshold receives the completion. For new points triggers, previously recorded events are not retroactively re-evaluated; the trigger applies from activation forward. If you need to award points to existing users for past activity, the Admin API allows creating point award records directly.

Where to Go Next

The per-feature technical challenges that motivate this data model are covered in detail across the series: Streak Timezone & DST Handling, Scaling App Leaderboards Beyond Basic Redis, and How to Backfill Achievements for Existing Users. The full configuration reference for each feature is in the Trophy documentation: Streaks, Achievements, Points, and Leaderboards.

