Measurement Infrastructure
Thothy Research Desk · 5 min read

How Serverless Postgres Turns Growth Tactics Into Kill-or-Scale Decisions

Growth loops do not become operational until every tactic leaves behind comparable evidence in the database.

Evidence Stack

  • Serverless Postgres (database substrate): Neon describes itself as serverless Postgres with separated storage and compute, autoscaling, database branching, and scale to zero.
  • Database branching (experiment safety): Neon docs position branching as a way to branch data for development, testing, and CI/CD workflows.
  • Churn analytics (retention signal): Retail churn research frames retention as economically important because acquisition often costs more than retaining existing customers.
  • Preference history (memory pattern): MemRerank argues that long raw purchase histories can be noisy or mismatched, motivating more selective preference memory.

1. Define each tactic as a relational entity.
2. Log activation and retention events against that entity.
3. Branch, test, measure, and decide whether to keep or kill the tactic.

Thesis

A Growth Loop Needs a Schema Before It Needs a Dashboard

The practical measurement problem is not whether a team can collect more events. It is whether acquisition, retention, publishing, and revenue events can be joined back to the tactic that caused them. For Thothy, that makes the relational schema the control plane: subscribers, reports, videos, products, feedback, and tactic events need stable keys before any kill-or-scale decision is credible.

Neon is a fitting substrate for this kind of loop because it is still Postgres while adding serverless properties. Its project README describes separated storage and compute, autoscaling, code-like database branching, and scale to zero; Neon’s own documentation similarly presents serverless Postgres with autoscaling, branching, and restore capabilities.[1][6]

Implementation Lens

Drizzle Should Encode the Experiment Contract, Not Just the Tables

In a growth system, an ORM schema is more than a convenience layer. It is the typed contract that says which entities exist, which events count, and which relationships are allowed to drive analysis. A Drizzle schema for Thothy should make tactics, surfaces, activation events, affiliate clicks, feedback, and retention observations explicit rather than burying them in one generic analytics blob.

This matters because retention analysis is only useful when the drivers of return behavior are visible. Research on churn analytics in online retail argues that firms invest in churn modeling because retention is economically important, while opaque models can limit insight into why attrition happens.[5]

  • Minimum useful entities: tactic, content surface, visitor or subscriber identity, activation event, downstream action, review window, and decision outcome.
  • Minimum useful joins: event-to-surface, surface-to-tactic, tactic-to-strategy, and tactic-to-review decision.
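
The entities and joins above can be sketched as a typed contract. This is a minimal illustration in plain TypeScript (in practice these would become Drizzle `pgTable` definitions); every name, field, and value here is a hypothetical example, not Thothy's actual schema.

```typescript
// Hypothetical typed contract for the growth-measurement entities.
// All identifiers and fields are illustrative assumptions.

type DecisionOutcome = "keep" | "iterate" | "kill";

interface Tactic {
  id: string;
  strategyId: string;        // tactic-to-strategy join
  hypothesis: string;
  activationEventName: string;
}

interface ContentSurface {
  id: string;
  tacticId: string;          // surface-to-tactic join
  kind: "report" | "video" | "product";
}

interface ActivationEvent {
  id: string;
  surfaceId: string;         // event-to-surface join
  subscriberId: string;
  occurredAt: Date;
}

interface ReviewDecision {
  tacticId: string;          // tactic-to-review-decision join
  reviewWindowDays: number;
  baseline: number;
  target: number;
  result: number;
  outcome: DecisionOutcome;
}

// A concrete row, showing that the joins line up by stable keys.
const decision: ReviewDecision = {
  tacticId: "tactic_affiliate_cards",
  reviewWindowDays: 14,
  baseline: 0.08,
  target: 0.12,
  result: 0.05,
  outcome: "kill",
};

console.log(decision.outcome);
```

The point of the contract is that a kill-or-scale decision can only reference entities that exist in the type system, which is exactly the discipline the ORM schema should enforce.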

Measurement

Event Tables Convert Wow Moments Into Reviewable Evidence

A Wow Moment cannot remain a story in a strategy doc. It needs an activation event, a timestamp, a surface, and a review window. When those values live in Postgres event tables, the team can ask the operational question directly: did people who hit the activation event return, subscribe, click, or purchase at a higher rate than those who did not?
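
That operational question reduces to a cohort comparison over the event tables. A minimal sketch, assuming hypothetical row and field names (in practice this would be a SQL query over the Postgres event tables):

```typescript
// Compare return rates for visitors who did vs. did not hit the
// activation event. Row shape and sample data are illustrative.

interface VisitorOutcome {
  visitorId: string;
  activated: boolean; // hit the tactic's activation event
  returned: boolean;  // came back within the review window
}

function returnRate(rows: VisitorOutcome[], activated: boolean): number {
  const cohort = rows.filter((r) => r.activated === activated);
  if (cohort.length === 0) return 0;
  return cohort.filter((r) => r.returned).length / cohort.length;
}

const rows: VisitorOutcome[] = [
  { visitorId: "a", activated: true, returned: true },
  { visitorId: "b", activated: true, returned: true },
  { visitorId: "c", activated: true, returned: false },
  { visitorId: "d", activated: false, returned: false },
  { visitorId: "e", activated: false, returned: true },
  { visitorId: "f", activated: false, returned: false },
];

// Positive lift means the activated cohort returned at a higher rate.
const lift = returnRate(rows, true) - returnRate(rows, false);
console.log(lift.toFixed(2));
```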

The same discipline applies to personalization and product intelligence. MemRerank argues that simply appending raw purchase history to prompts can be ineffective because histories may be noisy, long, or mismatched to the current need. That is also a database lesson: store the raw event stream, but promote only selected, queryable signals into the memory layer that powers recommendations and review decisions.[3]

Layer          | Stored Evidence                           | Decision It Enables
Raw events     | Views, clicks, scrolls, saves, feedback   | What happened on the surface
Derived facts  | Activation status, cohort, source, tactic | Whether the tactic created a measurable signal
Review records | Baseline, target, result, decision        | Whether to scale, iterate, or kill
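
The promotion step from the raw-event layer to the derived-fact layer can be sketched as a pure function. The event names and the activation rule here (one save or feedback event counts as activation) are assumptions for illustration only:

```typescript
// Promote raw events into one derived fact per subscriber-surface pair.
// Event vocabulary and activation rule are illustrative assumptions.

interface RawEvent {
  subscriberId: string;
  surfaceId: string;
  name: "view" | "click" | "scroll" | "save" | "feedback";
  occurredAt: string; // ISO timestamp
}

interface DerivedFact {
  subscriberId: string;
  surfaceId: string;
  activated: boolean; // rule: at least one save or feedback event
}

function deriveFacts(events: RawEvent[]): DerivedFact[] {
  const byKey = new Map<string, DerivedFact>();
  for (const e of events) {
    const key = `${e.subscriberId}:${e.surfaceId}`;
    const fact = byKey.get(key) ?? {
      subscriberId: e.subscriberId,
      surfaceId: e.surfaceId,
      activated: false,
    };
    if (e.name === "save" || e.name === "feedback") fact.activated = true;
    byKey.set(key, fact);
  }
  return [...byKey.values()];
}

const facts = deriveFacts([
  { subscriberId: "s1", surfaceId: "r1", name: "view", occurredAt: "2026-01-01T00:00:00Z" },
  { subscriberId: "s1", surfaceId: "r1", name: "save", occurredAt: "2026-01-01T00:05:00Z" },
  { subscriberId: "s2", surfaceId: "r1", name: "view", occurredAt: "2026-01-01T01:00:00Z" },
]);
console.log(facts.length); // one fact per subscriber-surface pair
```

Keeping this as a deterministic transformation over stored raw events means the derived layer can always be rebuilt, which is what makes the review record auditable.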

Reliability

Branching Makes Measurement Changes Testable Before They Touch Production

Measurement infrastructure is easy to break because the schema, event names, and dashboards must move together. Neon’s branching model is useful here: its docs describe branching data for development, testing, and CI/CD, while third-party technical coverage explains Neon’s architecture in terms of separated compute and storage with database branching built on copy-on-write cloning.[4][8]

For Thothy, that means an experiment can add an activation event, migrate a Drizzle schema, backfill a small cohort, and run reporting queries on a branch before the production measurement ledger is changed. The goal is not database novelty; it is publishing reliability and decision reliability.[4]
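
A branch-first measurement change might look like the following pseudocode workflow. The command names echo Neon's CLI, but the branch name, flags, and step ordering are an illustrative sketch, not a verified recipe:

```text
neonctl branches create --name measure-activation-v2   # fork production data
# point the app's DATABASE_URL at the branch, then:
#   1. run the Drizzle migration that adds the new activation event table
#   2. backfill a small cohort of historical events
#   3. run the reporting queries and compare against production numbers
# promote the migration to the main branch only after the numbers look sane
```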

Learning Loop

Historical Events Are Useful Only When They Stabilize Early Decisions

New growth tactics usually begin with sparse data. A relational event store helps because earlier tactics, surfaces, and cohorts can provide comparison sets instead of forcing every experiment to start from zero. The analogy is visible in data-driven inventory research: the paper notes that warm-start information from similar existing products can stabilize early-stage training and reduce variance in estimated optimal policy.[2]

The measurement equivalent is cautious transfer. A new product-intelligence surface should not inherit conclusions from an unrelated tactic, but it can inherit baselines, event definitions, and cohort structures from similar surfaces. Postgres makes that review auditable because the historical evidence remains queryable rather than trapped in a dashboard screenshot.[2]
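
One simple way to make that cautious transfer concrete is a shrinkage estimate: start at the inherited baseline and let the new surface's own data take over as its sample grows. The blending rule and the pseudo-count constant below are assumptions for illustration, not a method from the cited paper:

```typescript
// Blend an inherited baseline with the new surface's observed rate,
// weighting observed data more heavily as its sample size grows.

function warmStartBaseline(
  priorRate: number,    // baseline inherited from a similar surface
  observedRate: number, // rate measured on the new surface so far
  observedN: number,    // number of events observed on the new surface
  priorWeight = 50,     // pseudo-count controlling how fast the prior fades
): number {
  return (priorRate * priorWeight + observedRate * observedN) / (priorWeight + observedN);
}

// With little data the estimate stays near the inherited baseline...
console.log(warmStartBaseline(0.10, 0.50, 2).toFixed(3));
// ...and converges toward the observed rate as evidence accumulates.
console.log(warmStartBaseline(0.10, 0.50, 5000).toFixed(3));
```

Because both the prior and the observations live in queryable tables, the blend itself stays reproducible at review time.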

Operating Model

The Database Should Hold the Decision, Not Just the Data

The final table in a growth measurement stack should not be another raw event table. It should be a decision ledger: tactic, hypothesis, activation event, retention test, review date, observed result, and decision. That structure turns measurement from retrospective reporting into an operating system for deciding what earns more investment.

This is where Neon and Drizzle fit the growth loop together. Neon supplies serverless Postgres capabilities such as autoscaling and branching; Drizzle can keep the application schema explicit; event tables make the Wow Moment measurable; and the decision ledger records whether the tactic is kept, iterated, or killed.[6][4]

  • Keep: activation and retention both move in the review window.
  • Iterate: activation appears but retention does not move.
  • Kill: neither activation nor retention clears the documented threshold.
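
The three rules above can be written down directly as the function that fills the decision column of the ledger. This is a sketch; the field names are assumptions, and the rules as stated leave the retention-without-activation case open, which this version folds into "kill":

```typescript
// Hypothetical decision function mirroring the keep/iterate/kill rules.
// "Moved" means the metric cleared its documented threshold in the window.

type Decision = "keep" | "iterate" | "kill";

interface ReviewResult {
  activationMoved: boolean;
  retentionMoved: boolean;
}

function decide(r: ReviewResult): Decision {
  if (r.activationMoved && r.retentionMoved) return "keep";    // both moved
  if (r.activationMoved && !r.retentionMoved) return "iterate"; // activation only
  return "kill"; // neither moved (retention-only also lands here)
}

console.log(decide({ activationMoved: true, retentionMoved: false }));
```

Storing the output of a function like this, rather than a prose verdict, is what makes the ledger comparable across tactics.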

Recommendation

Build the Measurement Ledger as a First-Class Product Surface

Thothy should treat its Neon Postgres schema as the source of truth for growth decisions: Drizzle defines the contract, event tables capture activation and retention, branches test measurement changes safely, and a decision ledger records whether each tactic deserves more distribution.

Sources

[1] github.com: neondatabase/neon. "Neon: Serverless Postgres. We separated storage and compute to offer autoscaling, code-like database branching, and scale to zero."
[2] arXiv:2501.08109: "Data-driven inventory management for new products: An …". Warm-start information from the demand data of existing similar products can stabilize early-stage training and reduce the variance of the estimated optimal policy.
[3] arXiv:2603.29247: "MemRerank: Preference Memory for Personalized Product …". Naively appending raw purchase history to prompts is often ineffective due to noise, length, and relevance mismatch.
[4] neon.com: "Branching - Neon Docs". Branch data for development, testing, and CI/CD workflows.
[5] arXiv:2510.11604: "Explainability, risk modeling, and segmentation based customer churn …". In online retail, customer acquisition typically incurs higher costs than retention, while opaque churn models limit insight into the determinants of attrition.
[6] neon.com: "Neon documentation - Neon Docs". Serverless Postgres with autoscaling, branching, and instant restore.
[7] encore.dev: "Neon Serverless Postgres Guide for TypeScript Backends in 2026 - Encore". Setup to production, including branching, connection pooling, and Encore.ts integration.
[8] jusdb.com: "Neon Serverless PostgreSQL" (JusDB Blog). Separated compute and storage; database branching uses copy-on-write to create instant, full-fidelity clones.