# From JSON Blob to SQLite: Fixing Chat Message Persistence
An app store review landed that cut straight to the bone: “My channel chats and direct messages sometimes disappear, even when I stay on the same node.” When a user reports data loss, everything else stops.
## The Archaeology
The original message storage was one of the earliest pieces of the codebase. It serialized every chat message — channels and DMs alike — into a single JSON array inside SharedPreferences. One key, one blob, every message the app had ever seen.
It worked fine at first. A handful of messages, quick reads, quick writes. But SharedPreferences was never designed for this. The problems accumulated quietly:
- **Global 100-message cap.** Not per conversation — across all conversations combined. Ten active channels and a few DM threads meant each conversation held maybe a dozen messages before older ones were silently discarded.
- **Missing field serialization.** The `toJson`/`fromJson` round-trip dropped `status`, `packetId`, `routingError`, and `errorMessage`. Messages survived restarts but lost their delivery state. A confirmed message came back as “unknown.”
- **Write amplification.** Every single incoming message re-serialized the entire array. On a busy mesh with 80+ messages, that meant writing tens of kilobytes of JSON on every packet.
- **No indexing.** Loading messages for channel 3 meant deserializing every message in the app, then filtering in memory.
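The write-amplification cost is easy to demonstrate. This is an illustrative sketch (Python here for brevity; the app itself is Flutter/Dart, and `BlobStore` is a made-up stand-in, not the original class), showing how the old whole-array pattern rewrites the entire blob on every append:

```python
import json

class BlobStore:
    """Stand-in for the old SharedPreferences blob pattern (hypothetical names)."""
    def __init__(self):
        self.prefs = {}        # stands in for SharedPreferences
        self.writes_bytes = 0  # cumulative bytes serialized

    def append(self, messages, msg):
        messages.append(msg)
        blob = json.dumps(messages[-100:])  # global 100-message cap
        self.prefs["messages"] = blob       # whole-array rewrite, every time
        self.writes_bytes += len(blob)

store = BlobStore()
msgs = []
for i in range(200):
    store.append(msgs, {"text": f"msg {i}", "channel": i % 10})

# Total bytes serialized keep growing with traffic, yet stored history
# never exceeds 100 messages across *all* conversations.
print(len(json.loads(store.prefs["messages"])))  # 100
```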
## The Fix
The replacement is a proper SQLite database (`messages.db`) with a schema designed around how messages actually get queried:
```sql
CREATE TABLE messages (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  conversation_key TEXT NOT NULL,
  timestamp INTEGER NOT NULL,
  from_node INTEGER NOT NULL,
  to_node INTEGER NOT NULL,
  channel INTEGER NOT NULL DEFAULT 0,
  packet_id INTEGER NOT NULL DEFAULT 0,
  text TEXT NOT NULL,
  owner INTEGER NOT NULL DEFAULT 0,
  status TEXT NOT NULL DEFAULT 'unknown',
  source TEXT NOT NULL DEFAULT 'mesh',
  routing_error TEXT,
  error_message TEXT,
  is_emoji INTEGER NOT NULL DEFAULT 0,
  rx_time INTEGER,
  rx_snr REAL,
  rx_rssi INTEGER
);
```
Three indexes cover the access patterns:
- `(conversation_key, timestamp)` — the primary query path
- `(from_node)` — node-centric lookups
- `(packet_id)` — deduplication checks
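A minimal sketch of how those indexes pay off (Python's `sqlite3` here for illustration; the index names are assumptions, not the app's actual identifiers). `EXPLAIN QUERY PLAN` confirms the primary query path hits the conversation index instead of scanning the table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE messages (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  conversation_key TEXT NOT NULL,
  timestamp INTEGER NOT NULL,
  from_node INTEGER NOT NULL,
  packet_id INTEGER NOT NULL DEFAULT 0,
  text TEXT NOT NULL
);
-- Index names are illustrative.
CREATE INDEX idx_conversation ON messages(conversation_key, timestamp);
CREATE INDEX idx_from_node ON messages(from_node);
CREATE INDEX idx_packet_id ON messages(packet_id);
""")

plan = db.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT text FROM messages "
    "WHERE conversation_key = ? ORDER BY timestamp",
    ("channel:3",),
).fetchall()
print(plan[0][3])  # SEARCH ... USING INDEX idx_conversation ...
```

The `(conversation_key, timestamp)` composite also satisfies the `ORDER BY`, so the rows come back already sorted with no extra sort step.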
The `conversation_key` is the critical design choice. Channels get `channel:<n>`; DMs get `dm:<lower>:<higher>`, with node numbers sorted so the same conversation always resolves to the same key regardless of sender/receiver perspective.
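The key scheme can be sketched in a few lines (Python for illustration; the channel-vs-DM test and function name are assumptions based on the description above):

```python
BROADCAST = 0xFFFFFFFF

def conversation_key(channel: int, from_node: int, to_node: int) -> str:
    """Illustrative sketch of the key scheme described above."""
    if channel > 0 or to_node == BROADCAST:
        return f"channel:{channel}"
    # Sort node numbers so both perspectives yield the same key.
    lo, hi = sorted((from_node, to_node))
    return f"dm:{lo}:{hi}"

# The same DM resolves to one key from either end:
print(conversation_key(0, 1234, 5678))  # dm:1234:5678
print(conversation_key(0, 5678, 1234))  # dm:1234:5678
print(conversation_key(3, 1234, BROADCAST))  # channel:3
```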
Retention is per-conversation: 500 messages each. A busy channel no longer evicts messages from quiet DM threads.
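Per-conversation retention falls out of the key naturally: prune each conversation independently, keeping its newest rows. A sketch (Python `sqlite3` for illustration; the helper name and exact query are assumptions):

```python
import sqlite3

RETENTION = 500  # per conversation

def prune(db: sqlite3.Connection, key: str, keep: int = RETENTION) -> None:
    """Delete everything in one conversation except its newest `keep` rows."""
    db.execute("""
        DELETE FROM messages
        WHERE conversation_key = ?
          AND id NOT IN (
            SELECT id FROM messages
            WHERE conversation_key = ?
            ORDER BY timestamp DESC, id DESC
            LIMIT ?
          )
    """, (key, key, keep))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY AUTOINCREMENT, "
           "conversation_key TEXT NOT NULL, timestamp INTEGER NOT NULL)")
for t in range(600):
    db.execute("INSERT INTO messages (conversation_key, timestamp) "
               "VALUES (?, ?)", ("channel:3", t))
prune(db, "channel:3")
print(db.execute("SELECT COUNT(*) FROM messages").fetchone()[0])  # 500
```

Because the `WHERE conversation_key = ?` predicate scopes the delete, a flood in one channel can never touch another conversation's rows.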
## The Migration
On first launch after the update, `MessageDatabase.init()` checks for the legacy SharedPreferences key. If it finds data, it deserializes the old JSON array, inserts each message into SQLite with a computed `conversation_key`, then deletes the SharedPreferences entry. One-shot, no user interaction needed.
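The shape of that one-shot migration, sketched in Python (the `prefs` dict stands in for SharedPreferences, and `legacy_key` is an assumed name, not the app's actual key):

```python
import json
import sqlite3

def migrate_legacy(prefs: dict, db: sqlite3.Connection,
                   legacy_key: str = "chat_messages") -> int:
    """One-shot migration sketch: blob -> rows, then delete the blob."""
    blob = prefs.get(legacy_key)
    if blob is None:
        return 0  # nothing to migrate; fresh install or already done
    moved = 0
    with db:  # single transaction: either every row lands or none do
        for m in json.loads(blob):
            key = (f"channel:{m['channel']}" if m.get("channel", 0) > 0
                   else f"dm:{min(m['from'], m['to'])}:{max(m['from'], m['to'])}")
            db.execute("INSERT INTO messages (conversation_key, timestamp, text)"
                       " VALUES (?, ?, ?)", (key, m["timestamp"], m["text"]))
            moved += 1
    del prefs[legacy_key]  # drop the blob only after a successful commit
    return moved

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, "
           "conversation_key TEXT, timestamp INTEGER, text TEXT)")
prefs = {"chat_messages": json.dumps([
    {"channel": 3, "from": 1, "to": 0xFFFFFFFF, "timestamp": 10, "text": "hi"},
    {"channel": 0, "from": 1, "to": 2, "timestamp": 11, "text": "yo"},
])}
print(migrate_legacy(prefs, db), "chat_messages" in prefs)  # 2 False
```

The ordering matters: the legacy entry is deleted only after the transaction commits, so a crash mid-migration leaves the blob intact for a retry on the next launch.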
The migration path was the part that required the most care. Users have months of chat history in that blob. Losing it during the upgrade would be exactly the kind of data loss we were trying to fix.
## A Deduplication Bug
While auditing the storage layer, a second bug surfaced in the content-based deduplication logic. When a push notification delivers a channel message, it sets the `to` field to the local node number. When the same message arrives over the mesh, `to` is `0xFFFFFFFF` (the broadcast address). The old deduplication compared all fields including `to`, so these were treated as different messages — the same text appearing twice in the chat.
The fix: for channel messages (where `channel > 0`), skip the `to` field comparison. DMs still compare it, since sender and recipient are meaningful there.
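The channel-aware comparison in sketch form (Python for illustration; the field set and function name are assumptions, not the app's actual code):

```python
BROADCAST = 0xFFFFFFFF

def is_duplicate(a: dict, b: dict) -> bool:
    """Content-based dedup: ignore `to` for channel messages, because a
    push notification addresses the local node while the mesh copy is
    addressed to broadcast."""
    fields = ["text", "from", "channel", "timestamp"]
    if a["channel"] == 0:       # DM: sender/recipient are meaningful
        fields.append("to")
    return all(a[f] == b[f] for f in fields)

push = {"text": "hi", "from": 7, "to": 42,        "channel": 3, "timestamp": 5}
mesh = {"text": "hi", "from": 7, "to": BROADCAST, "channel": 3, "timestamp": 5}
print(is_duplicate(push, mesh))  # True — same channel message, shown once
```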
## Test Infrastructure
Migrating the test suite from SharedPreferences mocks to SQLite FFI exposed its own class of problems. The `sqflite_common_ffi` package caches database instances by path, so `:memory:` databases leaked state between tests. The fix was unique file paths per test incorporating the process ID, guaranteeing isolation even across parallel test runs.
A subtler issue: `Timer.periodic` with short intervals in test configurations would fire during event queue pumps, clearing session state before assertions could check it. The solution was recognizing that Riverpod listeners fire synchronously — no pump needed between a state change and its immediate consumer.
## Numbers
| Metric | Before | After |
|---|---|---|
| Per-conversation retention | ~10–20 (global cap of 100 shared) | 500 |
| Fields preserved on restart | 12 of 17 | 17 of 17 |
| Write cost per message | Re-serialize entire array | Single row INSERT |
| Query cost for one conversation | Deserialize all, filter in memory | Indexed SELECT |
| Deduplication | Content-based, with `to`-field bug | Content-based, channel-aware |
The app store review was right. Messages were disappearing. They just were not disappearing in a way that was immediately obvious during development — it took real-world usage with multiple active conversations to hit the global cap consistently. SQLite is the correct tool for structured, queryable, per-record persistence. SharedPreferences is for preferences.