<!--
Full-page Markdown export (rendered HTML → GFM).
Source: https://neotoma.io/schema-management
Generated: 2026-05-04T09:52:25.275Z
-->
# Schema management
Schema constraints are a core invariant: malformed writes should fail at store time, not silently degrade state quality. This page covers practical schema workflows for users new to Neotoma.
## List and inspect schema types
```bash
# List known entity types
neotoma schemas list

# Inspect one schema
neotoma schemas get contact
```
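For orientation, a schema definition typically pairs each field with a type and a required flag. Below is a hypothetical `contact` schema in that spirit; this is an illustrative shape only, and the actual output of `neotoma schemas get` may differ:

```json
{
  "entity_type": "contact",
  "fields": {
    "name":  {"type": "string", "required": true},
    "email": {"type": "string", "required": true},
    "phone": {"type": "string", "required": false}
  }
}
```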
## Store data with schema validation
Store operations validate payloads against the target schema. If required fields are missing or have unexpected types, the write is rejected from structured state; the original payload is preserved, along with a warning, in the raw fragments layer, so no data is silently lost or misclassified.
```bash
# Valid write
neotoma store --json='[{"entity_type":"contact","name":"Ana Rivera","email":"ana@acme.com"}]'

# Invalid write (example: wrong type for age)
neotoma store --json='[{"entity_type":"person","name":"Ana Rivera","age":"thirty"}]'
```
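The validation step can be sketched as follows. This is an illustrative model only, not Neotoma's implementation: the `SCHEMAS` table, its shape, and the `validate` helper are assumptions for this example.

```python
# Illustrative sketch of store-time schema validation.
# The schema registry shape and error strings are assumptions,
# not Neotoma's actual implementation.

SCHEMAS = {
    "contact": {
        "required": {"name": str, "email": str},
        "optional": {"phone": str},
    },
}

def validate(observation: dict) -> list[str]:
    """Return a list of validation errors (empty if the write is valid)."""
    schema = SCHEMAS.get(observation.get("entity_type", ""))
    if schema is None:
        return ["unknown entity_type"]
    errors = []
    for field, ftype in schema["required"].items():
        if field not in observation:
            errors.append(f"missing required field: {field}")
        elif not isinstance(observation[field], ftype):
            errors.append(f"wrong type for {field}: expected {ftype.__name__}")
    return errors

valid = {"entity_type": "contact", "name": "Ana Rivera", "email": "ana@acme.com"}
invalid = {"entity_type": "contact", "name": "Ana Rivera"}  # missing email
print(validate(valid))    # []
print(validate(invalid))  # ['missing required field: email']
```

A real implementation would also reject unknown fields or route them to candidate analysis, but the shape of the check is the same: validate before accepting the write into structured state.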
## Evolve schemas incrementally
Add new fields without breaking existing workflows. For larger changes, analyze candidates first, then register updates intentionally.
```bash
# Analyze candidate fields from observed data
neotoma schemas analyze-candidates --entity-type contact

# Recommend schema updates
neotoma schemas recommendations --entity-type contact

# Register or update schema
neotoma schemas register --file ./contact_schema.json
```
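The additive-evolution discipline can be sketched as a pure function: candidate fields enter the schema as optional, existing fields are never removed or renamed, and the version is bumped. The schema shape and the `evolve` helper are assumptions for illustration, not Neotoma internals.

```python
# Illustrative sketch of additive schema evolution.
# The schema dict shape is an assumption for this example.

def evolve(schema: dict, candidates: dict) -> dict:
    """Return a new schema version with candidate fields added as optional.

    Existing fields are never removed or renamed, so payloads that
    validated against the old version keep validating against the new one.
    """
    updated = {
        "version": schema["version"] + 1,
        "required": dict(schema["required"]),
        "optional": dict(schema["optional"]),
    }
    for field, ftype in candidates.items():
        # Skip fields the schema already knows about.
        if field not in updated["required"] and field not in updated["optional"]:
            updated["optional"][field] = ftype
    return updated

contact_v1 = {"version": 1, "required": {"name": "string", "email": "string"}, "optional": {}}
contact_v2 = evolve(contact_v1, {"phone": "string", "name": "string"})
print(contact_v2["version"])           # 2
print(sorted(contact_v2["optional"]))  # ['phone']
```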
## Operational guidance
- Prefer additive schema changes over destructive renames.
- Use versioned changelogs for schema edits with rationale.
- Test representative payloads before changing production schema.
- Treat schema updates as state-model changes, not UI tweaks.
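The changelog guidance above can be made concrete with an entry like the following; this is a suggested shape, not a format Neotoma prescribes:

```json
{
  "schema": "contact",
  "version": 2,
  "date": "2026-05-01",
  "change": "add optional field: phone",
  "rationale": "phone numbers frequently appear in stored contact observations"
}
```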
See [schema constraints](/memory-guarantees#schema-constraints), [data model walkthrough](/data-model), and [architecture](/architecture).