Pillar 3 · Decoding
We are not translating. We are mapping.
Decoding, in the strong sense — converting crow vocalizations into human-language glosses with semantic precision — isn't something the field has done. It's not on the horizon for v0, v1, or v2 of any project we know of.
What we have done is build a high-resolution map. Crow voice is now visible at the granularity of acoustic features per call, individual signatures within call type, and behavioral context per cluster. That isn't translation. It's a new vocabulary for asking better questions, and it has already answered several.
Four sub-pages, four kinds of new visibility. Each is intellectually honest about what has been demonstrated, what is merely suggestive, and what is still science fiction.
Sub-page · flagship
What we can decode now
The four features a self-supervised model extracts from a single half-second of crow voice — pitch contour, harmonic emphasis, duration, spectral grain — and what each tells us.
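To make the four feature names concrete, here is a minimal sketch of crude signal-processing proxies for them, computed on a synthetic half-second harmonic "call". These are illustrative stand-ins, not the self-supervised model's learned representations; every function name, parameter, and threshold below is invented for the demo.

```python
import numpy as np

SR = 22050  # assumed sample rate (Hz)

def toy_call(sr=SR, dur=0.5, f0_start=1200.0, f0_end=900.0):
    """Synthesize a half-second, downward-swept harmonic tone as a stand-in call."""
    t = np.arange(int(sr * dur)) / sr
    f0 = np.linspace(f0_start, f0_end, t.size)            # falling pitch contour
    phase = 2 * np.pi * np.cumsum(f0) / sr
    wave = sum((0.5 ** k) * np.sin(k * phase) for k in range(1, 4))
    return wave * np.hanning(t.size)                      # fade in/out

def features(wave, sr=SR, frame=1024, hop=512):
    """Crude proxies for the four features named in the text."""
    frames = [wave[i:i + frame] for i in range(0, wave.size - frame, hop)]
    pitch = []
    for fr in frames:
        mag = np.abs(np.fft.rfft(fr * np.hanning(frame)))
        pitch.append(np.argmax(mag) * sr / frame)         # peak frequency per frame
    spec = np.abs(np.fft.rfft(wave)) + 1e-12
    freqs = np.fft.rfftfreq(wave.size, 1 / sr)
    emphasis = spec[freqs > 2000].sum() / spec.sum()      # high-band energy share
    env = np.abs(wave)
    duration = np.sum(env > 0.05 * env.max()) / sr        # voiced duration, seconds
    grain = np.exp(np.mean(np.log(spec))) / np.mean(spec) # spectral flatness as "grain"
    return np.array(pitch), emphasis, duration, grain

pitch, emph, dur, grain = features(toy_call())
print(f"pitch contour: {pitch[0]:.0f} Hz -> {pitch[-1]:.0f} Hz")
print(f"harmonic emphasis {emph:.2f}, duration {dur:.2f} s, grain {grain:.3f}")
```

On this synthetic sweep the pitch contour falls, emphasis and grain land between 0 and 1, and the measured duration is a bit under the full half second because the fade-in and fade-out drop below the amplitude threshold.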
Sub-page
Contextual clustering
How latent coordinates correlate with behavior. The 2026 carrion-crow preprint as the cleanest current example.
Sub-page
Individuality & dialect
Caller-identity inference, family-group acoustic signatures, the dialect hypothesis — what's defensible and what's overreach.
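Caller-identity inference reduces to a per-call classification problem: if each bird has a stable acoustic signature inside a call type, held-out calls should be attributable to their caller. A minimal leave-one-out nearest-centroid sketch, on wholly synthetic signatures (the feature dimensionality, bird count, and noise level are all invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic per-call feature vectors: three birds, each with a stable 4-D
# acoustic "signature" plus per-call noise (all values invented for the demo).
signatures = rng.normal(0.0, 1.0, (3, 4))
calls = np.concatenate([sig + rng.normal(0.0, 0.2, (20, 4)) for sig in signatures])
bird_id = np.repeat(np.arange(3), 20)

def identify(call_idx):
    """Nearest-centroid caller ID, leaving the query call out of the centroids."""
    keep = np.arange(len(calls)) != call_idx
    centroids = np.array([calls[keep & (bird_id == b)].mean(0) for b in range(3)])
    return np.argmin(((centroids - calls[call_idx]) ** 2).sum(1))

correct = sum(identify(i) == bird_id[i] for i in range(len(calls)))
accuracy = correct / len(calls)
print(f"leave-one-out caller-ID accuracy: {accuracy:.2f}")
```

The defensible version of the claim is exactly this shape of result on real calls; the overreach begins when family-level or regional "dialect" is asserted from the same pipeline without controlling for shared recording conditions.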
Sub-page
Combinatorial evidence
Sequence models on call orderings, statistical regularities, and the open question of crow 'syntax'. Honest about how thin the behavioral confirmation currently is.