About

Music search.
Finally.

There’s no agent-ready music data layer on the internet. Every AI asked about music guesses — confidently, fluently, wrongly. Dig is the canonical music data layer that should already exist.

What this is

Dig is a structured search engine and data layer built on the full Discogs CC0 dataset — 24 million records, 2.5 million masters, 580,000 artists, and 2.3 million labels. Every entity is cross-linked: artists to releases, releases to credits, credits to labels. Click anything, follow the thread.

Agent first, human second

01

Open by default

Fully open REST API and MCP server. No keys required, no signup. Any agent, any LLM workflow — just point at it.
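Since the API is keyless, wiring it into an agent tool is a bare HTTP call. A minimal sketch, assuming a hypothetical base URL and a `/search` endpoint with `q` and `type` parameters (these names are illustrative, not Dig's documented API):

```typescript
// Build a search URL for a keyless REST API.
// The "/search" path and its parameters are assumptions for illustration.
function searchUrl(base: string, query: string, type: "artist" | "release" | "label"): string {
  const url = new URL("/search", base);
  url.searchParams.set("q", query);
  url.searchParams.set("type", type);
  return url.toString();
}

// No API key, no auth header: any agent runtime with fetch can call it.
async function search(base: string, query: string): Promise<unknown> {
  const res = await fetch(searchUrl(base, query, "release"));
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  return res.json();
}
```

The same shape works from any LLM tool-calling loop: the agent fills in `query`, the tool returns structured JSON, nothing is generated in between.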

02

Deterministic retrieval

No inference in the retrieval path. Structured data only. When an agent asks for a record, it gets the real thing — not a hallucination dressed as an answer.

03

Human search that works

Because the data’s already there, humans finally get the deep catalog search tool that should have existed years ago. Fast, mobile-first, connected.

Enrichment layers

Beyond the core Discogs catalog, Dig cross-references MusicBrainz (1.2M artist mappings, 1.8M release crosswalks), Wikidata (bios, locations, genres for 200K artists), and setlist.fm (live performance history). All enrichment is additive and source-attributed.
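One way to picture "additive and source-attributed": every enriched field carries the source it came from, and enrichment only fills gaps the core record leaves open. A sketch under those assumptions, with invented field names rather than Dig's actual schema:

```typescript
// Hypothetical shape: each field records which dataset supplied it.
type Sourced<T> = { value: T; source: "discogs" | "musicbrainz" | "wikidata" | "setlist.fm" };

interface ArtistProfile {
  name: Sourced<string>;      // always from the core Discogs record
  bio?: Sourced<string>;      // e.g. filled in from Wikidata
  location?: Sourced<string>;
}

// Additive merge: core fields always win; enrichment never overwrites them.
function enrich(core: ArtistProfile, extra: Partial<ArtistProfile>): ArtistProfile {
  return {
    name: core.name,
    bio: core.bio ?? extra.bio,
    location: core.location ?? extra.location,
  };
}
```

The payoff of the per-field `source` tag is that a consumer can always trace (or filter out) any claim by origin.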

Stack

TypeScript, Postgres, Kysely, Fastify, Next.js. Hosted on Fly.io. Full-text search via Postgres FTS + pg_trgm. Redis for caching. Cover art from the Cover Art Archive. No external AI services in the data path.
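Pairing Postgres FTS with pg_trgm typically means stemmed full-text matches ranked by `ts_rank`, with trigram similarity as a typo-tolerant fallback. A rough sketch of such a query, using invented table and column names rather than Dig's real schema:

```typescript
// Illustrative query only: "releases" and "search_vec" are hypothetical.
// websearch_to_tsquery handles the stemmed full-text match;
// pg_trgm's % operator and similarity() catch near-miss spellings.
const fuzzySearchSql = `
  SELECT id, title
  FROM releases
  WHERE search_vec @@ websearch_to_tsquery('english', $1)
     OR title % $1
  ORDER BY ts_rank(search_vec, websearch_to_tsquery('english', $1)) DESC,
           similarity(title, $1) DESC
  LIMIT 25;
`;
```

In practice the trigram leg wants a GIN index with `gin_trgm_ops` on `title` to stay fast at catalog scale.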

Status

Early stage, active alpha. Building in public. The API, MCP server, and web frontend are all live. See “how we built it” for the full build log.

Links

Built by b1rdmania. The music tool agents reach for. And the one humans actually wanted.