Plinth v0.1.0 — first SDK release
2026-05-01
If you run a fleet of internal applications — change requests, audit dashboards, HR tooling, account-access workflows, the long tail — you’ve watched every module re-implement the same plumbing: identity, authorization, audit, observability, deployment. You’ve also watched every module inherit the same gaps: a session secret committed in .env.example, an authorization layer that fails open in dev mode, no real healthcheck, no error boundaries, no centralised logs.
Plinth is the platform foundation those modules should have stood on from day one. Today is v0.1.0 — the first stable release. The Go and TypeScript SDKs are on public registries you can install from right now; the substrate Helm chart you build from source.
Six commitments
The manifesto is the contract:
- Zero standing trust.
- GitOps everything.
- Immutable infrastructure.
- Durable workflows.
- Evidence by default.
- Open source first.
If those don’t match how you’d build internal tooling at a regulated org, save yourself the install. If they do — keep reading.
What v0.1.0 ships
| Component | What it is | Install |
|---|---|---|
| sdk-go | 7 Go packages: fail-closed authz, non-blocking audit (CloudEvents 1.0), OTel init, RFC 7807 errors, paginate, vault, health | go get github.com/plinth-dev/sdk-go/<pkg>@v0.1.0 |
| sdk-ts | 7 TypeScript packages: env, api-client, authz, authz-react, forms, otel-web, tables | pnpm add @plinth-dev/<pkg> |
| starter-api | Go 1.25 + chi + pgx, every SDK wired into one working service | git clone |
| starter-web | Next.js 16 + React 19, every TS SDK wired into one working app | git clone |
| cli | plinth new <name> to scaffold both starters with renamed identifiers | go install github.com/plinth-dev/cli/cmd/plinth@latest |
| scaffolder | Backstage software template + custom action emitting the same output | pnpm add @plinth-dev/scaffolder-actions |
| platform | Helm umbrella chart — CloudNativePG + Cerbos + OTel Collector | git clone && helm install |
| example-access-requests | Working internal tool: temp-prod-access requests with approver workflow | git clone |
Total: nine repos, MIT-licensed, tagged v0.1.x, CI-green on main. SDKs on public registries; CLI bumped to v0.1.1 for an early bug-fix; platform chart from source today.
API contracts are frozen for the 0.x line. Breaking changes batch into 0.2.0; v1.0 locks the API for the life of the major version.
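To make "fail-closed authz" concrete: the idea is that when the policy engine errors out or is unreachable, the request is denied rather than waved through. A minimal sketch in plain Go, under stated assumptions (Decision, check, and Authorize are illustrative names, not the actual sdk-go API):

```go
package main

import (
	"errors"
	"fmt"
)

// Decision is a hypothetical policy result; the real SDK type may differ.
type Decision struct {
	Allowed bool
}

// check stands in for a call to a policy engine (e.g. Cerbos).
// Any error means "no decision was reached".
func check(principal, action, resource string) (Decision, error) {
	if principal == "" {
		return Decision{}, errors.New("policy engine unreachable")
	}
	// Toy policy: only admins may approve.
	return Decision{Allowed: principal == "admin" && action == "approve"}, nil
}

// Authorize fails closed: an error from the policy engine denies the
// request instead of letting it through.
func Authorize(principal, action, resource string) bool {
	d, err := check(principal, action, resource)
	if err != nil {
		return false // fail closed: no decision means deny
	}
	return d.Allowed
}

func main() {
	fmt.Println(Authorize("admin", "approve", "req/42")) // true
	fmt.Println(Authorize("guest", "approve", "req/42")) // false
	fmt.Println(Authorize("", "approve", "req/42"))      // false: engine error, so deny
}
```

The inverse (returning true or skipping the check when the engine errors) is the "fails open in dev mode" gap the intro describes.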
Try it in two minutes
```sh
# Scaffold a fully-wired Plinth module pair
go install github.com/plinth-dev/cli/cmd/plinth@latest
plinth new billing --module-path github.com/acme/billing-api
```

```sh
# Or read a real working example end-to-end
git clone https://github.com/plinth-dev/example-access-requests
```

The stand-it-up walkthrough takes you from kind create cluster to a deployed module. The access-requests example is a real internal tool you can clone and click through in 10 minutes.
What’s deliberately not in v0.1.0
The platform chart’s a walking skeleton — Postgres + Cerbos + OTel only. The big substrate pieces are roadmapped, not shipped:
- Identity — Vault HA Raft, Authentik, Ory Oathkeeper, cert-manager
- Data — MinIO, NATS JetStream, Redis Sentinel, OpenSearch
- Observability — SigNoz (ClickHouse-backed), kube-prometheus-stack
- Security — Wazuh, Falco, Trivy Operator, Kyverno
- GitOps + DevX — Argo CD, Argo Rollouts, Backstage
- Bootstrap — Talos manifests, Argo app-of-apps, OCI publish to ghcr.io
- Examples — feature-flag dashboard, invoice approval, on-call directory
These land incrementally. The shape is fixed in the architecture doc. What changes between now and v1.0 is how much is wired up, not what the foundation looks like.
Why v0.1.0 instead of v1.0
The SDK contracts are stable — what’s documented in the 14 ADRs at /sdk/ is the API for the 0.x line. Library code rarely needs breaking changes; it needs depth. v0.2, v0.3, and so on add features without breaking signatures.
The substrate’s another story. The platform chart will gain components, profiles will harden, the Talos bootstrap will appear. v1.0 is when the whole architecture in the manifesto runs end-to-end with one helm install. Mid-2027 if I’m honest with myself.
Why this exists at all
I’ve spent fifteen years watching internal-tools teams at regulated orgs re-implement the same six commitments — badly, inconsistently, mostly at 2am the night before an audit. Every team. Every shop. The platform-engineering practice that emerged in the 2020s solved the deployment side of this; the application-foundation side stayed re-implemented per module.
Plinth is the application-foundation side. It’s the thing I’d want to inherit on my first day if a director told me “stand up an internal-tools team and ship 30 apps in 18 months.”
It’s also a bet that internal-tools work is high-leverage, undervalued, and — uniquely in this AI-everywhere moment — gets more valuable as the apps themselves get cheaper to write. If everyone can vibe-code the next access-request workflow, what differentiates a serious org’s internal tools is the substrate they sit on. The audit log nobody can disable. The authz that fails closed by default. The OTel pipeline that catches the regression three minutes after deploy.
That substrate is what Plinth is. Open source, on purpose, MIT, no open-core, no proprietary extensions. The rest is the work.
What I’d love feedback on
- Architecture critiques. Read the overview and the ADRs. Tell me what’s wrong or what I’m missing.
- Use it. Run plinth new. File issues for paper cuts.
- The example. Is it convincing as a “this is how you’d actually build it” demo? Where does it cheat?
Bugs and feedback: GitHub issues on the relevant repo. Disclosures: security@plinth.run.
— Husham