# btclock-ws-nostr-publish-go
Go WebSocket server that fans out Bitcoin price, block-height, and mempool-fee events to BTClock devices, and optionally republishes them to Nostr relays. Built to run behind HAProxy at 10k+ concurrent connections and to fit on a Raspberry Pi 4 / 5.
## Features
- v1 JSON flat-broadcast publisher (`/ws`, `/api/v1/ws`)
- v2 MessagePack subscription publisher (`/api/v2/ws`) with per-event and per-currency subscriber indexes, an unknown-currency error path, integer fee dedup, and a decimal fee2 every tick
- Pluggable upstream drivers: `mempool-ws`, `bitcoind-zmq` (requires cgo + libzmq), `bitcoind-rpc` (polling)
- Driver supervisor with primary/secondary failover (grace windows 30s block / 10s fee)
- Pluggable event bus: in-process (default) or Valkey Streams
- Nostr parameterized-replaceable publisher (kind 30078, one d-tag per datum) — no kind-5 cleanup, self-superseding at the relay
- Prometheus `/metrics` on a sidecar port; `/healthz` on the main port
- Backpressure-aware gorilla/websocket writes with a bounded outbound channel per client (cap 256); overflow closes the socket (sketched after this list)
- Graceful shutdown on SIGTERM/SIGINT
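
A minimal sketch of the backpressure pattern from the bullet above: a bounded per-client channel whose overflow closes the socket instead of blocking the broadcaster. The `client`, `trySend`, and `writeLoop` names are illustrative, not the repo's actual API:

```go
package ws // illustrative package name

import (
	"time"

	"github.com/gorilla/websocket"
)

// client is a hypothetical per-connection handle; the real type lives
// under internal/.
type client struct {
	conn *websocket.Conn
	out  chan []byte // bounded: make(chan []byte, 256)
}

// trySend enqueues without blocking. A full queue means the peer can't
// keep up, so the socket is closed rather than stalling the broadcaster.
func (c *client) trySend(msg []byte) bool {
	select {
	case c.out <- msg:
		return true
	default:
		c.conn.Close()
		return false
	}
}

// writeLoop drains the queue using a per-frame write deadline.
func (c *client) writeLoop() {
	for msg := range c.out {
		c.conn.SetWriteDeadline(time.Now().Add(10 * time.Second))
		if err := c.conn.WriteMessage(websocket.TextMessage, msg); err != nil {
			return
		}
	}
}
```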
## Environment variables
| Variable | Default | Purpose |
|---|---|---|
| `LISTEN_ADDR` | `:8080` | Main HTTP listen address |
| `METRICS_ADDR` | `:9090` | Prometheus metrics sidecar address |
| `UPSTREAM` | `mempool-ws` | `mempool-ws` \| `bitcoind-zmq` \| `bitcoind-rpc` |
| `UPSTREAM_FALLBACK` | (empty) | Optional secondary driver |
| `MEMPOOL_INSTANCE` | `mempool.space` | Mempool WS hostname |
| `BITCOIND_RPC` | `127.0.0.1:8332` | bitcoind RPC host:port |
| `BITCOIND_USER` / `BITCOIND_PASS` | (empty) | RPC credentials |
| `BITCOIND_ZMQ` | `tcp://127.0.0.1:28332` | bitcoind ZMQ endpoint |
| `BROKER` | `inproc` | `inproc` \| `valkey` |
| `BROKER_URL` | `localhost:6379` | Valkey host:port |
| `PUBLISH_TO_NOSTR` | `false` | Enable Nostr publishing |
| `NOSTR_PRIV` | (empty) | Nostr private key (hex or nsec) |
| `NOSTR_RELAYS` | `wss://relay.damus.io` | Comma-separated relay URLs |
| `LOGLEVEL` | `info` | zerolog level |
| `ENABLE_PPROF` | `false` | Mount `/debug/pprof/*` on the metrics sidecar (sensitive — leaks stack traces) |
A `.env` file in the working directory is loaded automatically via godotenv.
## HTTP timeout behaviour
The main listener sets `ReadHeaderTimeout=10s`, `IdleTimeout=120s`, and
`MaxHeaderBytes=16384` to defend against Slowloris-style attacks without
killing long-lived WebSocket connections. There is no `WriteTimeout` on
the main listener: gorilla's write loop uses per-frame deadlines, so
connections stay alive through idle periods. The metrics sidecar is
plain HTTP and additionally carries a `WriteTimeout=30s`.

After a successful Upgrade the server sets `SetKeepAlive(true)`,
`SetKeepAlivePeriod(30s)`, and `SetNoDelay(true)` on the underlying TCP
conn so dead peers are detected promptly and frames aren't held for
Nagle.
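
A rough sketch of how that maps onto stdlib and gorilla/websocket calls (handler wiring elided; illustrative, not the repo's exact code):

```go
import (
	"net"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

var upgrader websocket.Upgrader

func newServer(handler http.Handler) *http.Server {
	return &http.Server{
		Addr:              ":8080",
		Handler:           handler,
		ReadHeaderTimeout: 10 * time.Second, // Slowloris defence
		IdleTimeout:       120 * time.Second,
		MaxHeaderBytes:    16384,
		// No WriteTimeout: gorilla applies per-frame deadlines instead,
		// so idle WebSocket connections are not killed.
	}
}

func handleWS(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	// Post-upgrade TCP tuning on the raw connection.
	if tcp, ok := conn.UnderlyingConn().(*net.TCPConn); ok {
		tcp.SetKeepAlive(true)                   // detect dead peers
		tcp.SetKeepAlivePeriod(30 * time.Second) // ... promptly
		tcp.SetNoDelay(true)                     // don't hold frames for Nagle
	}
	// hand off to read/write loops ...
}
```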
## Build and run
```sh
make build        # → bin/ws-node (host arch)
make test         # go test ./...
make vet          # go vet ./...
make lint         # golangci-lint if present, else go vet
make cross-arm64  # statically-linked Pi 4 / 5 build, nozmq
```
Run locally against public mempool.space:

```sh
./bin/ws-node
```

Run against a local bitcoind with RPC:

```sh
UPSTREAM=bitcoind-rpc BITCOIND_USER=u BITCOIND_PASS=p ./bin/ws-node
```

Run with ZMQ primary and RPC fallback:

```sh
UPSTREAM=bitcoind-zmq UPSTREAM_FALLBACK=bitcoind-rpc \
  BITCOIND_USER=u BITCOIND_PASS=p BITCOIND_ZMQ=tcp://127.0.0.1:28332 \
  ./bin/ws-node
```

Run behind HAProxy with a Valkey bus (multi-node fan-out):

```sh
BROKER=valkey BROKER_URL=valkey.internal:6379 ./bin/ws-node
```
## Driver selection
| Driver | Requires | Best for |
|---|---|---|
| `mempool-ws` | network egress to mempool.space or a self-hosted instance | hobbyist deployments; no bitcoind needed |
| `bitcoind-zmq` | local bitcoind with zmqpub enabled, cgo + libzmq | lowest latency, full self-hosted path |
| `bitcoind-rpc` | local bitcoind with RPC enabled | fallback; polling cadence 2s |
The ZMQ driver subscribes to `hashblock` + `sequence`. Height is read via
`getblockcount` RPC on every `hashblock` notification. Fee estimation
computes a rolling median (sat/vB) over the top-fee transactions that
would fit in roughly the next block, taken from `getrawmempool true` —
matching `DataStorage.lastMedianFee` semantics from the mempool-WS driver
so a ZMQ → mempool-ws failover is wire-transparent. A `sequence` event
debounces the recomputation at 2s by default (override via
`MempoolRefreshInterval` on the struct) to avoid RPC-storming a busy
node. If `getrawmempool` fails, the driver falls back to
`estimatesmartfee` so the feed doesn't go silent.
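
A minimal sketch of that next-block median, assuming entries decoded from `getrawmempool true` (the field subset and helper names are illustrative, not the driver's actual code):

```go
import "sort"

// mempoolEntry holds the subset of `getrawmempool true` fields needed here.
type mempoolEntry struct {
	VSize int64 `json:"vsize"`
	Fees  struct {
		Base float64 `json:"base"` // BTC
	} `json:"fees"`
}

// nextBlockMedianFee returns the median fee rate (sat/vB) across the
// highest-paying transactions that fit in roughly one block (~1 MvB).
func nextBlockMedianFee(entries map[string]mempoolEntry) float64 {
	type tx struct {
		rate  float64
		vsize int64
	}
	txs := make([]tx, 0, len(entries))
	for _, e := range entries {
		if e.VSize == 0 {
			continue
		}
		txs = append(txs, tx{rate: e.Fees.Base * 1e8 / float64(e.VSize), vsize: e.VSize})
	}
	// Highest fee rate first, then take ~one block's worth of vbytes.
	sort.Slice(txs, func(i, j int) bool { return txs[i].rate > txs[j].rate })
	var cum int64
	var rates []float64
	for _, t := range txs {
		cum += t.vsize
		if cum > 1_000_000 {
			break
		}
		rates = append(rates, t.rate)
	}
	if len(rates) == 0 {
		return 0 // caller falls back to estimatesmartfee
	}
	return rates[len(rates)/2] // slice is sorted, so the middle element is the median
}
```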
## Valkey setup
Minimal single-instance Valkey is enough for a single-host multi-ws-node deployment. For HA, use Valkey/Redis Sentinel or Cluster.

The bus writes `MSET btclock:price:<CCY>` per price update and `XADD btclock:events MAXLEN ~ 10000 *` per event. Consumers start from the
stream tail (`$`) on connect; `XREAD BLOCK 5s` drives liveness. On trim-past
the consumer re-snapshots from the tail and continues — see
`internal/bus/valkey.go`.
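
For orientation, the producer/consumer shape sketched with go-redis v9 (the real implementation is `internal/bus/valkey.go`; this is illustrative only):

```go
import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

func publish(ctx context.Context, rdb *redis.Client, payload map[string]interface{}) error {
	// XADD btclock:events MAXLEN ~ 10000 *
	return rdb.XAdd(ctx, &redis.XAddArgs{
		Stream: "btclock:events",
		MaxLen: 10000,
		Approx: true, // "MAXLEN ~": approximate trimming
		Values: payload,
	}).Err()
}

func consume(ctx context.Context, rdb *redis.Client, handle func(redis.XMessage)) error {
	lastID := "$" // start from the stream tail on connect
	for {
		// XREAD BLOCK 5s drives liveness.
		res, err := rdb.XRead(ctx, &redis.XReadArgs{
			Streams: []string{"btclock:events", lastID},
			Block:   5 * time.Second,
		}).Result()
		if err == redis.Nil {
			continue // no new entries within the block window
		}
		if err != nil {
			return err
		}
		for _, msg := range res[0].Messages {
			handle(msg)
			lastID = msg.ID
		}
	}
}
```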
Integration tests against a live broker:

```sh
REDIS_URL=localhost:6379 go test ./tests/... -run TestValkeyBus
```
## Raspberry Pi deployment
```sh
make cross-arm64
scp bin/ws-node-arm64 pi@btclock:/usr/local/bin/ws-node
ssh pi@btclock 'sudo systemctl restart ws-node'
```
A sample systemd unit:
```ini
[Unit]
Description=BTClock WS node
After=network-online.target

[Service]
Environment=LISTEN_ADDR=:8080 METRICS_ADDR=:9090
Environment=UPSTREAM=mempool-ws
ExecStart=/usr/local/bin/ws-node
Restart=always
RestartSec=2
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Raise `LimitNOFILE` — at 10k clients each connection needs one fd. Kernel
tuning (`net.ipv4.tcp_mem`, `net.core.somaxconn`) may also be needed at
that scale.
The ZMQ driver needs cgo + libzmq at both build and run time; cross-building
from macOS to linux/arm64 with cgo enabled requires a cross toolchain.
The `cross-arm64` target intentionally uses `-tags nozmq` for a clean
static build; use `mempool-ws` or `bitcoind-rpc` in that scenario, or
build on the target.
## Metrics
Exposed at `http://$METRICS_ADDR/metrics`. Scrape each ws-node
independently; Grafana sums.
## Nostr
When `PUBLISH_TO_NOSTR=true` and `NOSTR_PRIV` is set, every price update,
new block, and median-fee change is published to the configured relays
as a parameterized-replaceable event (kind 30078, one `d` tag per datum).
Relays supersede the previous event for that `d` tag automatically — no
delete-event loop is needed.
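
Roughly what one such publish looks like with go-nostr (the `d`-tag value and content shape here are made up for illustration; `Publish` returning an error matches recent go-nostr versions):

```go
import (
	"context"
	"fmt"

	"github.com/nbd-wtf/go-nostr"
)

func publishPrice(ctx context.Context, relay *nostr.Relay, privHex string, usd float64) error {
	ev := nostr.Event{
		Kind:      30078, // parameterized-replaceable application data
		CreatedAt: nostr.Now(),
		Tags:      nostr.Tags{{"d", "btclock:price:USD"}}, // hypothetical d-tag scheme
		Content:   fmt.Sprintf(`{"price":%g}`, usd),       // hypothetical payload
	}
	if err := ev.Sign(privHex); err != nil { // sets pubkey, id, sig
		return err
	}
	// The relay replaces any earlier kind-30078 event from this pubkey
	// with the same d tag; no kind-5 delete needed.
	return relay.Publish(ctx, ev)
}
```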
## Deployment
### File descriptors
10k clients + upstream + RPC + metrics plus headroom → aim for at least
`ulimit -n 65536`. The process logs the current soft limit on startup
and warns if it's below that target. Raise via:
```sh
# systemd
LimitNOFILE=65536

# or interactively
ulimit -n 65536
```
### HAProxy
Terminate client TCP on HAProxy, upgrade to WS, and load-balance across N ws-node backends. Health-check on the metrics sidecar port (not the WS port) to avoid flapping on a slow tick.
```
frontend ws_in
    bind *:443 ssl crt /etc/haproxy/btclock.pem alpn h2,http/1.1
    mode http
    default_backend ws_nodes

backend ws_nodes
    mode http
    balance leastconn
    timeout tunnel 1h
    timeout client 1h
    timeout server 1h
    option httpchk GET /healthz
    http-check expect status 200
    # Health check goes to the metrics sidecar (:9090), not the WS port.
    server ws1 10.0.0.11:8080 check port 9090 inter 2s fall 2 rise 2
    server ws2 10.0.0.12:8080 check port 9090 inter 2s fall 2 rise 2
    server ws3 10.0.0.13:8080 check port 9090 inter 2s fall 2 rise 2
```
`/healthz` returns 503 with a reason string (upstream down / broker
stale) when the node shouldn't serve traffic — HAProxy drains the
backend automatically.
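
The handler side of that contract might look like this (the `unhealthyReason` helper is hypothetical; the actual check lives in the server code):

```go
import "net/http"

// unhealthyReason is a hypothetical helper returning "" when healthy, or a
// short reason such as "upstream down" / "broker stale" otherwise.
func registerHealthz(mux *http.ServeMux, unhealthyReason func() string) {
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if reason := unhealthyReason(); reason != "" {
			// 503 + reason string: HAProxy marks the backend down and drains it.
			http.Error(w, reason, http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte("ok"))
	})
}
```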
### Prometheus
Scrape each ws-node independently; Grafana sums across instances.
```yaml
scrape_configs:
  - job_name: btclock-ws-node
    scrape_interval: 15s
    static_configs:
      - targets:
          - 10.0.0.11:9090
          - 10.0.0.12:9090
          - 10.0.0.13:9090
```
Key metrics for alerting:
- `btclock_ws_backpressure_drops_total` — non-zero = slow clients; tune `outboundBuf` or scale out.
- `btclock_ws_client_send_queue_depth{proto}` — peak 5s-sampled depth; climbing toward 256 indicates queueing before backpressure triggers.
- `btclock_upstream_source_active{driver}` — Grafana stat panel for the current driver per node.
- `btclock_broker_seq_gaps_total` — a steady non-zero rate means Valkey `MAXLEN ~ 10000` is too tight for the aggregator pause budget.
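
For reference, counters like these are typically declared with client_golang as below (help strings are guesses, not the repo's actual text):

```go
import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	backpressureDrops = promauto.NewCounter(prometheus.CounterOpts{
		Name: "btclock_ws_backpressure_drops_total",
		Help: "Connections closed because a client's outbound queue overflowed.",
	})
	sendQueueDepth = promauto.NewGaugeVec(prometheus.GaugeOpts{
		Name: "btclock_ws_client_send_queue_depth",
		Help: "Sampled per-client outbound queue depth.",
	}, []string{"proto"})
)
```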
### pprof
Opt-in via `ENABLE_PPROF=true`. Exposes `/debug/pprof/*` on the metrics
sidecar only. Sensitive (leaks stack traces, goroutine dumps, heap
snapshots) — keep it off the public internet; run
`curl localhost:9090/debug/pprof/goroutine?debug=1` over an SSH tunnel
instead.
## Contract tests
`tests/ws{1,2}_test.go` pin the v1 / v2 wire format byte-for-byte.
`tests/upstream_supervisor_test.go` covers driver failover.
`tests/bus_valkey_test.go` is an integration test (gated on `REDIS_URL`).
Run all:

```sh
go test ./...
```