Claude Code for Tinybird Analytics — Guide

Written by Michael Lip · Solo founder of Zovo · $400K+ on Upwork · 100% JSS · Join 50+ builders · More at zovo.one

The Setup

You are building real-time analytics APIs with Tinybird, a platform that ingests event data and exposes ClickHouse-powered SQL queries as REST API endpoints. Tinybird handles the infrastructure — you define data sources, write SQL transformations as “pipes,” and get instant API endpoints. Claude Code can build analytics backends, but left to its defaults it reaches for custom Express APIs with hand-rolled ClickHouse connections instead of Tinybird’s declarative approach.

What Claude Code Gets Wrong By Default

  1. Creates a ClickHouse cluster manually. Claude deploys ClickHouse with Docker and writes connection management code. Tinybird provides managed ClickHouse — you define data sources in .datasource files and Tinybird handles the infrastructure.

  2. Builds REST APIs for each query. Claude creates Express/FastAPI endpoints wrapping SQL queries. Tinybird pipes automatically become API endpoints — define SQL in a .pipe file and get an API with authentication, pagination, and caching.

  3. Writes ETL pipelines for data ingestion. Claude creates Python scripts to batch-load data. Tinybird has native streaming ingestion via Events API, Kafka connector, and S3 imports — data flows in real-time without custom ETL.

  4. Ignores materialized views. Claude runs complex aggregation queries on raw data every request. Tinybird supports materialized views that pre-compute aggregations — queries on materialized data are orders of magnitude faster.
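To make point 3 concrete: the Events API accepts newline-delimited JSON over a plain HTTPS POST, so streaming ingestion needs no custom ETL scripts. A minimal Python sketch, using only the standard library — the host and token values are placeholders you would supply from your workspace:

```python
import json
import urllib.request

TINYBIRD_HOST = "https://api.tinybird.co"  # region-specific host; placeholder
TOKEN = "YOUR_APPEND_TOKEN"                # a token scoped to append on this datasource

def build_ndjson(events):
    """Serialize a batch of events as newline-delimited JSON,
    the body format the Events API expects."""
    return "\n".join(json.dumps(e) for e in events)

def send_events(datasource, events):
    """POST a batch to /v0/events, which appends rows to `datasource`.
    Raises urllib.error.HTTPError on a non-2xx response."""
    body = build_ndjson(events).encode("utf-8")
    req = urllib.request.Request(
        f"{TINYBIRD_HOST}/v0/events?name={datasource}",
        data=body,
        headers={"Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# Example batch matching the events schema used later in this guide
batch = [
    {"timestamp": "2024-01-01 12:00:00", "user_id": "u1",
     "event_name": "page_view", "properties": "{}"},
]
```

Rows land in the data source within seconds, and any pipe querying it reflects them on the next request — no batch loader in between.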

The CLAUDE.md Configuration


# Tinybird Analytics Project

## Platform
- Service: Tinybird (real-time analytics APIs)
- Engine: Managed ClickHouse
- Config: .datasource and .pipe files
- API: auto-generated from pipe definitions

## Tinybird Rules
- Data Sources: .datasource files define schema
- Pipes: .pipe files define SQL transformations
- API: pipes with endpoints become REST APIs
- Ingest: Events API (POST), Kafka, S3
- Params: {{Type(param, default)}} in SQL
- Materialized: TYPE materialized for pre-computation
- CLI: tb push to deploy, tb sql for queries

## Conventions
- datasources/ directory for .datasource files
- pipes/ directory for .pipe files
- Use parameters for dynamic API queries
- Materialized views for expensive aggregations
- Events API for real-time ingestion
- tb push --force for schema changes
- Token auth for API endpoint security

Workflow Example

You want to build a real-time product analytics dashboard API. Prompt Claude Code:

“Create Tinybird data sources and pipes for product analytics. Define a datasource for user events (timestamp, user_id, event_name, properties), create pipes for: events per hour, unique users per day, and top events by count. Each pipe should be an API endpoint with date range parameters.”

Claude Code should create datasources/events.datasource with the schema, and pipes/events_per_hour.pipe whose SQL uses {{DateTime(start_date)}} and {{DateTime(end_date)}} parameters. It should add similar pipes for daily unique users and top events, each ending in a node published with TYPE endpoint so it is exposed as an API.
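For orientation, the generated files might look roughly like this — the sorting key, parameter defaults, and node name are illustrative choices, not guaranteed output:

```
# datasources/events.datasource
SCHEMA >
    `timestamp` DateTime `json:$.timestamp`,
    `user_id` String `json:$.user_id`,
    `event_name` String `json:$.event_name`,
    `properties` String `json:$.properties`

ENGINE "MergeTree"
ENGINE_SORTING_KEY "timestamp"
```

```
# pipes/events_per_hour.pipe
NODE events_per_hour
SQL >
    %
    SELECT toStartOfHour(timestamp) AS hour, count() AS events
    FROM events
    WHERE timestamp BETWEEN {{DateTime(start_date, '2024-01-01 00:00:00')}}
                        AND {{DateTime(end_date, '2024-12-31 23:59:59')}}
    GROUP BY hour
    ORDER BY hour

TYPE endpoint
```

Note the leading % in the SQL block — Tinybird requires it whenever the query uses template parameters. After tb push, the pipe is callable at /v0/pipes/events_per_hour.json with start_date and end_date as query-string parameters.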

Common Pitfalls

  1. Schema changes breaking ingestion. Claude modifies .datasource schema without considering existing data. Tinybird requires explicit schema evolution — some changes need tb push --force which recreates the datasource, losing existing data. Plan schema carefully.

  2. Not using materialized views for heavy queries. Claude runs aggregation queries on raw data for every API call. For high-traffic dashboards, create materialized views that pre-compute aggregations — queries become simple lookups instead of full scans.

  3. Missing API token scoping. Claude uses the admin token for all API calls. Tinybird supports scoped tokens that limit access to specific pipes — create read-only tokens for frontend API calls instead of sharing admin access.
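For pitfall 2, the fix is a pipe whose node materializes into a pre-aggregated data source instead of scanning raw events on every call. A hedged sketch — the target data source name and grouping are illustrative:

```
# pipes/events_hourly_mv.pipe
NODE hourly_rollup
SQL >
    SELECT
        toStartOfHour(timestamp) AS hour,
        event_name,
        count() AS events
    FROM events
    GROUP BY hour, event_name

TYPE materialized
DATASOURCE events_hourly
```

Endpooint pipes then query events_hourly, turning a full scan into a small lookup. For pitfall 3, the dashboard frontend should call those endpoints with a read-only token limited to the specific pipes it needs (created in the Tinybird UI or CLI), never the workspace admin token.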