How to Integrate Claude Skills with the Notion API (2026)

Notion serves as a knowledge base, project tracker, and documentation hub for many developer teams. Connecting Claude skills to the Notion API lets you automate document creation, populate databases from AI analysis, and build intelligent knowledge workflows. This guide covers how to integrate Claude skills with the Notion API, from authentication setup to practical patterns using pdf, supermemory, and tdd skills.

Why This Integration Matters

The combination solves real friction points:

  • Meeting notes captured as PDFs → pdf skill extracts action items → Notion database entries created automatically
  • Code reviews generated by tdd skill → stored in Notion as searchable documentation
  • supermemory skill can read from and write to Notion pages to maintain persistent project context
  • frontend-design skill feedback → organized in a Notion design review database

Prerequisites

  • A Notion workspace with API access enabled
  • Notion Internal Integration Token (from notion.so/my-integrations)
  • Node.js 18+ with the @notionhq/client package
  • Claude Code installed locally (skills run inside Claude Code, not via the Anthropic SDK)

Step 1: Create a Notion Integration

  1. Go to notion.so/my-integrations
  2. Click New integration
  3. Name it “Claude Skills Bot”, select your workspace
  4. Under Capabilities, enable:
    • Read content
    • Update content
    • Insert content
  5. Copy the Internal Integration Token; this is your NOTION_TOKEN
  6. Share your target Notion pages/databases with the integration by clicking the … menu on the page and choosing Add connections

Step 2: Install Dependencies

npm install @notionhq/client dotenv

Create .env:

NOTION_TOKEN=secret_your_token_here
NOTION_DATABASE_ID=your_database_id_here

Find your database ID in the Notion URL: notion.so/workspace/{database_id}?v=...
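Copying the ID out of the URL by hand is error-prone. A small helper can extract it from a pasted URL; `extractDatabaseId` below is a hypothetical name, and the regex assumes the usual 32-character hex ID Notion embeds in database URLs:

```javascript
// Hypothetical helper: pull the 32-character hex database ID out of a
// Notion URL. Strips the query string and any dashes first, then looks
// for a 32-character hexadecimal run. Returns null if none is found.
function extractDatabaseId(url) {
  const cleaned = url.split('?')[0].replace(/-/g, '');
  const match = cleaned.match(/[0-9a-f]{32}/i);
  return match ? match[0] : null;
}
```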

Step 3: Initialize Notion Client

require('dotenv').config();
const { Client } = require('@notionhq/client');
const notion = new Client({ auth: process.env.NOTION_TOKEN });

Claude skills (/pdf, /tdd, /supermemory) run inside your Claude Code terminal session; they are not called via the Anthropic SDK from external scripts. To use skill output in this pipeline, run Claude Code in print mode, capture stdout, and pass the result to Notion:

# Run a skill in print mode and capture its output
OUTPUT=$(claude --print "/pdf
Extract action items from /tmp/meeting-notes.pdf" 2>/dev/null)

Your Node.js script then reads the result from the file (or stdin) that the shell step wrote.

Step 4: Run a Skill and Capture Output

Shell script that calls a Claude skill and writes output to a JSON file for the Node.js pipeline:

#!/bin/bash
# run-skill.sh: invoke a Claude skill and save its output
SKILL="$1"        # e.g. "pdf" or "tdd"
INPUT="$2"        # path or description
OUTPUT_FILE="$3"  # where to write the result

RESULT=$(claude --print "/$SKILL $INPUT" 2>/dev/null)
echo "$RESULT" > "$OUTPUT_FILE"
echo "Skill output saved to $OUTPUT_FILE"

Then your Node.js script reads the output:

const fs = require('fs');

function loadSkillOutput(filePath) {
  const raw = fs.readFileSync(filePath, 'utf8');
  try {
    return JSON.parse(raw);
  } catch {
    return { summary: raw, action_items: [], key_points: [], tags: [] };
  }
}
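One caveat: when asked for JSON, Claude's printed output is sometimes wrapped in a markdown code fence, which makes `JSON.parse` fail and drops you into the fallback branch. A hedged pre-processing step (the helper name `stripCodeFence` is our own) can recover those cases:

```javascript
// Hypothetical helper: remove a surrounding ```json ... ``` fence, if
// present, so the payload can be handed to JSON.parse unchanged.
function stripCodeFence(text) {
  const match = text.trim().match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  return match ? match[1] : text.trim();
}
```

You could call it on `raw` before the `JSON.parse` attempt in `loadSkillOutput`.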

Step 5: Create a Notion Page from Claude Output

async function createNotionPage(databaseId, title, content, tags = []) {
  const blocks = contentToNotionBlocks(content);

  // Assumes the target database has Name (title), Tags (multi-select),
  // Status (select), and Date (date) properties
  await notion.pages.create({
    parent: { database_id: databaseId },
    properties: {
      Name: {
        title: [{ text: { content: title } }],
      },
      Tags: {
        multi_select: tags.map(tag => ({ name: tag })),
      },
      Status: {
        select: { name: 'AI Generated' },
      },
      Date: {
        date: { start: new Date().toISOString().split('T')[0] },
      },
    },
    children: blocks,
  });
}
function contentToNotionBlocks(content) {
  const blocks = [];

  if (content.summary) {
    blocks.push({
      object: 'block',
      type: 'paragraph',
      paragraph: {
        rich_text: [{ type: 'text', text: { content: content.summary } }],
      },
    });
  }

  if (content.action_items && content.action_items.length > 0) {
    blocks.push({
      object: 'block',
      type: 'heading_2',
      heading_2: {
        rich_text: [{ type: 'text', text: { content: 'Action Items' } }],
      },
    });
    content.action_items.forEach(item => {
      blocks.push({
        object: 'block',
        type: 'to_do',
        to_do: {
          rich_text: [{ type: 'text', text: { content: item } }],
          checked: false,
        },
      });
    });
  }

  if (content.key_points && content.key_points.length > 0) {
    blocks.push({
      object: 'block',
      type: 'heading_2',
      heading_2: {
        rich_text: [{ type: 'text', text: { content: 'Key Points' } }],
      },
    });
    content.key_points.forEach(point => {
      blocks.push({
        object: 'block',
        type: 'bulleted_list_item',
        bulleted_list_item: {
          rich_text: [{ type: 'text', text: { content: point } }],
        },
      });
    });
  }

  return blocks;
}
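Note that Notion rejects rich_text items whose `text.content` exceeds 2,000 characters, so a long summary has to be split across several items. A minimal sketch (the helper name is our own):

```javascript
// Split long text into rich_text items that respect Notion's
// 2,000-character-per-item limit on text.content.
function toRichTextChunks(text, chunkSize = 2000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push({ type: 'text', text: { content: text.slice(i, i + chunkSize) } });
  }
  return chunks;
}
```

You could then build the summary paragraph with `rich_text: toRichTextChunks(content.summary)` instead of a single item.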

Step 6: Read Pages for Supermemory Context

The supermemory skill benefits from reading existing Notion content to build context:

async function readNotionPageContent(pageId) {
  const blocks = await notion.blocks.children.list({ block_id: pageId });

  return blocks.results
    .filter(b => b.type === 'paragraph' || b.type === 'bulleted_list_item')
    .map(b => {
      const richText = b[b.type]?.rich_text || [];
      return richText.map(rt => rt.plain_text).join('');
    })
    .filter(t => t.trim())
    .join('\n');
}

async function buildProjectContext(pageIds) {
  const contents = await Promise.all(pageIds.map(readNotionPageContent));
  const combined = contents.join('\n\n---\n\n');

  // Feed to the /supermemory skill via the Claude Code CLI. execFileSync
  // passes the prompt as an argument without invoking a shell, so quotes,
  // backticks, and $ in page content cannot break the command.
  const { execFileSync } = require('child_process');
  const prompt = `/supermemory Store and summarize this project context:\n\n${combined.substring(0, 2000)}`;
  const context = execFileSync('claude', ['--print', prompt], { encoding: 'utf8' });

  return { summary: context.trim() };
}
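`blocks.children.list` returns at most 100 blocks per call, so `readNotionPageContent` as written silently truncates long pages. A generic cursor-pagination sketch (names are our own; `listFn` stands for any function that accepts `{ start_cursor }` and resolves to `{ results, has_more, next_cursor }`, which is the shape the Notion list endpoints return):

```javascript
// Collect every result from a cursor-paginated list endpoint.
async function collectAllPages(listFn) {
  const all = [];
  let cursor;
  do {
    const page = await listFn({ start_cursor: cursor });
    all.push(...page.results);
    cursor = page.has_more ? page.next_cursor : undefined;
  } while (cursor);
  return all;
}

// Against the real API this might look like:
// const blocks = await collectAllPages(args =>
//   notion.blocks.children.list({ block_id: pageId, ...args }));
```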

Step 7: Full Document-to-Notion Pipeline

const fs = require('fs');
const { execFileSync } = require('child_process');

async function processDocumentToNotion(documentText, databaseId) {
  // Write document text to a temp file the skill can read
  fs.writeFileSync('/tmp/doc-input.txt', documentText);

  console.log('Running /pdf skill via Claude Code...');
  const raw = execFileSync('claude', [
    '--print',
    '/pdf\nExtract title, summary, action items, key points, and tags from /tmp/doc-input.txt. Return as JSON.',
  ], { encoding: 'utf8' });

  let extracted;
  try {
    extracted = JSON.parse(raw);
  } catch {
    extracted = { title: '', summary: raw, action_items: [], key_points: [], tags: [] };
  }

  const title = extracted.title || `AI Summary: ${new Date().toLocaleDateString()}`;
  const tags = extracted.tags || ['ai-generated'];

  console.log('Creating Notion page...');
  await createNotionPage(databaseId, title, extracted, tags);

  console.log(`Created: "${title}" with ${extracted.action_items?.length || 0} action items`);
  return extracted;
}

// Example usage
processDocumentToNotion(
  fs.readFileSync('./meeting-notes.txt', 'utf8'),
  process.env.NOTION_DATABASE_ID
).catch(console.error);

Step 8: Query Notion Database for Context

Before sending content to Claude, retrieve related Notion entries to improve response quality:

async function getRelatedContext(databaseId, searchText) {
  const results = await notion.databases.query({
    database_id: databaseId,
    filter: {
      and: [
        { property: 'Tags', multi_select: { contains: 'ai-generated' } },
        { property: 'Name', title: { contains: searchText } },
      ],
    },
    sorts: [{ property: 'Date', direction: 'descending' }],
    page_size: 5,
  });

  return results.results
    .map(page => page.properties.Name?.title?.[0]?.plain_text || '')
    .filter(Boolean)
    .join(', ');
}

Handling Rate Limits

The Notion API enforces rate limits. typically 3 requests per second on average. Implement exponential backoff for retry logic in your pipeline:

async function makeRequestWithRetry(fn, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status === 429) {
        const waitTime = Math.pow(2, attempt) * 1000;
        console.log(`Rate limited. Waiting ${waitTime}ms`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
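Notion's 429 responses also carry a `Retry-After` header stating how many seconds to wait, and honoring it when available beats guessing. A sketch of the delay calculation (the helper name is our own; whether your client version exposes response headers on the error object is an assumption to verify):

```javascript
// Prefer the server-supplied Retry-After value (in seconds) when present;
// otherwise fall back to exponential backoff.
function backoffDelayMs(attempt, retryAfterSeconds) {
  if (retryAfterSeconds != null) return retryAfterSeconds * 1000;
  return Math.pow(2, attempt) * 1000;
}

// Inside the catch block you might wire it up as (assumption -- check how
// your client surfaces headers):
// const waitTime = backoffDelayMs(attempt, error.headers?.['retry-after']);
```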

For read-heavy workflows, cache query results to reduce API calls and improve response times. Pin your integration to a specific Notion API version and upgrade deliberately to avoid breaking changes.
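A minimal in-memory TTL cache illustrates the caching idea (a sketch; a multi-process deployment would want Redis or similar). For version pinning, the official client accepts a `notionVersion` option at construction time, e.g. `new Client({ auth: token, notionVersion: '2022-06-28' })`.

```javascript
// Minimal in-memory cache with per-entry expiry, keyed by any string
// (e.g. a database ID). Entries past their TTL read as misses.
class TtlCache {
  constructor(ttlMs = 60_000) {
    this.ttlMs = ttlMs;
    this.store = new Map();
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
  set(key, value) {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}

// Usage: consult the cache before querying Notion.
// const cache = new TtlCache(30_000);
// let ctx = cache.get(databaseId);
// if (!ctx) {
//   ctx = await getRelatedContext(databaseId, searchText);
//   cache.set(databaseId, ctx);
// }
```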

Conclusion

Integrating Claude skills with the Notion API creates a knowledge management pipeline where AI analysis flows directly into your team’s documentation. The pdf skill populates databases with structured extracts, tdd generates code review docs, and supermemory reads existing pages to maintain project context. Build the pipeline incrementally: start with document-to-Notion, then add the two-way reading pattern.

