How to Integrate Claude Skills with the Notion API
Notion serves as a knowledge base, project tracker, and documentation hub for many developer teams. Connecting Claude skills to the Notion API lets you automate document creation, populate databases from AI analysis, and build intelligent knowledge workflows. This guide covers how to integrate Claude skills with the Notion API, from authentication setup to practical patterns using the pdf, supermemory, and tdd skills.
Why This Integration Matters
The combination solves real friction points:
- Meeting notes captured as PDFs → the pdf skill extracts action items → Notion database entries created automatically
- Code reviews generated by the tdd skill → stored in Notion as searchable documentation
- The supermemory skill can read from and write to Notion pages to maintain persistent project context
- frontend-design skill feedback → organized in a Notion design review database
Prerequisites
- A Notion workspace with API access enabled
- Notion Internal Integration Token (from notion.so/my-integrations)
- Node.js 18+ with the @notionhq/client package
- Claude Code installed locally — skills run inside Claude Code, not via the Anthropic SDK
Step 1: Create a Notion Integration
- Go to notion.so/my-integrations
- Click New integration
- Name it “Claude Skills Bot”, select your workspace
- Under Capabilities, enable:
- Read content
- Update content
- Insert content
- Copy the Internal Integration Token — this is your NOTION_TOKEN
- Share your target Notion pages/databases with the integration by clicking the … menu on the page and choosing Add connections
Step 2: Install Dependencies
npm install @notionhq/client dotenv
Create .env:
NOTION_TOKEN=secret_your_token_here
NOTION_DATABASE_ID=your_database_id_here
Find your database ID in the Notion URL: notion.so/workspace/{database_id}?v=...
Step 3: Initialize Notion Client
require('dotenv').config();
const { Client } = require('@notionhq/client');
const notion = new Client({ auth: process.env.NOTION_TOKEN });
Important: Claude skills (/pdf, /tdd, /supermemory) run inside your Claude Code terminal session. They are not called via the Anthropic SDK in external scripts. To use skill output in this pipeline, run Claude Code in print mode and capture stdout, then pass the result to Notion:
# Run a skill in print mode and capture output
OUTPUT=$(claude --print "/pdf
Extract action items from /tmp/meeting-notes.pdf" 2>/dev/null)
Your Node.js script then reads the captured result from a file that the shell step wrote, and passes it to Notion.
Step 4: Run a Skill and Capture Output
Shell script that calls a Claude skill and writes output to a JSON file for the Node.js pipeline:
#!/bin/bash
# run-skill.sh — invoke a Claude skill and save output
SKILL="$1" # e.g. "pdf" or "tdd"
INPUT="$2" # path or description
OUTPUT_FILE="$3" # where to write the result
RESULT=$(claude --print "/$SKILL $INPUT" 2>/dev/null)
echo "$RESULT" > "$OUTPUT_FILE"
echo "Skill output saved to $OUTPUT_FILE"
Then your Node.js script reads the output:
const fs = require('fs');

function loadSkillOutput(filePath) {
  const raw = fs.readFileSync(filePath, 'utf8');
  try {
    return JSON.parse(raw);
  } catch {
    return { summary: raw, action_items: [], key_points: [], tags: [] };
  }
}
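One practical wrinkle: skill output may come back wrapped in markdown code fences, which defeats a bare JSON.parse. A small sketch that strips fences before parsing (the extractJson name is ours, not part of any skill):

```javascript
// Hypothetical helper: strip a leading ```json (or ```) fence and a
// trailing ``` fence before attempting JSON.parse.
function extractJson(raw) {
  const stripped = raw
    .replace(/^\s*```(?:json)?\s*/i, '')
    .replace(/\s*```\s*$/, '');
  try {
    return JSON.parse(stripped);
  } catch {
    return null; // caller falls back to the raw-text shape
  }
}
```

You could call extractJson inside loadSkillOutput before falling back to the unstructured `{ summary: raw, ... }` shape.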
Step 5: Create a Notion Page from Claude Output
async function createNotionPage(databaseId, title, content, tags = []) {
  const blocks = contentToNotionBlocks(content);
  await notion.pages.create({
    parent: { database_id: databaseId },
    properties: {
      Name: {
        title: [{ text: { content: title } }],
      },
      Tags: {
        multi_select: tags.map(tag => ({ name: tag })),
      },
      Status: {
        select: { name: 'AI Generated' },
      },
      Date: {
        date: { start: new Date().toISOString().split('T')[0] },
      },
    },
    children: blocks,
  });
}
function contentToNotionBlocks(content) {
  const blocks = [];
  if (content.summary) {
    blocks.push({
      object: 'block',
      type: 'paragraph',
      paragraph: {
        rich_text: [{ type: 'text', text: { content: content.summary } }],
      },
    });
  }
  if (content.action_items && content.action_items.length > 0) {
    blocks.push({
      object: 'block',
      type: 'heading_2',
      heading_2: {
        rich_text: [{ type: 'text', text: { content: 'Action Items' } }],
      },
    });
    content.action_items.forEach(item => {
      blocks.push({
        object: 'block',
        type: 'to_do',
        to_do: {
          rich_text: [{ type: 'text', text: { content: item } }],
          checked: false,
        },
      });
    });
  }
  if (content.key_points && content.key_points.length > 0) {
    blocks.push({
      object: 'block',
      type: 'heading_2',
      heading_2: {
        rich_text: [{ type: 'text', text: { content: 'Key Points' } }],
      },
    });
    content.key_points.forEach(point => {
      blocks.push({
        object: 'block',
        type: 'bulleted_list_item',
        bulleted_list_item: {
          rich_text: [{ type: 'text', text: { content: point } }],
        },
      });
    });
  }
  return blocks;
}
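One limit worth guarding against: the Notion API caps each rich_text text.content at 2,000 characters, so a long skill-generated summary will be rejected. A sketch of a chunking helper (the toRichText name is ours):

```javascript
// Split long text into multiple rich_text objects so no single
// text.content exceeds Notion's 2,000-character cap.
function toRichText(text, chunkSize = 2000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push({ type: 'text', text: { content: text.slice(i, i + chunkSize) } });
  }
  // Notion accepts an empty string but not an empty rich_text array here
  return chunks.length ? chunks : [{ type: 'text', text: { content: '' } }];
}
```

If summaries may run long, swap the single-element rich_text arrays in contentToNotionBlocks for toRichText(content.summary).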
Step 6: Read Pages for Supermemory Context
The supermemory skill benefits from reading existing Notion content to build context:
async function readNotionPageContent(pageId) {
  const blocks = await notion.blocks.children.list({ block_id: pageId });
  return blocks.results
    .filter(b => b.type === 'paragraph' || b.type === 'bulleted_list_item')
    .map(b => {
      const richText = b[b.type]?.rich_text || [];
      return richText.map(rt => rt.plain_text).join('');
    })
    .filter(t => t.trim())
    .join('\n');
}
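Note that blocks.children.list returns at most 100 blocks per call, so readNotionPageContent as written only sees the first page of a long document. A paginated sketch; the helper takes the list function as a parameter (our choice, for testability), so you would pass in notion.blocks.children.list bound to the client:

```javascript
// Follow next_cursor until the API reports no more pages.
// listFn mirrors notion.blocks.children.list's request/response shape.
async function fetchAllBlocks(listFn, blockId) {
  const all = [];
  let cursor = undefined;
  do {
    const page = await listFn({ block_id: blockId, start_cursor: cursor });
    all.push(...page.results);
    cursor = page.next_cursor; // null when has_more is false
  } while (cursor);
  return all;
}
```

Usage would look like `fetchAllBlocks(params => notion.blocks.children.list(params), pageId)`.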
async function buildProjectContext(pageIds) {
  const contents = await Promise.all(pageIds.map(readNotionPageContent));
  const combined = contents.join('\n\n---\n\n');
  // Feed to the /supermemory skill via the Claude Code CLI.
  // execFileSync passes the prompt as a single argument, so no shell
  // escaping is needed (quotes, backticks, and $ in page content are safe).
  const { execFileSync } = require('child_process');
  const prompt = `/supermemory Store and summarize this project context:\n\n${combined.substring(0, 2000)}`;
  const context = execFileSync('claude', ['--print', prompt], { encoding: 'utf8' });
  return { summary: context.trim() };
}
Step 7: Full Pipeline — Document to Notion
const fs = require('fs');
const { execFileSync } = require('child_process');

async function processDocumentToNotion(documentText, databaseId) {
  // Write document text to a temp file the skill can read
  fs.writeFileSync('/tmp/doc-input.txt', documentText);
  console.log('Running /pdf skill via Claude Code...');
  const prompt = '/pdf\nExtract title, summary, action items, key points, and tags from /tmp/doc-input.txt. Return as JSON.';
  const raw = execFileSync('claude', ['--print', prompt], { encoding: 'utf8' });
  let extracted;
  try {
    extracted = JSON.parse(raw);
  } catch {
    extracted = { title: '', summary: raw, action_items: [], key_points: [], tags: [] };
  }
  const title = extracted.title || `AI Summary — ${new Date().toLocaleDateString()}`;
  const tags = extracted.tags || ['ai-generated'];
  console.log('Creating Notion page...');
  await createNotionPage(databaseId, title, extracted, tags);
  console.log(`Created: "${title}" with ${extracted.action_items?.length || 0} action items`);
  return extracted;
}

// Example usage
processDocumentToNotion(
  fs.readFileSync('./meeting-notes.txt', 'utf8'),
  process.env.NOTION_DATABASE_ID
);
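The Notion API rate-limits integrations to an average of roughly three requests per second and returns HTTP 429 when you exceed it, so a batch pipeline benefits from a retry wrapper. A minimal sketch, assuming the thrown error carries a numeric status property as @notionhq/client errors do:

```javascript
// Retry a request on HTTP 429 with exponential backoff; rethrow
// everything else, or the 429 itself once retries are exhausted.
async function withRetry(fn, retries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (err.status !== 429 || attempt >= retries) throw err;
      // Backoff: 1s, 2s, 4s, ...
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

In the pipeline above, you would wrap the write: `await withRetry(() => createNotionPage(databaseId, title, extracted, tags));`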
Step 8: Query Notion Database for Context
Before sending content to Claude, retrieve related Notion entries to improve response quality:
async function getRelatedContext(databaseId, searchText) {
  const results = await notion.databases.query({
    database_id: databaseId,
    filter: {
      and: [
        { property: 'Tags', multi_select: { contains: 'ai-generated' } },
        { property: 'Name', title: { contains: searchText } },
      ],
    },
    sorts: [{ property: 'Date', direction: 'descending' }],
    page_size: 5,
  });
  return results.results
    .map(page => page.properties.Name?.title?.[0]?.plain_text || '')
    .filter(Boolean)
    .join(', ');
}
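The retrieved titles can then be folded into the skill prompt before invoking Claude Code. A sketch of that glue (the buildContextualPrompt name and format are ours, not a skill convention):

```javascript
// Prepend related Notion entries to a skill prompt so Claude Code
// sees prior work before processing the new task.
function buildContextualPrompt(skill, task, relatedContext) {
  const contextLine = relatedContext
    ? `Related existing entries: ${relatedContext}\n\n`
    : '';
  return `/${skill} ${contextLine}${task}`;
}
```

Pass the result to `claude --print` exactly as in the earlier steps.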
Conclusion
Integrating Claude skills with the Notion API creates a knowledge management pipeline where AI analysis flows directly into your team’s documentation. The pdf skill populates databases with structured extracts, tdd generates code review docs, and supermemory reads existing pages to maintain project context. Build the pipeline incrementally — start with document-to-Notion, then add the two-way reading pattern.
Related Reading
- Best Claude Skills for Data Analysis — Covers the pdf and xlsx skills in depth, both of which feed directly into Notion database pipelines like the one described here
- How to Share Claude Skills With Your Team — If your Notion integration serves a team, this guide covers distributing and standardizing the skills that power it
- Claude Skills Token Optimization: Reduce API Costs — Tips for batching and structuring document processing calls to keep API costs manageable at scale
Built by theluckystrike — More at zovo.one