Build an AI Content Pipeline with N8N and Ollama (Step-by-Step)

How to wire up N8N workflows and a local Ollama model to research, write, and publish blog articles automatically — fully self-hosted, zero API costs.

The goal: a workflow that picks a topic, writes a full article, and opens a pull request in your blog repo — all without touching a keyboard. This guide walks through every N8N node, the exact prompts I use, and the mistakes I made before landing on something that works reliably.

What you’ll build

A five-stage N8N pipeline:

  1. Schedule — triggers on a cron schedule
  2. Research — generates article topic, title, and keyword list
  3. Write — produces a full Markdown article with frontmatter
  4. Validate — checks the output for quality issues
  5. Publish — creates a GitHub pull request with the article file

Everything runs on your own machine. No OpenAI key, no cloud storage.

Prerequisites

Before you start, you need three things:

  • N8N running locally (Docker: docker run -p 5678:5678 -v ~/.n8n:/home/node/.n8n n8nio/n8n)
  • Ollama with qwen3:7b and phi4-mini pulled
  • A GitHub personal access token with repo scope
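
A quick setup sketch for the second prerequisite, assuming Ollama is already installed and serving on its default port (11434):

```shell
# Pull both models used in this pipeline (several GB each on first download)
ollama pull qwen3:7b
ollama pull phi4-mini

# Confirm the API is reachable and both models appear in the list
curl -s http://localhost:11434/api/tags
```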

Node 1: Schedule Trigger

Add a Schedule Trigger node. Set it to run daily at whatever time makes sense — I use 6 AM so articles are ready when I wake up.

For testing, add a manual trigger alongside it. You’ll use that constantly during setup.
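
If you prefer the trigger's custom (cron) mode over the interval dropdown, daily at 6 AM is the standard five-field expression:

```
0 6 * * *
```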

Node 2: Research node (Phi-4 Mini)

Add an HTTP Request node. This calls your local Ollama API to research a topic.

Method: POST
URL: http://localhost:11434/api/generate
Body (JSON):

{
  "model": "phi4-mini",
  "prompt": "Generate a blog article idea for an AI automation blog. Return JSON only with these fields: title (compelling SEO title), slug (URL-friendly), category (one of: local-llms, n8n-workflows, ai-agents, automation, tutorials, tools-reviews), description (meta description under 160 chars), keywords (array of 4-6 tags).",
  "stream": false,
  "format": "json"
}

Set the Response Format to JSON. The format: json field in the Ollama request forces the model to output valid JSON — critical for automation.
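
One detail that trips people up: even with format: json, Ollama's reply is an envelope object, and the model's text sits in its response field as a string. A minimal sketch with a made-up payload:

```javascript
// Shape of an /api/generate reply (hypothetical, truncated payload)
const reply = {
  model: 'phi4-mini',
  response: '{"title":"Example Post","slug":"example-post","keywords":["n8n"]}',
  done: true,
};

// The inner JSON arrives as a string — it still needs its own parse step
const article = JSON.parse(reply.response);
console.log(article.slug); // "example-post"
```

That double layer is why the Parse node in the next step calls JSON.parse on the response field rather than using the body directly.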

Node 3: Parse research output

Add a Code node to extract the article data from the Ollama response:

const raw = $input.first().json.response;
const article = JSON.parse(raw);

return [{
  json: {
    title: article.title,
    slug: article.slug,
    category: article.category,
    description: article.description,
    keywords: article.keywords,
    date: new Date().toISOString().split('T')[0],
  }
}];

This node is where most failures happen. Add error handling: if JSON.parse throws, log the raw response and stop the workflow rather than passing garbage downstream.
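
A sketch of that guard, which also strips the stray markdown fences covered under failure modes below (the function name is illustrative, not an N8N API):

```javascript
// Matches the three-backtick fences (with optional json tag) the model
// sometimes wraps its output in; built via RegExp to keep this snippet fence-safe
const FENCE_RE = new RegExp('`{3}json\\n?|`{3}\\n?', 'g');

// Defensive replacement for the bare JSON.parse in the Parse node
function parseResearch(raw) {
  const cleaned = raw.replace(FENCE_RE, '').trim();
  try {
    return JSON.parse(cleaned);
  } catch (err) {
    // Include a snippet of the raw text so the Executions log shows what the model sent
    throw new Error(`Research output was not valid JSON: ${raw.slice(0, 200)}`);
  }
}
```

Throwing here stops the workflow at this node, which is exactly what you want: a loud failure beats garbage flowing into the writing stage.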

Node 4: Write the article (Qwen 3 7B)

Another HTTP Request node, this time calling the heavier model for actual content generation.

Body (JSON):

{
  "model": "qwen3:7b",
  "prompt": "Write a complete, high-quality blog article for Pipeline Monk.\n\nTitle: {{ $json.title }}\nCategory: {{ $json.category }}\nTarget keywords: {{ $json.keywords.join(', ') }}\n\nRequirements:\n- 800-1200 words\n- Use ## for H2 headings, ### for H3\n- Include at least one code block\n- Write in second person (\"you\")\n- Be direct and practical, no fluff\n- Return the article body only (no frontmatter)\n\nWrite the article now:",
  "stream": false
}

Note: Qwen 3 7B on CPU takes 3–8 minutes for a full article. Increase the N8N execution timeout in Settings → Execution if you hit timeouts.

Node 5: Assemble the Markdown file

Code node to combine the frontmatter and body:

const meta  = $('Parse research output').first().json;
const body  = $input.first().json.response.trim();

const frontmatter = `---
title: "${meta.title}"
description: "${meta.description}"
date: ${meta.date}
category: "${meta.category}"
tags: [${meta.keywords.map(k => `"${k}"`).join(', ')}]
author: "ai-pipeline"
draft: false
---

`;

return [{
  json: {
    ...meta,
    content: frontmatter + body,
    filename: `${meta.slug}.md`,
    filepath: `src/content/blog/${meta.slug}.md`,
  }
}];

Node 6: Validate output

A Code node that catches common quality issues before publishing:

const { content, title } = $input.first().json;

const checks = {
  hasContent: content.length > 500,
  hasHeadings: content.includes('## '),
  noPlaceholders: !content.includes('[INSERT'),
  validFrontmatter: content.startsWith('---'),
  notTooShort: content.split(' ').length > 300,
};

const passed = Object.values(checks).every(Boolean);

if (!passed) {
  const failed = Object.entries(checks)
    .filter(([, v]) => !v)
    .map(([k]) => k)
    .join(', ');
  throw new Error(`Quality check failed: ${failed}`);
}

return $input.all();

If validation fails, the workflow stops and N8N logs the error. No bad articles get published.

Node 7: Publish to GitHub

Use N8N’s built-in GitHub node (not an HTTP Request — the GitHub node handles base64 encoding and the API auth automatically).

Operation: Create File
Repository: your-username/your-blog-repo
File Path: {{ $json.filepath }}
File Content: {{ $json.content }}
Commit Message: feat: add article - {{ $json.title }}
Branch: ai-draft-{{ $json.slug }}

This creates a new branch with the article file. Then add a second GitHub node to create a pull request from that branch to main.
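
If your N8N version's GitHub node doesn't expose a pull-request operation, the same step works as an HTTP Request node against GitHub's REST API (the repo, branch, and title below are placeholders):

```shell
curl -s -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/your-username/your-blog-repo/pulls \
  -d '{"title": "AI draft: article title", "head": "ai-draft-your-slug", "base": "main"}'
```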

Running and monitoring

Test each node individually before connecting them. Click the node and run it with mock data to verify the output shape before wiring up the next stage.

Once the full workflow runs end-to-end, enable the Schedule Trigger. Check the Executions tab daily — N8N shows you which workflows ran, how long they took, and where any errors occurred.

The typical flow: wake up, find a pull request in your repo, review the article, fix anything that needs fixing, and merge. The whole editorial process takes about 10 minutes per article.

What this pipeline can’t do

Local models don’t pull fresh information — they only know what was in their training data. The pipeline won’t write about last week’s product launch or a new model release.

For topicality, add a research step that fetches from an RSS feed or a search API and injects the current context into the writing prompt.
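
As a sketch of that injection step (the sample feed is made up; a real workflow would fetch the XML with N8N's RSS Read or HTTP Request node first), pulling recent headlines out of an RSS payload can be as simple as:

```javascript
// Naive, regex-based title extraction — good enough for injecting a few
// headlines into a prompt, not a substitute for a real XML parser
function extractTitles(xml, limit = 5) {
  const titles = [...xml.matchAll(/<title>([^<]+)<\/title>/g)].map((m) => m[1]);
  // The first <title> is usually the feed's own name, so skip it
  return titles.slice(1, limit + 1);
}

const sample = '<rss><channel><title>Feed</title>' +
  '<item><title>New model released</title></item>' +
  '<item><title>N8N 1.50 ships</title></item></channel></rss>';

console.log(extractTitles(sample));
// → [ 'New model released', 'N8N 1.50 ships' ]
```

Join the result into the writing prompt, e.g. "Recent headlines: " + extractTitles(xml).join('; '), and the model has current context to anchor the article.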

Common failure modes

Model returns non-JSON: Phi-4 Mini occasionally wraps JSON in markdown code blocks. Add a regex strip in the Parse node: raw.replace(/```json\n?|```\n?/g, '').trim().

Article too short: Qwen 3 7B sometimes writes 400 words when you asked for 1000. Rephrase the prompt to say “Write a MINIMUM of 800 words” and raise the notTooShort threshold in the validation node to match.

GitHub node fails on duplicate branch: If the same slug is generated twice, the branch already exists. Add a timestamp to the branch name: ai-draft-{{ $json.slug }}-{{ $now.toFormat('yyyyMMdd') }}.


Written by

Editor & Builder

Human editor behind Pipeline Monk. Building AI-powered workflows, reviewing pipeline output, and writing guides from hands-on experience.