
Technology · 14 min read

Automating Content Creation with n8n to Improve Organic SEO

[Cover image: Automating Content Creation with n8n to Improve Organic SEO]

The project's origin

When I launched VITRi (now Shourly), I faced a common dilemma in any digital platform: I needed content. A lot of content. Organic SEO is non-negotiable if you want your platform to grow, but creating quality articles consistently while developing the product is practically impossible.

The obvious option was to hire writers, but there was something that interested me more than simply solving the problem. I've never been one to blindly trust headlines or promises. I prefer to experiment myself to truly understand the limitations of technologies. And n8n was a tool I'd been wanting to explore in depth.

So I saw the perfect opportunity: solve my content need while learning n8n in the process. Two goals with one solution.

The discovery that changed everything

When I started understanding n8n, a world of possibilities opened up. The tool is incredibly powerful for automating complex workflows. But here's the reality: n8n is only as good as the functional platforms you can connect. Automation tools exist to automate things that are already done, not to create them from scratch.

With that in mind, I started thinking: why limit myself to just generating text? If I'm going to automate, let it be the complete content creation flow. From author selection to final article publication, including images, SEO, and everything that makes an article feel professional and unique.

The challenge of creating real presence in each article

This is where the project got interesting. I didn't want to simply generate generic articles. I wanted each one to have its own presence, its own voice. The only way to achieve that was by simulating how the real world of content creation works.

In real life, authors have knowledge in specific fields. Each has their own way of writing, their own style. So I decided to create a system of fictional authors, each with their own personality and areas of expertise.

I designed four different authors, each with their biography, writing tone, and fields of knowledge assigned through categories. Alex Rivera could write about e-commerce with a more technical tone, while Sofia Martinez would talk about digital entrepreneurship with a more accessible and practical approach. The idea was that depending on the article's topic, the system would choose the most appropriate author and GPT-4 would write following that specific personality.
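As a rough sketch of what that author system can look like in a Code node (the field names, bios, and selection logic here are illustrative, not the actual schema):

```javascript
// Hypothetical author profiles; fields and values are illustrative,
// not the real database schema.
const authors = [
  {
    name: "Alex Rivera",
    tone: "technical, data-driven, direct",
    categories: ["e-commerce", "analytics"],
    bio: "E-commerce consultant focused on conversion optimization.",
  },
  {
    name: "Sofia Martinez",
    tone: "accessible, practical, encouraging",
    categories: ["digital entrepreneurship", "marketing"],
    bio: "Founder and mentor for early-stage digital businesses.",
  },
];

// Pick the author whose expertise best matches the article's category,
// falling back to the first author if nothing matches.
function pickAuthor(category) {
  return authors.find((a) => a.categories.includes(category)) ?? authors[0];
}
```

In the real flow the profiles live in PostgreSQL and the model makes the final choice, but the shape of the data is the important part: tone and categories are what let GPT-4 stay in character.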

The repetition problem

One of the biggest technical obstacles was preventing articles from repeating. I didn't want two articles that basically talked about the same thing with different words. I needed a system that remembered what topics I'd already covered.

The solution was to create a small database in Notion where each article concept is saved with a well-compacted summary. Before generating a new article, the system reviews all previous concepts and ensures it doesn't create something too similar.

What's interesting here is that I organized these summaries by category. This avoids many potential bottlenecks, because the system only compares against articles in the same category, not against the entire database. More efficient and more relevant.
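The actual similarity judgment is delegated to the model, but the category-scoped idea can be sketched with a simple word-overlap check (the field names and the 0.6 threshold are my assumptions, not the real values):

```javascript
// Illustrative duplicate check: compare a new concept summary against
// previous summaries in the same category using Jaccard word overlap.
function tokenize(text) {
  return new Set(text.toLowerCase().match(/[a-záéíóúñ]+/g) ?? []);
}

function jaccard(a, b) {
  const setA = tokenize(a);
  const setB = tokenize(b);
  const inter = [...setA].filter((w) => setB.has(w)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : inter / union;
}

// Only compare within the same category, avoiding a full-database scan.
function isTooSimilar(newSummary, previous, category, threshold = 0.6) {
  return previous
    .filter((p) => p.category === category)
    .some((p) => jaccard(newSummary, p.summary) >= threshold);
}
```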

Optimizing costs and time

A critical aspect of the project was making it economically sustainable. I loaded my OpenAI account with $20 specifically for this automation, so I needed to be strategic with token usage.

After several experiments and optimizations, I managed to reduce the cost per article to approximately $0.07. Yes, seven cents per article. This means that with $20 I can generate nearly 300 complete articles (about 285 at that rate), each with images, optimized SEO, and fully polished content.

But what's most impressive isn't just the cost, but the speed. The entire process, from concept generation to article publication with images and metadata, takes less than a minute. One minute to create something that would manually take me hours.

The key to keeping costs so low was:

  • Being very specific with prompts to avoid unnecessary regenerations

  • Using GPT-4.1 mini where possible and GPT-4.1 only for tasks that truly require it

  • Structuring the flow so each API call has a clear purpose and doesn't waste tokens on unnecessary context

  • Caching responses when it makes sense, especially for author and category information that doesn't change frequently
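The caching idea can be sketched in a few lines (the names are illustrative; in n8n this kind of state would typically live in a Code node or workflow static data):

```javascript
// Minimal sketch of caching slow-changing data (authors, categories)
// with a time-to-live, so repeated runs skip redundant queries.
const cache = new Map();

function cached(key, ttlMs, fetchFn) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < ttlMs) return hit.value;
  const value = fetchFn();
  cache.set(key, { value, at: Date.now() });
  return value;
}
```

With a 24-hour TTL on author and category lookups, only the first run of the day pays for the query; every later run reads the cached copy.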

The complete automation flow

The main article creation flow has two different entry points, each with its specific purpose:

Manual entry through form

The first entry is a web form that allows me to create articles on demand. This is useful when I need content on a specific topic or when I identify a gap in existing content.

The form requests the environment (development or production), the author, the category, and a detailed description of the topic. This description is the context the agent needs to understand exactly what kind of article I want.

The advantage of this entry is control. I can decide exactly what gets written, when, and by whom. It's perfect for responding to trends, specific events, or immediate content needs.

Scheduled automatic entry

The second entry is completely automatic. A separate flow runs every three days and feeds the main flow with automatically generated concepts.

This automatic flow works as follows: every three days at 6am, the Schedule Trigger activates the process. The system retrieves all available authors from PostgreSQL and all existing categories. It also queries the VITRi concept from Notion to maintain coherence with the platform's narrative.

The smart part comes next. The flow reviews all previously created article concepts in Notion to avoid repetitions. With all this information (authors, categories, VITRi concept, and previous articles), GPT-4.1 mini generates a new article concept.

This concept includes the selection of the most appropriate author according to their expertise, the category that best fits the topic, and a detailed article brief. The brief explains the article's purpose, central focus, target audience, benefits the reader will obtain, and what practical elements it should contain.

Once generated and validated against existing concepts to ensure uniqueness, the new concept is saved in Notion and automatically executes the main article creation flow.

What's elegant about this design is the separation of responsibilities. The automatic flow focuses on generating unique and relevant ideas. The main flow specializes in converting those ideas into complete and professional articles. This entry is what makes continuous content possible without manual intervention, crucial for keeping organic SEO active.

The part where I get artistic with images

This is where the flow goes from interesting to truly impressive. Automatically generating text is one thing; plenty of people already do that. But generating text, automatically finding relevant images on Unsplash, placing them strategically, and making the whole thing look like a human curated it, that's another story.

Most automated articles are endless walls of text. Boring. I wanted each article to feel visual, dynamic, with images that truly complement what you're reading. Not generic office photos or people pointing at charts on whiteboards.

How I taught the system to think visually

The trick is that not all sections of an article need an image. If you're talking about "SEO best practices" you might not need a photo. But if you're explaining "how to configure your analytics dashboard" you definitely do. So I had to teach the system to distinguish.

I pass the complete article to GPT and ask which of its sections are visually representable. But not just that. I ask it to summarize each section in one sentence, generate a super-specific keyword to search on Unsplash, and provide descriptive alternative text for accessibility. It also gives each section a score from 0 to 100 for how well suited it is to having an image.

[SUGGESTED IMAGE: Output of the "Keywords for find images" node showing the structured JSON]

The prompt I use is very specific about what to consider visual. I tell it to mark images only for concrete things like products, people, interfaces, processes, real objects. Nothing about abstract concepts or administrative sections. Because nobody needs to see a stock photo of "trust" or "innovation".
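A plausible shape for that structured output, and the filter that keeps only high-scoring sections (the field names and the 60-point threshold are my assumptions, not the exact schema):

```javascript
// Illustrative structured output from the section-analysis step.
const sections = [
  {
    title: "Configure your analytics dashboard",
    keyword: "analytics dashboard screen",
    alt: "Analytics dashboard open on a laptop",
    score: 85,
  },
  {
    title: "Why trust matters",
    keyword: "trust",
    alt: "Abstract representation of trust",
    score: 20,
  },
];

// Keep only sections the model scored as genuinely visual.
function sectionsNeedingImages(items, minScore = 60) {
  return items.filter((s) => s.score >= minScore);
}
```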

The Unsplash search that doesn't fail

For each section the system decides deserves an image, the Unsplash search starts. And here's a trick I discovered: don't just use the first image Unsplash gives you. That's what everyone does and that's why all internet articles have the same 5 photos.

Instead, I request the top 24 images by relevance, filtered by horizontal orientation (because landscape looks better in articles). And from those 24, I randomly select one. The result is that every time the flow runs, even if it's the same topic, the images are different. Constant variety and freshness.

// The code that does the magic
function getRandomImageUrl(x) {
  const arr = x?.json?.results ?? [];
  if (!Array.isArray(arr) || arr.length === 0) return null;
  const idx = Math.floor(Math.random() * arr.length);
  return (
    arr[idx]?.urls?.regular ??
    arr[idx]?.urls?.full ??
    arr[idx]?.urls?.raw ??
    null
  );
}
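For context, the search that feeds getRandomImageUrl might look like this. In the real flow it's an n8n HTTP Request node rather than custom code, and the access-key handling here is illustrative:

```javascript
// Build the Unsplash search URL: top 24 results, landscape only.
function buildSearchUrl(keyword) {
  const url = new URL("https://api.unsplash.com/search/photos");
  url.searchParams.set("query", keyword);
  url.searchParams.set("per_page", "24");
  url.searchParams.set("orientation", "landscape");
  return url.toString();
}

// Requires an Unsplash access key (free tier is enough for this volume).
async function searchUnsplash(keyword, accessKey) {
  const res = await fetch(buildSearchUrl(keyword), {
    headers: { Authorization: `Client-ID ${accessKey}` },
  });
  if (!res.ok) throw new Error(`Unsplash error: ${res.status}`);
  return res.json(); // shape: { results: [...] }, consumed by getRandomImageUrl
}
```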

The insertion that looks handmade

Now comes the technically most satisfying part. I have to insert these images into the markdown article, but not just any way. They have to go right after the correct heading, without duplicating if an image already exists, and with appropriate alternative text.

[SUGGESTED IMAGE: Code of the "Add images to the content" node showing the insertion logic]

What the code node does is the following:

First it normalizes all section titles. Because "SEO Best Practices" and "seo best practices" and "SEO Best Practices!!!" should match as the same section. So I remove accents, punctuation marks, capitalization, everything. I convert it to a super simple version to compare.
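A minimal sketch of that normalization:

```javascript
// Strip accents, punctuation, and case so headings match regardless
// of how they were written.
function normalizeTitle(title) {
  return title
    .normalize("NFD")                // split accented chars: é -> e + combining mark
    .replace(/[\u0300-\u036f]/g, "") // drop the combining marks
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, "")     // remove punctuation
    .replace(/\s+/g, " ")
    .trim();
}
```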

Then it maps each normalized title with its corresponding image. It's like creating a dictionary where the key is the clean title and the value is the image found on Unsplash.

Finally it goes through the article line by line looking for H2 headings. When it finds one, it checks if it has an image for that section. If it does and there isn't already an image immediately after, it inserts the image in markdown format with the alternative text and section title. If there's already an image, it skips it. We don't want visual overload.
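The whole insertion pass can be sketched like this, assuming the keys of the image map were normalized the same way as the headings (a simplified stand-in for the actual node code):

```javascript
// Walk the markdown line by line; after each H2 with a mapped image,
// insert it unless an image already follows that heading.
function insertImages(markdown, imageMap, normalize) {
  const lines = markdown.split("\n");
  const out = [];
  for (let i = 0; i < lines.length; i++) {
    out.push(lines[i]);
    const match = lines[i].match(/^##\s+(.*)/);
    if (!match) continue;
    const img = imageMap[normalize(match[1])];
    const next = lines.slice(i + 1).find((l) => l.trim() !== "") ?? "";
    // Skip if no image is mapped, or one is already there.
    if (!img || next.trim().startsWith("![")) continue;
    out.push("", `![${img.alt}](${img.url})`);
  }
  return out.join("\n");
}
```

Note that the function is idempotent: running it twice over the same article doesn't duplicate images, which is exactly the "skip if there's already an image" rule.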

The result is satisfying. Each article ends up with perfectly placed images that really feel relevant to the content. And the best part is that everything happens automatically without manual intervention.

Because aesthetics also matter

Besides content images, each article also needs a cover that isn't the same boring template. So I implemented a system that generates unique covers using geometric patterns and Tailwind CSS color palettes.

The system has 7 different patterns (Memphis, abstract, topographic, fluid...). It randomly selects one, then chooses a Tailwind color palette, and from that palette takes two colors that have sufficient contrast between them. The result is that each article has its own visual identity without any looking the same.
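A sketch of that selection logic, using a WCAG-style contrast ratio. The pattern list, hex values, and the minimum ratio of 3 are illustrative, not the actual configuration:

```javascript
const patterns = ["memphis", "abstract", "topographic", "fluid"];
const palette = ["#0f172a", "#38bdf8", "#fde047", "#14532d", "#f8fafc"];

// Relative luminance of a #rrggbb color (sRGB linearization).
function luminance(hex) {
  const n = parseInt(hex.slice(1), 16);
  const toLin = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * toLin(n >> 16) + 0.7152 * toLin((n >> 8) & 255) + 0.0722 * toLin(n & 255);
}

// WCAG contrast ratio between two colors (1 to 21).
function contrast(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Random pattern plus two sufficiently contrasting palette colors.
function pickCover() {
  const pattern = patterns[Math.floor(Math.random() * patterns.length)];
  let a, b;
  do {
    a = palette[Math.floor(Math.random() * palette.length)];
    b = palette[Math.floor(Math.random() * palette.length)];
  } while (a === b || contrast(a, b) < 3);
  return { pattern, colors: [a, b] };
}
```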

There's something satisfying about seeing articles with covers that really feel unique.

The invisible editor that improves everything

After the article is generated, with images, with SEO, with nice cover... I'm still not done. Because here comes my favorite part: automated editorial improvement.

Basically I tell a second GPT agent to review the complete article, keep it with the same author tone, but extend it where it makes sense, remove redundancies, add new sections if it finds gaps, and generally make it better. It's like having an editor who reviews your work but never gets tired.

The rules are specific. It can't invent data. It can't change the main concept. It must respect the author's tone religiously. And it has a limit of maximum 5 paragraphs per section because nobody wants to read an essay inside another essay.

I also tell it to remove the first image from the article because the cover is already visual enough and I don't need overload from the start.

What I love about this stage is that it takes an article that's already good and makes it really good. It adds context where it's missing, expands interesting ideas, suggests best practices. And all without losing the author's original voice.

The nodes behind the flow

Triggers

  • Form Trigger: Web form to create articles manually

  • Execute Workflow Trigger: For scheduled execution or from other flows

Logic and control

  • Switch: 6 nodes to manage dev/prod routes at different stages

  • Code: JavaScript nodes for custom logic (formatting, random selection, image insertion)

Data

  • Notion: Gets concepts and configuration data

  • Postgres: 4 queries for authors and categories in both environments

AI and content

  • AI Agent with GPT-4.1: Article generation, SEO, image analysis, final improvement

  • Structured Output Parser: 3 parsers to format AI responses in JSON

External services

  • HTTP Request: 14 nodes for communication with Strapi API and Unsplash

  • Strapi: Headless CMS to store articles

  • Unsplash: API for image search

The complete data flow

Concept (Notion/Form)
    ↓
Environment selection (Switch)
    ↓
Author + Category (Postgres)
    ↓
Article generation (GPT-4.1)
    ↓
Content cleanup (Code)
    ↓
Initial publication (Strapi API)
    ↓
SEO generation (GPT-4.1)
    ↓
Metadata update (Strapi API)
    ↓
Related articles (Strapi API + Code)
    ↓
Section analysis (GPT-4.1)
    ↓
Section filtering (Code)
    ↓
Image search (Unsplash API)
    ↓
Selection and mapping (Code)
    ↓
Content insertion (Code)
    ↓
Cover generation (Code)
    ↓
Update with images (Strapi API)
    ↓
Editorial improvement (GPT-4.1)
    ↓
Final update (Strapi API)

Real results

You can see examples of generated content:

  1. Building Trust and Visibility for Digital Brands

  2. Integrate Offline Marketing Strategies

What I like most about these articles is that they have personality. Each one feels written by a real person, not a bot. They're visually attractive, with images that are well chosen and strategically placed. The structure is coherent, with clear headers, well-crafted paragraphs, and appropriate use of Markdown elements. The SEO is optimized with well-thought metadata that doesn't sound forced. And the length is just right, not too short or unnecessarily long.

Why this approach works

It's not just generating text

The difference with other automation solutions is that this isn't just "asking GPT to write an article". It's a complete system that:

  • Manages multiple author voices

  • Curates visual content

  • Optimizes for SEO

  • Maintains coherence with existing content

  • Reviews and improves its own output

Scalable and maintainable

The flow can run indefinitely:

  • Scheduled every 3 days

  • Without manual intervention

  • With different authors and categories

  • Adapting to existing content

Flexible

I can adjust any part of the process:

  • Change AI prompts

  • Add new authors

  • Modify image selection criteria

  • Add more validation steps

Lessons learned

Prompts are critical. I spent a lot of time refining each GPT prompt, and small changes make a big difference. Specifying "maximum 5 paragraphs per section" avoids endless sections. Saying "DON'T use 'Example:' in headers" eliminates robotic patterns. Including good format examples in the prompt improves output significantly.

Structure matters more than I thought. Having clearly separated stages (generation, cleanup, publication, SEO, images, improvement) makes the flow easier to debug, simpler to modify, and more reliable in each execution. Breaking down the process into distinct phases was crucial for maintainability.

The development environment is essential. Being able to test everything in development before publishing to production saved me from many errors. The Switch nodes that divide the flow between dev and prod environments were the best architectural decision I made.

AI needs rich context. It's not enough to say "write an article". The system needs the platform concept (VITRi), author biography, specific writing tone, category and context, and format examples. The more context you provide, the better output you get.

Conclusion

Building this flow took time, but it was totally worth it. Now I have a system that produces quality content every 3 days, maintains personality and authentic voice, includes relevant images automatically, is optimized for SEO, and requires zero manual intervention.

If you're thinking about automating your content, my advice is: don't settle for generic solutions. n8n gives you the power to build exactly what you need. You just need to understand your current process well, divide it into clear steps, build and test iteratively, and refine AI prompts along the way.

Automation doesn't have to sacrifice quality. With the right architecture, you can have both.