How I automated my entire blog with Claude API and GitHub Actions
Three months ago, I published my last manually written blog post. Since then, my blog has been running on autopilot — Claude API generates posts from a topic pool, GitHub Actions orchestrates everything, and the content flows through Hugo to AWS S3 without me touching a single file. Here’s exactly how I built this system for imyuvii.com.
The architecture I landed on
The full pipeline looks like this: a topics.json file holds my content queue, a Node.js script calls Claude API to generate a post, Hugo builds the static site, AWS S3 hosts it, CloudFront serves it globally, and another script posts to LinkedIn. GitHub Actions ties it all together on a cron schedule.
I didn’t start with this architecture. Initially, I thought I’d just call Claude from a local script and manually deploy. But once I had the generation working, the obvious next step was full automation. One thing led to another.
topics.json → Claude API → Markdown file → Hugo build → S3 sync → CloudFront invalidation → LinkedIn post
The entire flow runs every 3 days at 6 AM IST. No human intervention required.
Why a topic pool instead of letting Claude decide
Early on, I let Claude pick what to write about. Bad idea. I got three posts about “10 VS Code extensions every developer needs” in a row. The model optimizes for what it thinks will perform well, which means repetitive, safe content.
Now I maintain a topics.json file with 30-40 pre-approved topics:
```json
{
  "topics": [
    {
      "id": "claude-blog-automation",
      "title": "How I automated my entire blog with Claude API and GitHub Actions",
      "context": "First-person account of the full stack...",
      "status": "pending"
    }
  ]
}
```
The generation script picks the first pending topic, generates the post, then marks it completed. I review and add new topics once a month. This gives me editorial control without daily involvement.
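Here's a trimmed-down sketch of what that step looks like in scripts/generate-post.js, using the official @anthropic-ai/sdk. The model id, prompt, and content/posts/ output path are simplified placeholders, not the exact script I run:

```js
// scripts/generate-post.js (simplified sketch; assumes an ESM project so top-level await works)
import fs from "node:fs";
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // picks up ANTHROPIC_API_KEY from the environment

const pool = JSON.parse(fs.readFileSync("topics.json", "utf8"));
const topic = pool.topics.find((t) => t.status === "pending");
if (!topic) process.exit(0); // nothing queued, nothing to do

const response = await anthropic.messages.create({
  model: "claude-sonnet-4-5", // placeholder model id
  max_tokens: 4096,
  messages: [
    {
      role: "user",
      content: `Write a blog post titled "${topic.title}".\nContext: ${topic.context}`,
    },
  ],
});

// Hugo front matter + generated body (front matter fields simplified here)
const markdown = `---
title: "${topic.title}"
date: ${new Date().toISOString()}
---

${response.content[0].text}
`;
fs.writeFileSync(`content/posts/${topic.id}.md`, markdown);

// Mark the topic done so the next run picks a fresh one
topic.status = "completed";
fs.writeFileSync("topics.json", JSON.stringify(pool, null, 2));
```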
The handoff file between generation and publishing
Here’s something that took me a few iterations to get right. The Claude script generates a Markdown file, but the LinkedIn script needs to know what was just created — the title, slug, and description. I couldn’t assume the filename because it’s generated dynamically.
My solution: a last-generated.json handoff file.
```json
{
  "slug": "how-i-automated-my-blog-with-claude-and-github-actions",
  "title": "How I automated my entire blog with Claude API and GitHub Actions",
  "description": "I built a system where Claude writes my blog posts...",
  "generatedAt": "2026-04-17T06:00:00+05:30"
}
```
The generation script writes this file. The LinkedIn script reads it. Simple, but it took me embarrassingly long to realize I needed this intermediary step.
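In code, it's just a write on one side and a read on the other. Roughly like this, with the slug generation and the actual LinkedIn call elided:

```js
// In scripts/generate-post.js, right after the Markdown file is written
import fs from "node:fs";

function writeHandoff({ slug, title, description }) {
  const handoff = { slug, title, description, generatedAt: new Date().toISOString() };
  fs.writeFileSync("last-generated.json", JSON.stringify(handoff, null, 2));
}
```

```js
// In scripts/post-linkedin.js, the first thing it does
import fs from "node:fs";

const post = JSON.parse(fs.readFileSync("last-generated.json", "utf8"));
const postUrl = `https://imyuvii.com/posts/${post.slug}/`; // assumes Hugo's default /posts/<slug>/ permalink
// ...build the LinkedIn share from post.title, post.description, and postUrl
```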
Two CloudFront distributions — here’s why
I run two separate CloudFront distributions for imyuvii.com. The first serves the main site from S3. The second handles www to apex domain redirects.
Why not just a single distribution with multiple origins? Because S3 website endpoints and S3 REST API endpoints behave differently with CloudFront. The website endpoint handles redirects and index documents correctly, but you can’t use it with Origin Access Control. The REST API endpoint works with OAC but doesn’t do the index.html resolution Hugo needs.
My setup:
- Distribution 1: points to the S3 REST API endpoint with OAC, serves imyuvii.com
- Distribution 2: points to the S3 website endpoint, handles the www.imyuvii.com redirect (sketched below)
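The redirect rule itself lives in the www bucket's website configuration rather than in CloudFront, which just fronts it for HTTPS. Assuming the standard RedirectAllRequestsTo setup, the one-time configuration looks roughly like this (bucket name and region here are assumptions, not my exact values):

```js
// One-time setup: make the www bucket redirect everything to the apex domain
import { S3Client, PutBucketWebsiteCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "ap-south-1" }); // assumed region

await s3.send(
  new PutBucketWebsiteCommand({
    Bucket: "www.imyuvii.com", // assumed: redirect bucket named after the www host
    WebsiteConfiguration: {
      RedirectAllRequestsTo: { HostName: "imyuvii.com", Protocol: "https" },
    },
  })
);
```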
```yaml
# Invalidation after deploy
- name: Invalidate CloudFront
  run: |
    aws cloudfront create-invalidation \
      --distribution-id ${{ secrets.CF_DISTRIBUTION_ID }} \
      --paths "/*"
```
A blanket /* invalidation throws away the entire cache on every deploy, which wouldn’t fly at scale, but for a personal blog with ~50 posts, it’s fine.
IAM policy with actual least privilege
I see a lot of tutorials that just attach AdministratorAccess to their GitHub Actions role. That’s asking for trouble. Here’s the actual policy I use:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::imyuvii-blog",
        "arn:aws:s3:::imyuvii-blog/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::ACCOUNT_ID:distribution/DIST_ID"
    }
  ]
}
```
No s3:GetObject because the action only writes. No wildcard resources. If someone compromises my GitHub secrets, they can mess with my blog but nothing else in my AWS account.
The GitHub Actions workflow
Here’s the actual workflow file, simplified for clarity:
```yaml
name: Generate and Deploy Blog Post

on:
  schedule:
    - cron: '30 0 */3 * *' # 6 AM IST every 3 days
  workflow_dispatch: # Manual trigger for testing

jobs:
  generate-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Generate post with Claude
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: node scripts/generate-post.js

      - name: Build Hugo site
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: 'latest'
      - run: hugo --minify

      - name: Deploy to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: aws s3 sync public/ s3://imyuvii-blog --delete

      - name: Invalidate CloudFront
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CF_DISTRIBUTION_ID }} \
            --paths "/*"

      - name: Post to LinkedIn
        env:
          LINKEDIN_ACCESS_TOKEN: ${{ secrets.LINKEDIN_ACCESS_TOKEN }}
        run: node scripts/post-linkedin.js

      - name: Commit updated topics.json
        run: |
          git config user.name "GitHub Actions"
          git config user.email "actions@github.com"
          git add topics.json last-generated.json
          git commit -m "Mark topic as completed [skip ci]"
          git push
```
The [skip ci] in the commit message prevents an infinite loop. Without it, the commit would trigger another workflow run.
What I learned building this
The system’s been running for three months now. A few observations:
Claude’s output quality is consistent but not perfect. I still review posts weekly and occasionally make edits. The 80/20 rule applies — automation handles 80% of the work, I handle the rest.
The topic pool is the real editorial lever. The quality of my blog depends entirely on what topics I feed it. Garbage in, garbage out.
LinkedIn automation is the flakiest part. Their API token expires, rate limits are aggressive, and the post format that works keeps changing. I’ve had to fix this three times already.
Costs are negligible. Claude API runs about $0.15 per post. S3 and CloudFront are pennies. The whole system costs less than $5/month.
If you’re thinking about automating your own blog, start with the generation script and run it manually for a few weeks. Once you trust the output, add the deployment. LinkedIn can wait — it’s nice to have but breaks often enough to be annoying.