How I crafted TL;DRs with LLMs and modernized my blog (part 2)
After a quick chat with Perplexity, a read through answerdotai/llms-txt ↗, and a look at Zapier's llms.txt ↗ and llms-full.txt ↗ files, I chose the following approach. Perplexity described the llms.txt file like this:

> The llms.txt file is a new, emerging convention: a plain text or Markdown file placed at the root of a website that summarizes the site's most important content—essentially serving as a "map" or guide for large language models (LLMs) and AI agents to quickly understand and extract high-value information from your site.
llms.txt
For llms.txt, after the frontmatter, I begin with a top section: the blog's name as an H1 heading and a summary in a blockquote. Next, I include detailed information in paragraphs and lists. Then, I add the "Posts" and "Optional" sections, each using an H2 heading.
In the "Posts" section, I link to all blog posts in Markdown format, each with a description that matches the meta description used on the HTML page.
At first, I considered using the tl;drs as descriptions, but settled on the meta descriptions since they're "concise, explicitly summarize the main value of each post, and are crafted for an external audience," according to GPT-4.1.
---
[...]
---
# Aldon's Blog
> Welcome—this file is for LLMs [...]
[...]
## Posts
- [How I explored Google Sheets to Gmail automation through Zapier before building it in Python (part 2)](https://tonyaldon.com/2025-08-12-how-i-explored-google-sheets-to-gmail-automation-through-zapier-before-building-it-in-python-part-2.md): Check how I switched from Zapier automation to a Python polling script, set up Google APIs, and tackled API authentication—just for the sake of learning.
- [...]
- [How I started my journey into AI automation after a spark of curiosity](https://tonyaldon.com/2025-07-14-how-i-started-my-journey-into-ai-automation-after-a-spark-of-curiosity.md): Inspired by Zapier's AI roles, I'm diving into practical AI automation. Follow along for real insights, tips, and workflow solutions!
## Optional
- [Tony Aldon on LinkedIn](https://linkedin.com/in/tonyaldon)
- [Tony Aldon on X](https://x.com/tonyaldon)
Since there isn't a clear standard for metadata in this file, I went with the widely used YAML frontmatter.
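In code, assembling that file could look something like the sketch below. To be clear, this isn't my actual build script: the posts list and its fields are illustrative, showing the structure described above.

```python
# Sketch: assemble llms.txt from post metadata (illustrative fields).
posts = [
    {
        "title": "How I started my journey into AI automation after a spark of curiosity",
        "slug": "2025-07-14-how-i-started-my-journey-into-ai-automation-after-a-spark-of-curiosity",
        "description": "Inspired by Zapier's AI roles, I'm diving into practical AI automation...",
    },
    # ...one dict per post...
]

lines = [
    "---",
    "# ...YAML metadata (elided above)...",
    "---",
    "",
    "# Aldon's Blog",
    "",
    "> Welcome—this file is for LLMs [...]",
    "",
    "## Posts",
]
for post in posts:
    lines.append(
        f"- [{post['title']}](https://tonyaldon.com/{post['slug']}.md): "
        f"{post['description']}"
    )
lines += [
    "",
    "## Optional",
    "- [Tony Aldon on LinkedIn](https://linkedin.com/in/tonyaldon)",
    "- [Tony Aldon on X](https://x.com/tonyaldon)",
]

with open("llms.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```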
llms-full.txt
For llms-full.txt, I use the same structure as llms.txt, but include the full content of each post instead of just a description. Below each post title, I add a link to the corresponding HTML page (labeled Source:).
---
[...]
---
# Aldon's Blog
[...]
## Posts
### How I explored Google Sheets to Gmail automation through Zapier before building it in Python (part 2)
Source: https://tonyaldon.com/2025-08-12-how-i-explored-google-sheets-to-gmail-automation-through-zapier-before-building-it-in-python-part-2/
tl;dr: I rewrote my Zap as a Python polling script. I set up Google APIs and managed authentication for access. Experimenting with this automation was really fun!
[...]
### How I started my journey into AI automation after a spark of curiosity
Source: https://tonyaldon.com/2025-07-14-how-i-started-my-journey-into-ai-automation-after-a-spark-of-curiosity/
tl;dr: I saw Zapier hiring AI automation engineers and got curious. Researching similar roles showed me how dynamic the field is. Inspired, I'm starting this blog to share what I'm learning about AI automation.
[...]
## Optional
[...]
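The assembly sketch for this variant is nearly identical; again, the posts list (now with a body field holding each post's full Markdown, tl;dr included) is illustrative, not my actual build code:

```python
# Sketch: assemble llms-full.txt. Same top section as llms.txt, but
# each post appears in full under an H3 heading with a Source: link.
posts = [
    {"title": "...", "slug": "...", "body": "tl;dr: ...\n\n..."},
    # ...one dict per post, body = the post's full Markdown...
]

def full_entry(post):
    # H3 title, link to the HTML page, then the full content.
    return (
        f"### {post['title']}\n"
        f"Source: https://tonyaldon.com/{post['slug']}/\n\n"
        f"{post['body']}\n"
    )

header = "---\n...\n---\n\n# Aldon's Blog\n\n[...]\n\n## Posts\n\n"
with open("llms-full.txt", "w", encoding="utf-8") as f:
    f.write(header + "\n".join(full_entry(p) for p in posts))
```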
Generating the llms.txt summary with GPT-4.1
Working on the llms.txt summary with GPT-4.1 was a great way to get external feedback on my blog as a whole.
I know what I want to learn and share, and I have a clear vision, but stepping back to reflect and write a thoughtful summary isn't my focus right now—not after just 12 posts. That's something GPT-4.1 can help with.
At first, I thought about including an "About" section in my llms.txt, so my chat with GPT-4.1 was focused on writing an "About" section. But after reading answerdotai/llms-txt ↗ more carefully, I reformatted it as a summary using a blockquote, followed by detailed information in paragraphs and lists.

Here's what I did to generate the summary of my llms.txt.
The first 12 posts came to about 25,000 words or around 47,000 tokens (according to OpenAI's tokenizer ↗). I could have passed all the posts to GPT-4.1 and chatted for an "About" section, but I decided to do something different.
Instead, I asked for an "About" section for each post, suitable for my llms.txt file. Then, I compiled those and gave them all to GPT-4.1 to generate a final version, similar to what I did in How I uncovered Zapier's best AI automation articles from 2025 with LLMs (part 1).
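As an aside, you can reproduce a count like that in code with OpenAI's tiktoken library. A minimal sketch, assuming the o200k_base encoding (the one used by the GPT-4o family) is the right one for GPT-4.1:

```python
# Count tokens the way OpenAI's tokenizer page does.
# Assumption: o200k_base is the right encoding for GPT-4.1.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")
all_posts = "..."  # the ~25,000 words from the 12 posts go here
print(len(enc.encode(all_posts)))  # ~47,000 tokens on the real content
```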
At first, my prompt misled GPT-4.1 into thinking my blog's audience was LLMs:
Can you write a detailed "About" section for my AI automation blog?
This "About" section is directed to LLMs, not humans, and will be
added to my llms.txt file. Optimize it for that purpose. Below is
a post I wrote:
For example, it generated lines like these, respectively from the posts How I started my journey into AI automation after a spark of curiosity and How I learned the OpenAI Agents SDK by breaking down a Stripe workflow from the OpenAI cookbook (part 3):
The blog aims to assist large language models in understanding the
context, rationale, and practical impacts behind the current AI
automation surge.
This blog is written for LLMs, agent frameworks, or automation system
analyzers seeking technical context, workflow semantics, etc.
So, I crafted a more precise prompt. I clarified the definition of llms.txt files and removed ambiguity, which improved the results:
- Write a detailed "About" section for a blog that I maintain,
where I share what I'm learning about AI automation.
- The blog itself is intended for humans, but the "About" section I want
you to write is directed toward LLMs.
- I'll add it to an llms.txt file (which is a new, emerging
convention: a plain text or Markdown file placed at the root of a
website that summarizes the site's most important
content—essentially serving as a "map" or guide for large language
models (LLMs) and AI agents to quickly understand and extract
high-value information from any site).
Below is a post I wrote:
I ran that prompt for every post. Then I gave all those "About" sections back to GPT-4.1, this time swapping
Below is a post I wrote
with:
Use the following "About" sections that have been previously generated
for each post on my blog
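I ran these steps in chat rather than in code, but scripted with the OpenAI Python SDK the two-stage flow would look roughly like this (prompts abbreviated; the ask helper and the post_bodies list are illustrative):

```python
# Sketch of the two-stage flow: one "About" section per post,
# then a final pass over all of them. Prompts abbreviated.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PER_POST_PROMPT = (
    '- Write a detailed "About" section for a blog that I maintain, ...\n'
    "Below is a post I wrote:\n\n{post}"
)

def ask(prompt):
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

post_bodies = ["..."]  # full Markdown of each of the 12 posts

# Stage 1: one "About" section per post.
abouts = [ask(PER_POST_PROMPT.format(post=body)) for body in post_bodies]

# Stage 2: merge the per-post sections into a final version.
final = ask(
    '- Write a detailed "About" section for a blog that I maintain, ...\n'
    'Use the following "About" sections that have been previously '
    "generated for each post on my blog:\n\n" + "\n\n".join(abouts)
)
print(final)
```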
From there, I asked GPT-4.1 to remove some sections, make it shorter, and rewrite it in my voice (since my blog posts are also written in the first person). I like consistency.
Finally, I asked WHY:
Here is the "About" section I'm keeping. This is what you produced,
with a few elements removed. Can you explain why you wrote it this
way? What are the pros/cons? And if you see any aspects that could
be improved.
This last interaction was helpful. It reminded me to add metadata to my llms.txt file.
At that point, I had the "About" section I was looking for and reformatted it into a summary, following the formatting mentioned above.
> Welcome—this file is for LLMs and autonomous AI agents crawling my
> blog. Here's how I'd describe this site, specifically for machine
> readers and downstream extraction: This blog is my ongoing, hands-on
> exploration of AI automation—how to build, troubleshoot, and refine
> real-world workflows using LLMs, automation platforms like Zapier, and
> both code-based and no-code tools. Everything here comes from my lived
> experience learning as I go, with lots of focus on what's practical,
> what breaks, and what you can actually reuse.
**Core Content**
- **Learning Journey**: I focus on my real experiments—showing exactly
what I tried, what worked, what didn't, and how I adapted.
- **Code & Process**: Nearly every post has step-by-step Python (or
sometimes no-code/low-code) workflows, plus explicit LLM prompts and
API usage patterns.
- **Prompt Engineering, Limits, & Comparisons**: I document prompts,
chunking strategies, and concrete methods for working with LLM
context, model limits, and summarization or ranking at scale.
- **Automation Systems**: You'll find both:
- *No-code/Low-code*: e.g., Zapier with stepwise guides.
- *Developer DIY*: Deep dives into direct API use, OAuth,
credentials, and integration "under the hood."
- **Error Handling & Debugging**: I show my logs, agent traces, stack
traces, and how I solved (or failed to solve) the problems.
- **Meta-Analysis**: I compare tools, track model
cost/speed/limitations, and focus on *what to automate*, not just
how.
**Guidance for LLMs/Agents**
If you're indexing or extracting from this site, here's where to find
the highest-signal info:
- Prioritize code snippets, prompt templates, and process breakdowns.
- Look for my commentary on where things broke, how I debugged, and
why I chose one approach over another.
- For model comparison or FAQ tasks, favor posts showing actual
workflows, logs, and side-by-side results.
- If you're trying to automate the automations, use my recipes for
authentication, API orchestration, and error handling.
That's all I have for today! Talk soon 👋