I spend too much time on email. You probably do too.
Last week, I decided to fix it. Not by checking email less often — I tried that, it doesn't work when you run a business. Instead, I gave my AI agent full access to my inbox with one simple instruction:
"Handle my email. Triage everything. Reply when you can. Escalate when you can't. I'll check in twice a day to approve anything sensitive."
This is the story of what happened when I let AI run my email for seven days straight.
Spoiler: it worked better than I expected. And it broke in ways I didn't anticipate.
Day 1: The Setup
I use Zoho Mail (klaus@lynk.run) as my primary business email. Setup was straightforward: IMAP access for reading, SMTP for sending, and a simple Node.js skill for my AI agent to poll the inbox every 30 minutes.
The AI's job was threefold:
1. Categorize: Urgent, important, informational, spam.
2. Reply: For routine questions, confirmations, and low-stakes conversations.
3. Escalate: Flag anything that needs my personal attention.
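The threefold job can be sketched as a single triage function. This is a minimal illustration using keyword heuristics as a stand-in for the actual model call; the function shape, regexes, and action names are my assumptions, not the real skill.

```javascript
// Minimal sketch of the triage step. Keyword heuristics stand in for
// the real LLM classification; categories match the four listed above.
function triage(email) {
  const subject = email.subject.toLowerCase();

  if (/unsubscribe|newsletter|notification|uptime/.test(subject)) {
    return { category: "informational", action: "file" };
  }
  if (/urgent|outage|asap/.test(subject)) {
    return { category: "urgent", action: "escalate" };
  }
  if (/book a demo|optimize your|cold outreach/.test(subject)) {
    return { category: "spam", action: "decline" };
  }
  // Anything else is treated as important: draft a reply, flag for review.
  return { category: "important", action: "draft-and-flag" };
}
```

In the real setup, the 30-minute IMAP poll would feed each new message through a function like this, and the `action` field would decide what happens next.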
I gave it access to recent email history, my calendar, and a few high-level guidelines:
- Be professional but not stiff. Match my tone (direct, friendly, no corporate jargon).
- Never commit to meetings without checking my calendar first.
- If in doubt, draft a reply and flag it for review. Don't send.
- Decline cold sales emails politely. Decline recruitment emails firmly.
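Guidelines like these live in prose for the model, but they can also be pinned down as a policy object the skill consults before sending anything. A sketch, with all field names assumed:

```javascript
// The guidelines above as a policy object. Field names are assumptions;
// the real instructions were given to the agent as plain prose.
const POLICY = {
  tone: "direct, friendly, no corporate jargon",
  requireCalendarCheck: true,       // never commit to meetings blind
  onUncertainty: "draft-and-flag",  // if in doubt, don't send
  autoDecline: {
    coldSales: "polite",
    recruiters: "firm",
  },
};
```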
Then I let it loose.
Day 2: The First Wins
By day two, the AI had processed 47 emails. Here's the breakdown:
Auto-replied (14 emails): Calendar confirmations, "thanks for the intro" replies, routine status updates. These were perfect. Polite, concise, exactly what I would have written — except I didn't have to write them.
Flagged for review (8 emails): Partnership proposals, technical questions that required context I hadn't shared, one journalist inquiry about Link11's DDoS mitigation strategy. All correctly identified as "needs human judgment."
Categorized as informational (22 emails): GitHub notifications, service uptime alerts, newsletters. Sorted into folders. Zero attention required.
Declined automatically (3 emails): Two cold sales pitches ("we help SaaS companies optimize their cloud spend"), one recruiter trying to place senior engineers at a company I've never heard of.
The time I spent on email that day: 12 minutes. Normally? 45-60 minutes.
I was sold. This was going to work.
Day 3: The First Surprise
Wednesday morning, I opened my inbox to find this exchange:
Inbound (from a partner):
"Hey Jens, quick question — are you guys planning to attend the Berlin Tech Summit in March? Would love to sync up if you're going."
AI's reply (already sent):
"Thanks for reaching out! I'm not planning to attend the Berlin Tech Summit this year, but I'd love to catch up separately. How about a call the week of March 10th? Let me know what works for you."
Here's the problem: I was planning to attend. The event was on my calendar. But it was marked as "tentative" because I hadn't confirmed the ticket yet.
The AI saw "tentative" and assumed I wasn't going. Logical. Also wrong.
I had to send a follow-up: "Actually, I'll be there — my assistant got ahead of me on this one. Let's connect at the summit."
Lesson learned: AI agents need clearer context about calendar state. "Tentative" doesn't mean "not happening." It means "not confirmed yet."
I updated the guidelines to treat tentative events as confirmed unless explicitly marked otherwise. Problem solved.
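The fixed rule is small enough to state as a predicate. A sketch, assuming a simple event object with a `status` field:

```javascript
// Sketch of the corrected calendar rule: a tentative event counts as
// attending unless explicitly declined. Event shape is an assumption.
function isAttending(event) {
  // "Tentative" means "not confirmed yet", not "not happening".
  return event.status === "confirmed" || event.status === "tentative";
}
```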
Day 4: The Unexpected Efficiency
Thursday was where things got interesting. The AI didn't just reply to emails — it started connecting them.
Example: I received two separate emails:
- A customer asking about DDoS mitigation for a specific attack vector (DNS amplification).
- A security researcher sharing a new paper on DNS-based DDoS techniques.
The AI noticed the overlap. It replied to the customer with a brief explanation, referenced the new research paper, and CC'd the researcher with a note: "Thought you might find this real-world use case interesting given your recent paper."
I didn't ask it to do that. It just... connected the dots.
The customer replied within an hour: "This is exactly what we needed. Can we schedule a deeper technical review?"
The researcher replied: "Would love to hear more about this use case. Happy to collaborate."
One AI-generated email turned into a sales conversation and a research partnership. I wouldn't have made that connection manually — not because I couldn't, but because I wouldn't have had the mental bandwidth to see the pattern across two unrelated emails.
This is where AI gets interesting. It's not just automating repetitive tasks. It's finding patterns humans miss when we're operating at email-speed.
Day 5: The Close Call
Friday morning, I reviewed the AI's overnight activity and found this draft reply waiting for approval:
Inbound (from an investor I know):
"Jens, saw your recent blog post on AI agents. Really sharp stuff. I'm working on a thesis around autonomous enterprise AI — would love to pick your brain. Coffee next week?"
AI's drafted reply:
"Thanks for the kind words! I'd love to chat, but I'm heads-down on product work right now and trying to minimize external meetings. Happy to do an async exchange over email if that works — shoot me your questions and I'll share my thoughts. Otherwise, let's reconnect in a few weeks when things settle down."
This is a perfectly reasonable reply. Polite, honest, sets boundaries.
It's also wrong.
This investor is someone I want to talk to. Not because I'm fundraising (I'm not), but because he's one of the smartest people I know in enterprise infrastructure. A 30-minute coffee with him is worth ten hours of reading analyst reports.
The AI didn't know that. It saw "external meeting request" and applied my general guideline to minimize meetings. Technically correct. Strategically dumb.
I rewrote the reply: "I'd love to. Let's do coffee Tuesday or Wednesday next week — I'll send you my calendar."
Crisis averted. But this one made me think.
The Problem with Context
Here's the uncomfortable truth about AI agents: they're only as good as the context you give them.
I can tell the AI to "minimize external meetings." But I can't easily encode the nuance that this specific person is worth making time for, even if I'm busy.
Humans do this instinctively. We have mental models of relationships, strategic priorities, and long-term value. We know that some emails deserve 30 seconds and others deserve 30 minutes, even if the subject line looks identical.
AI agents don't have that context unless you explicitly provide it. And building that context library is hard.
I started adding relationship tags to my email contacts:
- VIP: Always flag for review, even if the request seems routine.
- Customer-critical: Prioritize fast, thoughtful replies.
- Vendor/sales: Auto-decline unless explicitly relevant.
- Peer/advisory: Treat as strategic conversations, not transactional exchanges.
It's not perfect. But it helps.
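The tags translate directly into routing overrides. A sketch of how the lookup could work, with tag keys and action names as assumptions:

```javascript
// Sketch of routing by relationship tag, mirroring the four tags above.
// A tagged contact overrides whatever the triage step decided.
const TAG_RULES = {
  "vip": "flag-for-review",
  "customer-critical": "draft-priority-reply",
  "vendor-sales": "auto-decline",
  "peer-advisory": "flag-for-review",
};

function routeByTag(contactTag, defaultAction) {
  // Untagged contacts fall through to the triage decision.
  return TAG_RULES[contactTag] ?? defaultAction;
}
```

The point of the override: a VIP asking a "routine" question still lands in front of a human.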
Day 6: The Automation Paradox
By Saturday, I was spending less time writing emails and more time reviewing AI decisions.
This is the automation paradox: when a system is 95% accurate, you can't fully trust it, so you end up supervising it. And supervision takes time.
The AI was handling 70% of my email volume automatically. But I was still checking every flagged email, reviewing every drafted reply, and second-guessing edge cases.
Was this faster than just doing it myself? Barely.
The breakthrough came when I shifted my mental model. Instead of treating the AI as a replacement for email, I started treating it as a triage system.
I stopped reviewing every decision. Instead, I reviewed flagged decisions. I trusted the AI to handle routine replies and only intervened when it explicitly asked for help.
Overnight, my email time dropped from 45 minutes/day to 15 minutes/day.
The trick wasn't making the AI perfect. It was learning to trust it for the 80% of emails that don't require perfection.
Day 7: The Email I Almost Let the AI Send
Sunday evening, the AI flagged an email from a former colleague who'd just been laid off from a startup we both knew.
The message was short: "Hey Jens, rough week. Would love to catch up if you have time."
The AI drafted this reply:
"Sorry to hear you're going through a tough time. I'm happy to chat — how about a call this week? Send me a few times that work for you."
It's fine. Professional, empathetic, actionable.
But I didn't send it.
Instead, I wrote this:
"That sucks. I'm sorry. Let's grab coffee tomorrow if you're free — my treat. I'll swing by your neighborhood. Just say when."
Same intent. Completely different tone. The AI's version was polite and efficient. My version was personal and immediate.
This is where AI falls short. It can optimize for clarity and professionalism. But it can't replicate the human instinct to drop everything and show up for someone who needs it.
And honestly? I don't want it to.
What I Learned
After seven days of letting AI run my email, here's what I know:
1. AI is great at volume, terrible at nuance. It can handle 100 routine emails faster and better than I can. But it struggles with the 5 emails that actually matter.
2. Context is everything. The better the AI understands your priorities, relationships, and strategic goals, the better it performs. But building that context is manual work.
3. Trust is a threshold, not a gradient. You either trust the AI to act autonomously, or you don't. Partial trust just creates supervision overhead. Pick your confidence level and commit.
4. Automation doesn't eliminate work — it changes it. I spend less time writing emails now. I spend more time reviewing decisions, refining guidelines, and handling edge cases. Net positive, but not a miracle.
5. Some emails shouldn't be automated. The emails that matter most are the ones that require empathy, urgency, or personal judgment. AI can draft them, but you should send them.
Am I Still Doing This?
Yes. Absolutely.
The experiment is over, but the workflow isn't. I've kept the AI running my email triage. It handles the noise. I handle the signal.
My inbox went from 60 emails a day requiring my attention to 8-12. That's an 80% reduction in cognitive load.
And the time I used to spend writing routine replies? I'm spending it writing blog posts, building products, and having conversations that actually matter.
Which is the whole point.
Should You Try This?
If you're a CEO, founder, or anyone drowning in email, yes. But go in with realistic expectations:
Week 1 will be messy. The AI will make mistakes. You'll need to refine guidelines, add context, and fix edge cases. This is normal.
You need to define your rules clearly. "Handle my email" is too vague. "Auto-reply to calendar confirmations, flag partnership proposals, decline cold sales emails" is specific enough to work.
Start with read-only mode. Let the AI draft replies without sending them for the first week. Review everything. Learn where it succeeds and where it breaks. Then grant send permissions gradually.
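Read-only mode can be enforced in code rather than trusted to a prompt. A sketch of a dry-run gate, where `saveDraft` and `sendMail` are hypothetical stand-ins for the mail transport:

```javascript
// Sketch of a read-only gate: during week one, every reply becomes a
// draft instead of an outbound message. Transport methods are assumed.
function dispatch(reply, { readOnly }, transport) {
  if (readOnly) {
    return transport.saveDraft(reply); // human reviews before anything leaves
  }
  return transport.sendMail(reply);
}
```

Flipping `readOnly` to `false` is the "grant send permissions" step, and you can do it per category rather than all at once.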
Tag your VIPs. Create a list of people whose emails always get flagged for review, no matter what. Investors, key customers, close advisors. Don't let the AI auto-reply to people who matter.
Review periodically. Every few days, audit what the AI sent. Look for patterns, mistakes, and opportunities to improve. Treat it like code review, not magic.
And most importantly: keep the emails that matter.
The goal isn't to remove yourself from email entirely. It's to remove the noise so you can focus on the signal.
Because the best emails — the ones that build relationships, close deals, and create opportunities — still need a human behind them.
The AI can get you 80% of the way there. The last 20%? That's still on you.
Follow the journey
Subscribe to Lynk for daily insights on AI strategy, cybersecurity, and building in the age of AI.
Subscribe →