live · 6:00 AM Pacific · daily

Wire

Local-first · Python 3.13 + Ollama + Playwright + SQLite

Built for:
One person — me. The willpower budget for “post on LinkedIn every morning” is finite; a cron has no willpower budget.
Not built for:
Anyone who needs a multi-tenant social scheduler. This is a single-user tool that owns one account and posts on its behalf.

The useful kind of automation is the kind that runs on a cron, not on willpower. Wire compiles two or three stories worth discussing, edits them to sound the way I write, and posts a single brief to LinkedIn at 6 AM Pacific on weekdays. The hard part wasn’t the posting; the hard part was the editing.

§ I

The problem

A daily LinkedIn post is a small commitment that compounds badly. Skip a week and the algorithm forgets you exist; skip a month and you’re back to zero. Doing it well — finding genuinely interesting AI news, distilling it, writing it in a real voice — is twenty minutes a day. Doing it poorly is worse than not doing it. The middle ground is exhausting.

Wire moves the work to a cron. Research, generate, self-review, post — one brief at 6 AM Pacific, weekdays only. By the time I’m at the desk, the day’s post has shipped. The willpower budget I would have spent goes to the actual work.

§ II

Decisions

Four calls that shaped Wire and what each one cost.

  1. flipped

    From Firecrawl to Google News RSS for source discovery. Firecrawl was the better tool but I was burning through credits faster than the value justified. RSS is free, infinite, and the dataset I want — recent AI news headlines — is exactly what RSS was built to publish.

  2. cut

    The Anthropic API for editing. The Max plan blocks API tokens issued to non-Claude-Code clients, which I respect; rather than fight it, I moved editing to a local qwen3:8b running on Ollama. The post quality was a step down for the first week and a step up by week three after I tuned the prompt.

  3. kept

    The self-review loop. After the model drafts a post, a second pass reviews it specifically for factual accuracy against the source articles. Roughly 1 in 8 drafts gets rewritten on review. The whole pipeline takes longer; the embarrassment cost of a wrong claim posted under my name is much higher.

  4. refused

    Auto-generated comments and reactions. The line for me is “automation drafts; human ships.” The post is automated because the cost of skipping a day is real; engagement is human because the cost of fake engagement is also real, just in the other direction.
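The flip in decision 1 is cheap to reproduce: Google News exposes any keyword search as an RSS feed, which is how a tool like this gets recent AI headlines for free. A minimal sketch of building such a feed URL (the query and helper name here are illustrative, not Wire's actual feed list):

```python
from urllib.parse import quote_plus


def google_news_rss_url(query: str, lang: str = "en-US", country: str = "US") -> str:
    """Build a Google News RSS search URL for a keyword query.

    Google News serves search results as RSS at /rss/search;
    hl/gl/ceid pin the edition so results are stable across runs.
    """
    return (
        "https://news.google.com/rss/search"
        f"?q={quote_plus(query)}&hl={lang}&gl={country}"
        f"&ceid={country}:{lang.split('-')[0]}"
    )


# e.g. one of several queries a tool like Wire might poll
url = google_news_rss_url("open source LLM release")
```

Each such URL then drops into the same feed list as any hand-picked RSS source; the reader downstream doesn't care where the XML came from.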

The useful kind of automation runs on a cron, not on willpower.

— Wire design note

§ III

System

One Python process, scheduled by Windows Task Scheduler, writing to a local SQLite database and a persistent Playwright Chromium session that stays logged into LinkedIn. The whole thing runs in ~90 seconds per pass and idles otherwise.

Stack — current pins.
Layer       Implementation            Purpose
Schedule    Windows Task Scheduler    Weekdays · 6 AM Pacific · single daily brief
Research    13 RSS + 6 Google News    Parallel feed pulls; dedup by URL
Edit        Ollama qwen3:8b           Drafts, then self-reviews each draft
Post        Playwright (persistent)   Logged-in Chromium session, posts as me
Store       SQLite (10 tables)        Sources · drafts · review notes · history
Analytics   Daily scrape + ranker     What worked feeds the next day's prompt
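The Research row is the only concurrent stage of the pipeline. A sketch of what "parallel feed pulls; dedup by URL" can look like; the function names are illustrative, and the fetch callable is injected so the sketch stays runnable without a network:

```python
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor
from typing import Callable


def parse_items(rss_xml: str) -> list[tuple[str, str]]:
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [
        (item.findtext("title", ""), item.findtext("link", ""))
        for item in root.iter("item")
    ]


def pull_all(feed_urls: list[str], fetch: Callable[[str], str]) -> list[tuple[str, str]]:
    """Fetch every feed in parallel, then dedup items by link URL.

    `fetch` maps a feed URL to its XML body; first occurrence of a
    link wins, so the same story syndicated across feeds lands once.
    """
    with ThreadPoolExecutor(max_workers=8) as pool:
        bodies = list(pool.map(fetch, feed_urls))
    seen: set[str] = set()
    items: list[tuple[str, str]] = []
    for body in bodies:
        for title, link in parse_items(body):
            if link not in seen:
                seen.add(link)
                items.append((title, link))
    return items
```

`ThreadPoolExecutor.map` preserves input order, so dedup is deterministic run to run; nineteen feeds at two-second fetches fit comfortably inside the ~90-second pass.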
morning_intel/writer/post_writer.py · python · self-review loop
# After the model drafts a post, send it back with the source
# finding for fact-checking. Roughly 1 in 8 drafts gets rewritten
# on review; the embarrassment cost of a wrong claim posted under
# my name is much higher than the cost of a second LLM call.
async def _review_draft(post: Post, findings: list[Finding]) -> str:
    source = _source_finding_for(post, findings)
    if source is None:
        return post.content  # nothing to fact-check against

    prompt = _REVIEW_PROMPT.format(
        finding_title=source.title,
        finding_summary=source.summary,
        finding_url=source.url,
        draft_content=post.content,
    )
    try:
        reviewed = await _call_llm(prompt)
        if reviewed and reviewed.strip():
            logger.info("Post %d reviewed: %s", post.id,
                "unchanged" if reviewed.strip() == post.content.strip()
                else "corrected")
            return reviewed.strip()
    except Exception as exc:
        logger.warning("Review failed for post %d: %s", post.id, exc)
    return post.content
post_review.audit.log · ndjson · pre-post review
{"t":"06:18:02-07","post":1881,"event":"draft_done","words":162,"model":"qwen3:8b"}
{"t":"06:18:04-07","post":1881,"event":"review_start","source":"news.ycombinator.com/item?id=43512..."}
{"t":"06:18:11-07","post":1881,"event":"review_done","verdict":"corrected","cuts":3,"reasons":["word_count","opener_too_abstract"]}
{"t":"06:18:11-07","post":1881,"event":"rules_applied","hashtags":1,"emojis":0,"url_present":true}
{"t":"06:18:13-07","post":1881,"event":"image_attached","path":"images/post_1881.png"}
{"t":"06:18:15-07","post":1881,"event":"posted","platform":"linkedin"}
FIGURE. Self-review on every draft. Verdict, cut count, and reasons logged before the post is even queued. Real reach numbers stay in private analytics.
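The roughly-1-in-8 rewrite figure is recomputable straight from this log. A minimal sketch, assuming one JSON object per line with the `event` and `verdict` fields shown above (the function and its name are illustrative, not Wire's):

```python
import json


def rewrite_rate(lines: list[str]) -> float:
    """Fraction of review_done events whose verdict was 'corrected'."""
    verdicts = [
        event["verdict"]
        for event in map(json.loads, filter(str.strip, lines))
        if event["event"] == "review_done"
    ]
    return sum(v == "corrected" for v in verdicts) / len(verdicts) if verdicts else 0.0
```

Run over a month of `post_review.audit.log`, this is the number that justifies (or retires) the second LLM call.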
FIGURE 1. The morning view — draft on the left, structured review verdict on the right, six-row audit log below. The cron does the work; the verdict makes it shippable.
FIGURE 2. The source side. Nineteen feeds across RSS and Google News, two-hour ingest, every item flagged F → S → K → X (fetched, summarized, kept, discarded). The single red dot is today’s morning — one ingested article became one published post.
FIGURE 3. The archive. Every post the cron has shipped, datelined and engagement-tagged. One card’s dot is red — the morning the page was captured. The rest are the slow accumulation of a willpower-free habit.
§ IV

Running it

Wire owns one LinkedIn account and posts on its behalf — the kind of automation that gains nothing from being multi-user. The setup shape is below in case it’s useful as a reference.

setup.ps1 · powershell
# prerequisites: Python 3.13, Ollama, Chromium via Playwright
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
ollama pull qwen3:8b
python -m morning_intel.auth_setup    # one-time LinkedIn login
schtasks /Create /SC DAILY /TN "Wire" /TR "python -m morning_intel" /ST 06:00

Acknowledgments

Filed under credits, in the order of the byline: the publishers who still ship RSS feeds in 2026, the qwen3 team at Alibaba for weights that hold up at 8B, Ollama for the local serving layer, Playwright for a logged-in Chromium that doesn’t ask questions, and the Python standard library for the rest. A 90-second daily desk job, sourced and edited by software, posted by software, read by humans.

← Index