
Coefficiencies Newsletter Issue 5


Hey, welcome to issue 5 of the Coefficiencies newsletter! I spent the week coming down from a Vegas conference high, tightening up some of my automations, and generally trying to keep the post-conference energy focused on things that will actually ship.

What I’ve Been Up To #

Most of the week centred on the Enterprise Tech Leadership Summit, where every morning in the Fontainebleau’s Royal Ballroom kicked off with Gene Kim before diving into talks like “Scaling AI Adoption Across 3000+ Developers at Booking.com,” “State of AI-assisted Software Development 2025 (DORA Report),” and “Making AI Agents Actually Work for You.” It was a wall-to-wall reminder that AI isn’t just hype—it’s now the baseline expectation for modern engineering teams.

I pulled together my favourite sessions and takeaways in Thoughts About AI from a Tech Conference. Between vibe coding demos and watching seasoned teams treat agents like new teammates, I left inspired (and slightly overwhelmed) by how fast the tooling is evolving.

The travel delays even helped me make progress on the automation backlog: Active Pieces Maintenance in the Airport walks through the rss2social cleanup I finally finished while camped out at the gate, and Automating My Obsidian Vault with Codex, Healthchecks, and Rsync covers the vault tidy-up that followed once I was back home. The DORA State of AI report is still sitting in my read-later queue, but I’m legitimately excited to dig in now that I’ve seen how many teams are treating it like required reading.
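For anyone curious what that vault automation roughly amounts to, here's a minimal sketch in the same spirit: rsync mirrors the vault to a backup location, and a Healthchecks ping records whether the run succeeded. The paths and check URL below are placeholders, not my actual setup, and the details in the full post differ.

```python
#!/usr/bin/env python3
"""Sketch: mirror an Obsidian vault with rsync, then ping Healthchecks."""
import subprocess
import urllib.request

VAULT = "/home/me/Obsidian/Vault/"           # hypothetical source path
BACKUP = "backup-host:/srv/backups/vault/"   # hypothetical rsync destination
CHECK_URL = "https://hc-ping.com/your-uuid"  # hypothetical Healthchecks check

def main() -> None:
    # Mirror the vault; --delete keeps the backup in lockstep with the source.
    result = subprocess.run(
        ["rsync", "-az", "--delete", VAULT, BACKUP],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        # Success ping tells Healthchecks the job ran on schedule.
        urllib.request.urlopen(CHECK_URL, timeout=10)
    else:
        # Hitting /fail flags the check so Healthchecks alerts right away,
        # with rsync's stderr attached as the ping body for debugging.
        urllib.request.urlopen(
            f"{CHECK_URL}/fail", data=result.stderr.encode(), timeout=10
        )

if __name__ == "__main__":
    main()
```

Run it from cron or a systemd timer; if a scheduled run never pings, Healthchecks notices the silence and alerts, which is the whole point of the pairing.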

DORA – Get Better at Getting Better #

DORA remains the gold standard for understanding how elite teams actually deliver software. After all the conference chatter, this is at the top of my weekend reading list.

Slate Truck is a $20,000 American-made electric pickup with no paint, no stereo, and no touchscreen #

This Verge piece on Slate’s minimalist EV is delightful—a no-nonsense two-seater that embraces scratches, ditches infotainment, and still manages to feel futuristic. Perfect reading while killing time between sessions.

GPT-5 Thinking in ChatGPT (aka Research Goblin) is shockingly good at search #

Simon Willison’s write-up sold me on letting GPT-5 Thinking handle deeper research runs. It’s basically the nervous system behind the agent experiments I watched all week.

Final Thought #

I’ve been noodling on a half-finished draft about how weird it can feel to let an AI help with writing. There’s a guilt that sneaks in—am I outsourcing the part I actually enjoy? But the more I experiment, the more it feels like having an editor who never sleeps. The trick is treating the agent as a collaborator: let it handle the mechanical bits, but keep the human voice, the opinions, and the context squarely on my side of the keyboard. That’s the balance I’m chasing heading into the rest of October.

Guess what: this whole thing was written using Codex, as a little experiment with agentic coding this week. Here’s the exact prompt that kicked it off:

hey i have a mission for you. can you look at the folder posts to see some of the recent articles i’ve written called newsletter. Remember that style okay! Now, go through and look at some of the articles where draft is true. Then go and look at my daily notes in Personal/Journal from the last week, and look at Bookmarks I’ve created in the last month or so. Then create an issue of this week’s newsletter in a similar style to other ones that highlights some recent bookmarks, stuff i’ve been up to (infer from posts in the past week as well as notes i’ve done in my daily notes as well as wiki links to notes in those daily notes). At the end of the newsletter say, guess what, this whole thing was written using codex and include this exact prompt and say this was a little experiment with agentic coding this week.