Anthropic's Mythos makes autonomous vulnerability chaining across devices a sudden reality, so I've been thinking about how digital 'antibotty' inoculation networks may be needed far sooner than I expected.
Atproto users need a way to express granular AI preferences and carve out exceptions for specific entities or content types. This post introduces community.lexicon.preference.ai, a lexicon schema that decomposes AI usage into distinct categories and adds a scoped override mechanism built on top of Bluesky's User Intents proposal.
written before brainstorming a readme
Proposing a voluntary, machine-readable AI content disclosure scheme for OCaml spanning opam packages, dune, and per-module attributes, aligned with the W3C AI Content Disclosure vocabulary.
I used Claude Code to build a tool I needed. It worked great, but I was miserable. I need to reckon with what it means.
Publishing the OxCaml Labs year-one review, POSSE and AI content disclosure for the web, adopting the geo-embeddings Zarr convention for TESSERA, action PROPL at PLDI, the death of the grant application, and NASA's new swathe lidar mission.
Community feedback reshaped our Zarr store layout — years became a dimension, shards got bigger, and we retired the TESSERA-specific convention in favour of a shared geo-embeddings standard that also covers other models.
'Maybe I should write more like an LLM,' I said, contrarily.
One cool thing about the app that keeps me alive is that people are always innovating on how to make it better. When I saw this video by Diabetech about a new customization someone built to use AI food search within Loop to count carbs, I was so on it that I had my Loop app updated with this new customization added in before the video was even over (I’m not joking). Thanks to the great instructions, I was able to add the customization to my forked repository in GitHub and build the app in a matt...
About a year ago, my tween kid started talking obsessively about The Amazing Digital Circus. Like many of my tween’s obsessions, I got to listen patiently as she described the plot, the characters, the episodes…my initial reaction was “this sounds pretty freaking out there.” But I watched an episode with her (mostly to make sure this was appropriate viewing material for her age) and was kind of intrigued once I understood what was going on. She explained that the show was loosely based on “a book...
It is easy to get on the AI hype doom bandwagon. Concerns about AI wiping out the entire working class, AI doing entire jobs with no need for humans, and market-analysis pieces about AI with the power to rock the stock market are scary. Not to mention all the articles scaring parents into believing that a traditional education won't prepare their children for an AI-dominated world. I have a tendency to get sucked into the AI doom cycle. It is easy to forget that these videos ...
Last year I read Careless People, by Sarah Wynn-Williams, a tell-all book about the inner workings and people of Facebook/Meta. My thought while reading it was “holy shit, these people are evil.” The overall premise that I took away from her book is that they (the people who run Facebook/Meta and others like them) are unintentionally evil, their evilness a byproduct of insane, rapidly accumulated wealth in a sector and society with no guardrails on wealth or their products. They are essentially c...
How we restructured TESSERA's geospatial embeddings from millions of individual numpy files into sharded Zarr v3 stores for efficient HTTP streaming, enabling everything from single-pixel mobile lookups to regional-scale analysis with just a couple of range requests.
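As a toy sketch of the index arithmetic that makes those "couple of range requests" possible: a pixel coordinate maps deterministically to a shard object and an offset within it, so a client needs no per-file listing. Chunk size, shard grid, and key format below are illustrative assumptions, not the actual TESSERA/Zarr v3 layout.

```python
# Hypothetical layout: an array of shape (height, width, channels),
# chunked (256, 256, C), with each shard holding an 8x8 grid of chunks.
CHUNK = 256              # pixels per chunk edge (assumed)
CHUNKS_PER_SHARD = 8     # chunks per shard edge (assumed)

def locate(row, col):
    """Map a pixel to (shard key, chunk index within shard, offset in chunk)."""
    chunk_r, chunk_c = row // CHUNK, col // CHUNK
    shard_r, shard_c = chunk_r // CHUNKS_PER_SHARD, chunk_c // CHUNKS_PER_SHARD
    shard_key = f"c/{shard_r}/{shard_c}"                       # illustrative key
    inner = (chunk_r % CHUNKS_PER_SHARD, chunk_c % CHUNKS_PER_SHARD)
    offset = (row % CHUNK, col % CHUNK)
    return shard_key, inner, offset

print(locate(5000, 12000))  # -> ('c/2/5', (3, 6), (136, 224))
```

One HTTP range request fetches the shard's chunk index, a second fetches just the bytes of the chunk containing the pixel.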
A guy named René
an article that is talking about claude code ego deathing me
We rebuilt the onboarding experience from scratch — now you can upload your resume and let AI do the heavy lifting, or skip straight to building things yourself.
Summary of the Nine Recommendations and Biodiversity Monitoring Standards Framework papers from the NAS/Royal Society US-UK Forum in summer 2025, and how they connect to my work on collective knowledge systems, TESSERA, and evidence synthesis.
Wading my way through the mess that is programming today
Generalist LLMs are not lawyers, and evaluating them that way is a waste of time. Evaluating LLMs with useful specialized prompts (and eventually, with specialized legal harnesses) is where the work must happen.
An AI-powered Bluesky bot that uses a local Ollama model to generate posts in the style of a source account.
Examining the parallels between art, AI, and the existential threat to programmers
Python tool for analysing .docx files and generating essays using a local Ollama model — now part of the @ewanc26/pkgs monorepo.
TESSERA paper accepted at CVPR 2026, went to the AI Impact Summit, OCaml Zarr hacking, Shriram's talk on human factors of formal methods, and discussions on teaching OxCaml to agents.
Trip report from the Indian AI Impact Summit in New Delhi, covering the massive expo, a conversation with Yann LeCun, a hackathon/talk at IIT-Delhi, networking at the British High Commission, and reflections on the summit declaration's shift from safety to progress and equitable access.
writes <2% as many bytes as Opus 4.6
Ultimately, the cloud and AI industries are about robbing you of computing power and selling it back at exorbitant rents.
First TESSERA hackathon held at the Indian AI Impact Summit in Delhi, exploring integration with IIT-Delhi's CoRE Stack for geospatial analysis and testing TESSERA labeling workflows.
A world model learns to predict (state, action) → next state. Google's Genie 3 uses this at massive scale to generate interactive worlds from images. This demo shows the core challenge: prediction errors compound. Two balls start identical — one follows true physics, the other uses a 'learned model' that adds noise to each prediction. Watch divergence accumulate frame by frame. The graph shows how small errors become large ones through autoregressive generation.
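The compounding-error mechanism the demo illustrates can be sketched in a few lines: a "learned model" that is the true dynamics plus tiny per-step noise, rolled out autoregressively so each prediction is fed back as input. All names and noise values here are illustrative, not the demo's actual code.

```python
import random

def true_step(pos, vel, dt=0.1, g=-9.8):
    # Exact ballistic update: the "ground truth" physics.
    vel = vel + g * dt
    return pos + vel * dt, vel

def learned_step(pos, vel, noise=0.01):
    # Stand-in for a learned world model: true physics plus a small
    # Gaussian prediction error on each output (assumed magnitude).
    p, v = true_step(pos, vel)
    return p + random.gauss(0, noise), v + random.gauss(0, noise)

random.seed(0)
true_state = model_state = (10.0, 0.0)
divergence = []
for _ in range(100):
    true_state = true_step(*true_state)
    # Autoregressive rollout: the model consumes its own last prediction.
    model_state = learned_step(*model_state)
    divergence.append(abs(true_state[0] - model_state[0]))
```

Even though each individual step is nearly exact, velocity errors integrate into position errors, so `divergence` grows with rollout length — the frame-by-frame accumulation the two-ball demo visualizes.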
A biomimetic model of the corticostriatal loop discovered neurons that predict errors before they happen. About 20% of the neural population are 'incongruent' — their activity doesn't match the dominant decision signal. When researchers checked real animal data, the same pattern was hiding there, overlooked for years. These neurons maintain alternatives, enabling cognitive flexibility when the world changes. The model as scientific instrument, finding what humans missed.
AI-planned Mars rover navigation. In December 2025, NASA's Perseverance drove 456 meters on routes planned entirely by Claude AI models — analyzing HiRISE orbital imagery to identify boulder fields and sand ripples, generating waypoints to navigate safely. The critical challenge: positional uncertainty grows with distance. By 655m, the rover could be 33m from where it thinks it is. Validated through 500,000 telemetry variables on JPL's digital twin before transmission to Mars.
Hosting the UK chief scientists for nature conservation at Pembroke to discuss TESSERA and AI for biodiversity, followed by the Conservation Evidence conference where I talked about choosing the open red pill over black-box AI for conservation decision-making.