made using Leaflet
Lairland HQ Meta
DRAFT



I am publishing this guideline to keep myself accountable, as well as to provide groundwork toward transparent and responsible use of generative AI in an era where AI agents are getting more capable while the risks of mis/disinformation and fraud are getting higher and more imminent. I know some parts of this may be painful for creative people, but I am open to dialogue and feedback to address their concerns and improve this guideline.

Note that I will be applying this to my open-source projects, whether run personally at Lairland HQ (my virtual hacker- and makerspace) or as part of @recaptime.dev and the other places where I contribute my time, skills and expertise to open-source communities, to define where AI use is permissible while maintaining the "human in the loop" principle. As a college student, I am especially curious about how the education and research communities handle generative AI, so hit me up if you want to talk.

First, let's address the risks on my side and the mitigations I've made

As an Autistic open-source developer and writer who has been chronically online from an early age, mainly as a coping mechanism for the inner scars of autistic trauma and internalized ableism, and for the intersection of being a middle child and neurodivergent/disabled in Filipino society (alongside the horrors of suicidal ideation and the depressing state of suicide data among autistics in other countries, since in-depth data for the Philippines do not exist for numerous reasons), I understand that I have a higher risk of experiencing AI-induced psychosis. That risk grows when you waterboard yourself (I mean bombard and stare for too long) with AI-generated "hellscapes" (emphasis mine) on DeviantArt to become familiar with them and avoid them elsewhere, including on Twitter (now X), where blatant abuse of Grok's image generation feature is publicly visible, especially its use to generate nudity-laden images involving women and children.

One of the things I did to mitigate the exhaustion of staring at these (including what some might call "gooner content") is minimizing AI-generated art and music in my Instagram home feed, Explore and Reels tabs by explicitly marking them as "show less" under -> Your algorithm. Another is my ongoing work toward switching to open-source and self-hosted platforms through the Open Social Web, mainly the fediverse and the Atmosphere.

Alongside algorithmic tweaking and migrating to the open social web, I purposefully follow more human artists, musicians, writers and other creatives rather than the AI bros (mainly those who make stuff with purely generative tools) who suck up to the Big Tech companies trying to choke everyone with unwanted AI tools. I also follow AI ethicists, to better understand the ethical risks of generative AI in everyday life, and experts in cutting-edge AI research.

Tools I use

I simply use GitHub Slopilot with a mix of models (mostly in Auto mode, but GPT-5 mini for regular work and light usage, and Gemini 3 Pro for documentation and extended dev work, plus bug fixes and refactoring) as part of the GitHub Education Student Dev Pack's GitHub Pro + Slopilot Pro perk. I have yet to try Amp Code on their Free tier (hint: /mode free), alongside other AI tools aimed at developers. I prefer to use these as a sidecar/Slopilot for my dev work rather than committing the cardinal sins of security and code maintainability when vibe coding (aka I don't vibe code from scratch).

I do not have Character.AI (nor its copycats) installed on my phone, nor Meta AI (as a standalone app) or Sora. It's just the Google Gemini (web)app with the Thinking/Pro models on a daily basis, to better understand its internals, to go deeper into rabbit holes, to understand things better, and sometimes as my virtual therapist (with a regular content warning and advice about getting professional mental health help, but that's a topic for another day). I rarely use Meta AI in Facebook Messenger and WhatsApp, since Google's ecosystem (YouTube, Gmail + Google Workspace + Google One, and Gemini [formerly Bard, ICYMI]) is way stickier for me than Meta Platforms (via Facebook and Instagram/Threads), Microsoft ("Microslop", via Office/M365 and GreedYbox for Minecraft gamers) and Apple (via iCloud) combined.

Since early 2026, I have also been trialing Anthropic's Claude models through the web and mobile apps, and the Gemini CLI in Firebase Studio (formerly Project IDX).

Disclosing AI use in blog posts and socials

Where possible, I simply place a content warning at the top stating that parts of the work were created with AI tools, or that such tools were used during the drafting and research process, alongside the tool and model in question. I will be more than happy to share the chat thread where that happened, licensed under the same CC BY-SA 4.0 International license as the rest of my human work, for greater transparency and to allow anyone to remix it.
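As a sketch, such a content warning might read like this (the wording, tool name and link placeholder are illustrative, not a fixed template):

```
Content warning (AI use): Parts of this post were drafted with the
help of the Google Gemini (web)app; the final wording and edits are
my own. Chat thread: [link]. Licensed under CC BY-SA 4.0, same as
the rest of my human work.
```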

For code snippets, I am still deciding between MIT and the Unlicense right now.

Disclosing AI use via commit messages

I will be adopting @xeiaso.net's Assisted-by commit trailers whenever AI tools are used, whether under edit/agent mode with GitHub Slopilot or simply copied-and-pasted from and/or assisted by the Google Gemini (web)app, alongside adding the AI tool as a co-author (via the Co-authored-by commit trailer).

Retroactive annotation of AI use on older content

There may be cases where I forgot to disclose AI use at the time of publication/submission; those will be retroactively annotated via


Last updated: TBD | Send feedback via

Policy Changelog

This subpage documents changes to the generative AI policy throughout the weeks and months since initial publication, with corresponding Internet Archive Wayback Machine links for citation in your academic works. Note that

