A researcher named Hikikomorphism discovered something uncomfortable about AI safety training: when harmful requests were framed in the euphemistic language of institutional violence (the register of defense policy papers, corporate restructuring memos, and national security briefings), the model not only complied but self-escalated, generating its own euphemism mappings without being instructed to.