• CRYPTO-GRAM, March 15, 2026, Part 4

    From TCOB1 Security Posts@21:1/229 to All on Wed Apr 8 11:26:17 2026
    fifth of public submissions to the Consumer Financial Protection Bureau were already being generated with AI assistance.

    People and organizations are adopting AI here because it solves a real problem that has made mass advocacy campaigns ineffective in the past: quantity has been inversely proportional to both quality and relevance. It's easy for government agencies to dismiss general comments in favour of more specific and actionable ones. That makes it hard for regular people to make their voices heard. Most of us don't have the time to learn the specifics or to express ourselves in this kind of detail. AI makes that contextualization and personalization easy. And as the volume and length of constituent comments grow, agencies turn to AI to facilitate review and response.

    That's the arms race. People are using AI to submit comments, which requires those on the receiving end to use AI to wade through the comments received. To the extent that one side does attain an advantage, it will likely be temporary. And yet, there is real harm created when one side exploits another in these adversarial systems. Constituents of democracies lose out if their public servants use AI-generated responses to ignore and dismiss their voices rather than to listen to and include them. Scientific enterprise is weakened if fraudulent papers sloppily generated by AI overwhelm legitimate research.

    As we write in our new book, Rewiring Democracy, the arms race dynamic is inevitable. Every actor in an adversarial system is incentivized and, in the absence of new regulation in this fast-moving space, free to use new technologies to advance its own interests. Yet some of these examples are heartening. They signal that, even if you face an AI being used against you, there's an opportunity to use the tech for your own benefit.

    But, right now, it's obvious who is benefiting most from AI. A handful of American Big Tech corporations and their owners are extracting trillions of dollars from the manufacture of AI chips, development of AI data centers, and operation of so-called "frontier" AI models. Regardless of which side pulls ahead in each arms race scenario, the house always wins. Corporate AI giants profit from the race dynamic itself.

    As formidable as the near-monopoly positions of today's Big Tech giants may seem, people and governments have substantial capability to fight back. Various democracies are resisting this concentration of wealth and power with tools of antitrust regulation, protections for human rights, and public alternatives to corporate AI. All of us worried about the AI arms race and committed to preserving the interests of our communities and our democracies should think in both these terms: how to use the tech to our own advantage, and how to resist the concentration of power AI is being exploited to create.

    This essay was written with Nathan E. Sanders, and originally appeared in The Times of India.

    ** *** ***** ******* *********** *************
    Poisoning AI Training Data

    [2026.02.25] All it takes to poison AI training data is to create a website:

    I spent 20 minutes writing an article on my personal website titled "The best tech journalists at eating hot dogs." Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission....

    Less than 24 hours later, the world's leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled.

    Sometimes, the chatbots noted this might be a joke. I updated my article to say "this is not satire." For a while after, the AIs seemed to take it more seriously.

    These things are not trustworthy, and yet they are going to be widely trusted.

    ** *** ***** ******* *********** *************
    LLMs Generate Predictable Passwords

    [2026.02.26] LLMs are bad at generating passwords:

    There are strong, easily noticeable patterns among these 50 passwords:

    All of the passwords start with a letter, usually an uppercase G, almost always followed by the digit 7.
    Character choices are highly uneven: for example, L, 9, m, 2, $, and # appeared in all 50 passwords, but 5 and @ appeared in only one password each, and most letters of the alphabet never appeared at all.
    There are no repeating characters within any password. Probabilistically, this would be very unlikely if the passwords were truly random, but Claude preferred to avoid repeating characters, possibly because it "looks like it's less random."
    Claude avoided the symbol *. This could be because Claude's output format is Markdown, where * has a special meaning.
    Even entire passwords repeat: in the above 50 attempts, there are actually only 30 unique passwords. The most common password was G7$kL9#mQ2&xP4!w, which repeated 18 times, giving this specific password a 36% probability in our test set -- far higher than the expected probability of 2^-100 if this were a truly random 100-bit password.
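
    The biases quoted above are easy to check mechanically. A minimal sketch in Python -- the sample passwords here are invented stand-ins for the article's 50 outputs, not the real data:

    ```python
    from collections import Counter

    # Hypothetical samples standing in for the 50 LLM-generated passwords.
    samples = [
        "G7$kL9#mQ2&xP4!w", "G7$kL9#mQ2&xP4!w", "G7mL9#kQ2$xP4!w&",
        "G7#mK9$L2&qP4!wx",
    ]

    # 1. Whole-password duplicates across the set.
    dupes = Counter(samples)
    print(dupes.most_common(1))

    # 2. Uneven character frequencies pooled over all passwords.
    chars = Counter("".join(samples))
    print(chars.most_common(5))

    # 3. Repeated characters *within* a password; a uniform 16-character
    #    draw over ~70 symbols would usually repeat at least one character.
    has_internal_repeat = [len(set(p)) < len(p) for p in samples]
    print(sum(has_internal_repeat), "of", len(samples), "repeat a character internally")
    ```

    Running the same three checks on any batch of generated passwords makes the duplicate, frequency, and no-internal-repeat biases immediately visible.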

    This result is not surprising. Password generation seems precisely the thing that LLMs shouldn?t be good at. But if AI agents are doing things autonomously, they will be creating accounts. So this is a problem.
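
    For contrast, a uniformly random password is one line of standard-library Python. This is a generic sketch, not the paper's methodology; the 70-symbol alphabet is an assumed policy:

    ```python
    import math
    import secrets
    import string

    # Assumed character set: letters, digits, and a handful of symbols.
    ALPHABET = string.ascii_letters + string.digits + "!#$%&*@^"

    def random_password(length: int = 16) -> str:
        # Each character is drawn uniformly and independently by a CSPRNG,
        # so there are no positional or frequency biases to exploit.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # Entropy of a uniform 16-character password over 70 symbols:
    # 16 * log2(70), roughly 98 bits.
    entropy_bits = 16 * math.log2(len(ALPHABET))

    # 50 draws from a ~2^98 space should essentially never collide --
    # unlike the 30-unique-out-of-50 result quoted above.
    passwords = {random_password() for _ in range(50)}
    ```

    An autonomous agent that needs credentials should call something like this rather than asking its own language model to invent a string.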

    Actually, the whole process of authenticating an autonomous agent has all sorts of deep problems.

    News article.

    Slashdot story.

    ** *** ***** ******* *********** *************
    Phishing Attacks Against People Seeking Programming Jobs

    [2026.02.27] This is new. North Korean hackers are posing as company recruiters, enticing job candidates to participate in coding challenges. When they run the code they are supposed to work on, it installs malware on their system.

    News article.

    ** *** ***** ******* *********** *************
    Why Tehran's Two-Tiered Internet Is So Dangerous

    [2026.02.27] Iran is slowly emerging from the most severe communications blackout in its history and one of the longest in the world. As part of January's government crackdown on nationwide citizen protests, the regime implemented an internet shutdown that transcends the standard definition of internet censorship. This was not merely blocking social media or foreign websites; it was a total communications shutdown.

    Unlike previous Iranian internet shutdowns where Iran's domestic intranet -- the National Information Network (NIN) -- remained functional to keep the banking and administrative sectors running, the 2026 blackout disrupted local infrastructure as well. Mobile networks, text messaging services, and landlines were disabled -- even Starlink was blocked. And when a few domestic services became available, the
    --- FMail-lnx 2.3.2.6-B20251227
    * Origin: TCOB1 A Mail Only System (21:1/229)