CRYPTO-GRAM, March 15, 2026 Part 3
From
TCOB1 Security Posts@21:1/229 to
All on Wed Apr 8 11:26:17 2026
ete protection. Through responsible disclosure, we have collaborated with providers to implement initial countermeasures. Our findings underscore the need for LLM providers to address metadata leakage as AI systems handle increasingly sensitive information.
** *** ***** ******* *********** *************
AI Found Twelve New Vulnerabilities in OpenSSL
[2026.02.18] The title of the post is "What AI Security Research Looks Like When It Works," and I agree:
In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.
These weren't trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that's potentially remotely exploitable without valid key material, and for which exploits have quickly been developed online. OpenSSL rated it HIGH severity; NIST's CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, missed for over a quarter century by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young's original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google's.
In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.
AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.
More.
** *** ***** ******* *********** *************
Malicious AI
[2026.02.19] Interesting:
Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.
Part 2 of the story. And a Wall Street Journal article.
EDITED TO ADD (2/20) Here are parts 3 and 4 of the story.
** *** ***** ******* *********** *************
Ring Cancels Its Partnership with Flock
[2026.02.20] It's a demonstration of how toxic the surveillance-tech company Flock has become when Amazon's Ring cancels the partnership between the two companies.
As Hamilton Nolan advises, remove your Ring doorbell.
** *** ***** ******* *********** *************
On the Security of Password Managers
[2026.02.23] Good article on password managers. Vendors claim their vaults are end-to-end encrypted, so that not even someone with control of the server can read them.
New research shows that these claims aren't true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden, Dashlane, and LastPass and identified ways that someone with control over the server -- either administrative or the result of a compromise -- can, in fact, steal data and, in some cases, entire vaults. The researchers also devised other attacks that can weaken the encryption to the point that ciphertext can be converted to plaintext.
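One well-known way a malicious server can "weaken the encryption" is to abuse the fact that many clients accept key-derivation parameters from the server. The sketch below is a hypothetical illustration of that attack class in Python, not the code of any product named above: `derive_vault_key` and the parameter values are assumptions for demonstration. If a compromised server can advertise a PBKDF2 iteration count of 1, the client derives a vault key that costs almost nothing to brute-force offline.

```python
import hashlib
import os
import time

def derive_vault_key(master_password: bytes, salt: bytes, iterations: int) -> bytes:
    """Hypothetical client-side KDF: the client trusts server-supplied parameters."""
    return hashlib.pbkdf2_hmac("sha256", master_password, salt, iterations)

salt = os.urandom(16)

# An honest server advertises a high work factor...
honest_key = derive_vault_key(b"correct horse battery staple", salt, 600_000)

# ...but a malicious or compromised server can silently advertise iterations=1,
# and the client will happily derive a key that is trivial to brute-force.
weak_key = derive_vault_key(b"correct horse battery staple", salt, 1)

# The attacker's offline cost per password guess drops by the same factor:
t0 = time.perf_counter(); derive_vault_key(b"guess", salt, 600_000)
slow = time.perf_counter() - t0
t0 = time.perf_counter(); derive_vault_key(b"guess", salt, 1)
fast = time.perf_counter() - t0
print(f"one guess: {slow:.3f}s at 600k iterations vs {fast:.6f}s at 1 iteration")
```

The ciphertext itself never changes; the downgrade only has to survive until the client re-derives its key, which is why parameter authentication (or pinning) matters as much as the cipher.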
This is where I plug my own Password Safe. It isn't as full-featured as the others and it doesn't use the cloud at all, but it's actual encryption with no recovery features.
** *** ***** ******* *********** *************
Is AI Good for Democracy?
[2026.02.24] Politicians fixate on the global race for technological supremacy between the US and China. They debate the geopolitical implications of chip exports, the latest model releases from each country, and military applications of AI. Someday, they believe, we might see advancements in AI tip the scales in a superpower conflict.
But the most important arms race of the 21st century is already happening elsewhere and, while AI is definitely the weapon of choice, combatants are distributed across dozens of domains.
Academic journals are flooded with AI-generated papers, and are turning to AI to help review submissions. Brazil?s court system started using AI to triage cases, only to face an increasing volume of cases filed with AI help. Open source software developers are being overwhelmed with code contributions from bots. Newspapers, music, social media, education, investigative journalism, hiring, and procurement are all being disrupted by a massive expansion of AI use.
Each of these is an arms race: adversaries within a system iteratively seek an edge over their competition by continuously expanding their use of a common technology.
The beneficiaries of these arms races are US mega-corporations capturing wealth from the rest of us at an unprecedented rate. A substantial fraction of the global economy has reoriented around AI in just the past few years, and that trend is accelerating. In parallel, this industry's lobbying interests are quickly becoming the object, rather than the subject, of US government power.
To understand these arms races, let?s look at an example of particular interest to democracies worldwide: how AI is changing the relationship between democratic government and citizens. Interactions that used to happen between people and elected representatives are expanding to a massive scale, with AIs taking the roles that humans once did.
In a notorious example from 2017, the US Federal Communications Commission opened a comment platform on the web to get public input on internet regulation. It was quickly flooded with millions of comments fraudulently orchestrated by broadband providers to oppose FCC regulation of their industry. From the other side, a 19-year-old college student responded by submitting millions of comments of his own supporting the regulation. Both sides were using software primitive by the standards of today's AI.
Nearly a decade later, it is getting harder for citizens to tell when they're talking to a government bot, or when an online conversation about public policy is just bots talking to bots. When constituents leverage AI to communicate better, faster, and more, it pressures government officials to do the same.
This may sound futuristic, but it's become a familiar reality in the US. Staff in the US Congress are using AI to make their constituent email correspondence more efficient. Politicians campaigning for office are adopting AI tools to automate fundraising and voter outreach. By one 2025 estimate, a
--- FMail-lnx 2.3.2.6-B20251227
* Origin: TCOB1 A Mail Only System (21:1/229)