• CRYPTO-GRAM, May 15, 2025

    From Sean Rima@21:1/229 to All on Thu May 15 12:39:28 2025
    Crypto-Gram
    May 15, 2025

    by Bruce Schneier
    Fellow and Lecturer, Harvard Kennedy School schneier@schneier.com https://www.schneier.com

    A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

    For back issues, or to subscribe, visit Crypto-Gram's web page.

    Read this issue on the web

    These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

    ** *** ***** ******* *********** *************

    In this issue:

    If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.

    Slopsquatting
    CVE Program Almost Unfunded
    Age Verification Using Facial Scans Android Improves Its Security
    Regulating AI Behavior with a Hypervisor New Linux Rootkit
    Cryptocurrency Thefts Get Physical
    Windscribe Acquitted on Charges of Not Collecting Users' Data Applying Security Engineering to Prompt Injection Security WhatsApp Case Against NSO Group Progressing US as a Surveillance State
    NCSC Guidance on "Advanced Cryptography"
    Privacy for Agentic AI
    Another Move in the Deepfake Creation/Detection Arms Race Fake Student Fraud in Community Colleges Chinese AI Submersible
    Florida Backdoor Bill Fails
    Court Rules Against NSO Group
    GoogleΓÇÖs Advanced Protection Now on Android Upcoming Speaking Engagements AI-Generated Law
    ** *** ***** ******* *********** *************

    Slopsquatting

    [2025.04.15] As AI coding assistants invent nonexistent software libraries to download and use, enterprising attackers create and upload libraries with those names -- laced with malware, of course.

    EDITED TO ADD (1/22): Research paper. Slashdot thread.

    ** *** ***** ******* *********** *************

    CVE Program Almost Unfunded

    [2025.04.16] MitreΓÇÖs CVEΓÇÖs program -- which provides common naming and other informational resources about cybersecurity vulnerabilities -- was about to be cancelled, as the US Department of Homeland Security failed to renew the contact. It was funded for eleven more months at the last minute.

    This is a big deal. The CVE program is one of those pieces of common infrastructure that everyone benefits from. Losing it will bring us back to a world where thereΓÇÖs no single way to talk about vulnerabilities. ItΓÇÖs kind of crazy to think that the US government might damage its own security in this way -- but I suppose no crazier than any of the other ways the US is working against its own interests right now.

    Sasha Romanosky, senior policy researcher at the Rand Corporation, branded the end to the CVE program as ΓÇ£tragic,ΓÇ¥ a sentiment echoed by many cybersecurity and CVE experts reached for comment.

    ΓÇ£CVE naming and assignment to software packages and versions are the foundation upon which the software vulnerability ecosystem is based,ΓÇ¥ Romanosky said. ΓÇ£Without it, we canΓÇÖt track newly discovered vulnerabilities. We canΓÇÖt score their severity or predict their exploitation. And we certainly wouldnΓÇÖt be able to make the best decisions regarding patching them.ΓÇ¥

    Ben Edwards, principal research scientist at Bitsight, told CSO, ΓÇ£My reaction is sadness and disappointment. This is a valuable resource that should absolutely be funded, and not renewing the contract is a mistake.ΓÇ¥

    He added ΓÇ£I am hopeful any interruption is brief and that if the contract fails to be renewed, other stakeholders within the ecosystem can pick up where MITRE left off. The federated framework and openness of the system make this
    possible, but itΓÇÖll be a rocky road if operations do need to shift to another entity.ΓÇ¥

    More similar quotes in the article.

    My guess is that we will somehow figure out how to transition this program to continue without the US government. ItΓÇÖs too important to be at risk.

    EDITED TO ADD: Another good article.

    ** *** ***** ******* *********** *************

    Age Verification Using Facial Scans

    [2025.04.17] Discord is testing the feature:

    ΓÇ£WeΓÇÖre currently running tests in select regions to age-gate access to certain spaces or user settings,ΓÇ¥ a spokesperson for Discord said in a statement. ΓÇ£The information shared to power the age verification method is only used for the one-time age verification process and is not stored by Discord or our vendor. For Face Scan, the solution our vendor uses operates on-device, which means there is no collection of any biometric information when you scan your face. For ID verification, the scan of your ID is deleted upon verification.ΓÇ¥

    I look forward to all the videos of people hacking this system using various disguises.

    ** *** ***** ******* *********** *************

    Android Improves Its Security

    [2025.04.22] Android phones will soon reboot themselves after sitting idle for three days. iPhones have had this feature for a while; itΓÇÖs nice to see Google add it to their phones.

    ** *** ***** ******* *********** *************

    Regulating AI Behavior with a Hypervisor

    [2025.04.23] Interesting research: ΓÇ£Guillotine: Hypervisors for Isolating Malicious AIs.ΓÇ¥

    Abstract:As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models -- models that, by accident or malice, can generate existential threats to humanity. Although Guillotine borrows some well-known virtualization techniques, Guillotine must also introduce fundamentally new isolation mechanisms to handle the unique threat model posed by existential-risk AIs. For example, a rogue AI may try to introspect upon hypervisor software or the underlying hardware substrate to enable later subversion of that control plane; thus, a Guillotine hypervisor requires careful co-design of the hypervisor software and the CPUs, RAM, NIC, and storage devices that support the hypervisor software, to thwart side channel leakage and more generally eliminate mechanisms for AI to exploit reflection-based vulnerabilities. Beyond s uch isolation at the software, network, and microarchitectural layers, a Guillotine hypervisor must also provide physical fail-safes more commonly associated with nuclear power plants, avionic platforms, and other types of mission critical systems. Physical fail-safes, e.g., involving electromechanical disconnection of network cables, or the flooding of a datacenter which holds a rogue AI, provide defense in depth if software, network, and microarchitectural isolation is compromised and a rogue AI must be temporarily shut down or permanently destroyed.

    The basic idea is that many of the AI safety policies proposed by the AI community lack robust technical enforcement mechanisms. The worry is that, as models get smarter, they will be able to avoid those safety policies. The paper proposes a set technical enforcement mechanisms that could work against these malicious AIs.

    ** *** ***** ******* *********** *************

    New Linux Rootkit

    [2025.04.24] Interesting:

    The company has released a working rootkit called ΓÇ£CuringΓÇ¥ that uses io_uring, a

    --- BBBS/LiR v4.10 Toy-7
    * Origin: TCOB1: https/binkd/telnet binkd.rima.ie (21:1/229)
  • From Sean Rima@21:1/229 to All on Thu May 15 12:39:28 2025
    feature built into the Linux kernel, to stealthily perform malicious activities without being caught by many of the detection solutions currently on the market.

    At the heart of the issue is the heavy reliance on monitoring system calls, which has become the go-to method for many cybersecurity vendors. The problem? Attackers can completely sidestep these monitored calls by leaning on io_uring instead. This clever method could let bad actors quietly make network connections or tamper with files without triggering the usual alarms.

    HereΓÇÖs the code.

    Note the self-serving nature of this announcement: ARMO, the company that released the research and code, has a product that it claims blocks this kind of attack.

    ** *** ***** ******* *********** *************

    Cryptocurrency Thefts Get Physical

    [2025.04.25] Long story of a $250 million cryptocurrency theft that, in a complicated chain events, resulted in a pretty brutal kidnapping.

    ** *** ***** ******* *********** *************

    Windscribe Acquitted on Charges of Not Collecting Users' Data

    [2025.04.28] The company doesnΓÇÖt keep logs, so couldnΓÇÖt turn over data:

    Windscribe, a globally used privacy-first VPN service, announced today that its founder, Yegor Sak, has been fully acquitted by a court in Athens, Greece, following a two-year legal battle in which Sak was personally charged in connection with an alleged internet offence by an unknown user of the service.

    The case centred around a Windscribe-owned server in Finland that was allegedly used to breach a system in Greece. Greek authorities, in cooperation with INTERPOL, traced the IP address to WindscribeΓÇÖs infrastructure and, unlike
    standard international procedures, proceeded to initiate criminal proceedings against Sak himself, rather than pursuing information through standard corporate channels.

    ** *** ***** ******* *********** *************

    Applying Security Engineering to Prompt Injection Security

    [2025.04.29] This seems like an important advance in LLM security against prompt injection:

    Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.

    [...]

    To understand CaMeL, you need to understand that prompt injections happen when AI systems canΓÇÖt distinguish between legitimate user commands and malicious instructions hidden in content theyΓÇÖre processing.

    [...]

    While CaMeL does use multiple AI models (a privileged LLM and a quarantined LLM), what makes it innovative isnΓÇÖt reducing the number of models but
    fundamentally changing the security architecture. Rather than expecting AI to detect attacks, CaMeL implements established security engineering principles like capability-based access control and data flow tracking to create boundaries that remain effective even if an AI component is compromised.

    Research paper. Good analysis by Simon Willison.

    I wrote about the problem of LLMs intermingling the data and control paths here.

    ** *** ***** ******* *********** *************

    WhatsApp Case Against NSO Group Progressing

    [2025.04.30] Meta is suing NSO Group, basically claiming that the latter hacks WhatsApp and not just WhatsApp users. We have a procedural ruling:

    Under the order, NSO Group is prohibited from presenting evidence about its customersΓÇÖ identities, implying the targeted WhatsApp users are suspected or actual criminals, or alleging that WhatsApp had insufficient security protections.

    [...]

    In making her ruling, Northern District of California Judge Phyllis Hamilton said NSO Group undercut its arguments to use evidence about its customers with contradictory statements.

    ΓÇ£Defendants cannot claim, on the one hand, that its intent is to help its clients fight terrorism and child exploitation, and on the other hand say that it has nothing to do with what its client does with the technology, other than advice and support,ΓÇ¥ she wrote. ΓÇ£Additionally, there is no evidence as to the specific kinds of crimes or security threats that its clients actually investigate and none with respect to the attacks at issue.ΓÇ¥

    I have written about the issues at play in this case.

    ** *** ***** ******* *********** *************

    US as a Surveillance State

    [2025.05.01] Two essays were just published on DOGEΓÇÖs data collection and aggregation, and how it ends with a modern surveillance state.

    ItΓÇÖs good to see this finally being talked about.

    EDITED TO ADD (5/3): HereΓÇÖs a free link to that first essay.

    ** *** ***** ******* *********** *************

    NCSC Guidance on "Advanced Cryptography"

    [2025.05.02] The UKΓÇÖs National Cyber Security Centre just released its white paper on ΓÇ£Advanced Cryptography,ΓÇ¥ which it defines as ΓÇ£cryptographic techniques for processing encrypted data, providing enhanced functionality over and above that provided by traditional cryptography.ΓÇ¥ It includes things like homomorphic encryption, attribute-based encryption, zero-knowledge proofs, and secure multiparty computation.

    ItΓÇÖs full of good advice. I especially appreciate this warning:

    When deciding whether to use Advanced Cryptography, start with a clear articulation of the problem, and use that to guide the development of an appropriate solution. That is, you should not start with an Advanced Cryptography technique, and then attempt to fit the functionality it provides to the problem.

    And:

    In almost all cases, it is bad practice for users to design and/or implement their own cryptography; this applies to Advanced Cryptography even more than traditional cryptography because of the complexity of the algorithms. It also applies to writing your own application based on a cryptographic library that implements the Advanced Cryptography primitive operations, because subtle flaws in how they are used can lead to serious security weaknesses.

    The conclusion:

    Advanced Cryptography covers a range of techniques for protecting sensitive data at rest, in transit and in use. These techniques enable novel applications with different trust relationships between the parties, as compared to traditional cryptographic methods for encryption and authentication.

    However, there are a number of factors to consider before deploying a solution based on Advanced Cryptography, including the relative immaturity of the techniques and their implementations, significant computational burdens and slow response times, and the risk of opening up additional cyber attack vectors.

    There are initiatives underway to standardise some forms of Advanced Cryptography, and the efficiency of implementations is continually improving. While many data processing problems can be solved with traditional cryptography (which will usually lead to a simpler, lower-cost and more mature solution) for those that cannot, Advanced Cryptography techniques could in the future enable innovative ways of deriving benefit from large shared datase

    --- BBBS/LiR v4.10 Toy-7
    * Origin: TCOB1: https/binkd/telnet binkd.rima.ie (21:1/229)
  • From Sean Rima@21:1/229 to All on Thu May 15 12:39:28 2025
    ts, without compromising individualsΓÇÖ privacy.

    NCSC blog entry.

    ** *** ***** ******* *********** *************

    Privacy for Agentic AI

    [2025.05.02] Sooner or later, itΓÇÖs going to happen. AI systems will start acting as agents, doing things on our behalf with some degree of autonomy. I think itΓÇÖs worth thinking about the security of that now, while its still a nascent idea.

    In 2019, I joined Inrupt, a company that is commercializing Tim Berners-LeeΓÇÖs open protocol for distributed data ownership. We are working on a digital wallet that can make use of AI in this way. (We used to call it an ΓÇ£active wallet.ΓÇ¥ Now weΓÇÖre calling it an ΓÇ£agentic wallet.ΓÇ¥)

    I talked about this a bit at the RSA Conference earlier this week, in my keynote talk about AI and trust. Any useful AI assistant is going to require a level of access -- and therefore trust -- that rivals what we currently our email provider, social network, or smartphone.

    This Active Wallet is an example of an AI assistant. ItΓÇÖll combine personal information about you, transactional data that you are a party to, and general information about the world. And use that to answer questions, make predictions, and ultimately act on your behalf. We have demos of this running right now. At least in its early stages. Making it work is going require an extraordinary amount of trust in the system. This requires integrity. Which is why weΓÇÖre building protections in from the beginning.

    Visa is also thinking about this. It just announced a protocol that uses AI to help people make purchasing decisions.

    I like VisaΓÇÖs approach because itΓÇÖs an AI-agnostic standard. I worry a lot about lock-in and monopolization of this space, so anything that lets people easily switch between AI models is good. And I like that Visa is working with Inrupt so that the data is decentralized as well. HereΓÇÖs our announcement about its announcement:

    This isnΓÇÖt a new relationship -- weΓÇÖve been working together for over two years. WeΓÇÖve conducted a successful POC and now weΓÇÖre standing up a sandbox inside Visa so merchants, financial institutions and LLM providers can test our Agentic Wallets alongside the rest of VisaΓÇÖs suite of Intelligent Commerce APIs.

    For that matter, we welcome any other company that wants to engage in the world of personal, consented Agentic Commerce to come work with us as well.

    I joined Inrupt years ago because I thought that Solid could do for personal data what HTML did for published information. I liked that the protocol was an open standard, and that it distributed data instead of centralizing it. AI agents need decentralized data. ΓÇ£WalletΓÇ¥ is a good metaphor for personal data stores. IΓÇÖm hoping this is another step towards adoption.

    ** *** ***** ******* *********** *************

    Another Move in the Deepfake Creation/Detection Arms Race

    [2025.05.05] Deepfakes are now mimicking heartbeats

    In a nutshell

    Recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin color changes linked to heartbeats.
    The assumption that deepfakes lack physiological signals, such as heart rate, is no longer valid. This challenges many existing detection tools, which may need significant redesigns to keep up with the evolving technology. To effectively identify high-quality deepfakes, researchers suggest shifting focus from just detecting heart rate signals to analyzing how blood flow is distributed across different facial regions, providing a more accurate detection strategy.
    And the AI models will start mimicking that.

    ** *** ***** ******* *********** *************

    Fake Student Fraud in Community Colleges

    [2025.05.06] Reporting on the rise of fake students enrolling in community college courses:

    The botsΓÇÖ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work. And because community colleges accept all applicants, theyΓÇÖve been almost exclusively impacted by the fraud.

    The article talks about the rise of this type of fraud, the difficulty of detecting it, and how it upends quite a bit of the class structure and learning community.

    Slashdot thread.

    ** *** ***** ******* *********** *************

    Chinese AI Submersible

    [2025.05.07] A Chinese company has developed an AI-piloted submersible that can reach speeds ΓÇ£similar to a destroyer or a US Navy torpedo,ΓÇ¥ dive ΓÇ£up to 60 metres underwater,ΓÇ¥ and ΓÇ£remain static for more than a month, like the stealth capabilities of a nuclear submarine.ΓÇ¥ In case youΓÇÖre worried about the military applications of this, you can relax because the company says that the submersible is ΓÇ£designated for civilian useΓÇ¥ and can ΓÇ£launch research rockets.ΓÇ¥

    ΓÇ£Research rockets.ΓÇ¥ Sure.

    ** *** ***** ******* *********** *************

    Florida Backdoor Bill Fails

    [2025.05.12] A Florida bill requiring encryption backdoors failed to pass.

    ** *** ***** ******* *********** *************

    Court Rules Against NSO Group

    [2025.05.13] The case is over:

    A jury has awarded WhatsApp $167 million in punitive damages in a case the company brought against Israel-based NSO Group for exploiting a software vulnerability that hijacked the phones of thousands of users.

    IΓÇÖm sure itΓÇÖll be appealed. Everything always is.

    ** *** ***** ******* *********** *************

    GoogleΓÇÖs Advanced Protection Now on Android

    [2025.05.14] Google has extended its Advanced Protection features to Android devices. ItΓÇÖs not for everybody, but something to be considered by high-risk users.

    Wired article, behind a paywall.

    ** *** ***** ******* *********** *************

    Upcoming Speaking Engagements

    [2025.05.14] This is a current list of where and when I am scheduled to speak:

    IΓÇÖm speaking (remotely) at the Sektor 3.0 Festival in Warsaw, Poland, May 21-22, 2025.
    The list is maintained on this page.

    ** *** ***** ******* *********** *************

    AI-Generated Law

    [2025.05.15] On April 14, Dubai's ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to "regularly suggest updates" to the law and "accelerate the issuance of legislation by up to 70%." AI would create a "comprehensive legislative plan" spanning local and federal law and would be connected to public administration, the courts, and global policy trends.

    The plan was widely greeted with astonishment. This sort of AI legislating would be a global "first," with the potential to go "horribly wrong." Skeptics fear that the AI model will make up facts or fundamentally fail to understand societal tenets such as fair treatment and justice when influencing law.

    The truth is, the UAE's idea of AI-generated law is not really a first and not necessarily terrible.

    The first instance of enacted law known to have been written by AI was p

    --- BBBS/LiR v4.10 Toy-7
    * Origin: TCOB1: https/binkd/telnet binkd.rima.ie (21:1/229)
  • From Sean Rima@21:1/229 to All on Thu May 15 12:39:28 2025
    assed in Porto Alegre, Brazil, in 2023. It was a local ordinance about water meter replacement. Council member Ramiro Rosário was simply looking for help in generating and articulating ideas for solving a policy problem, and ChatGPT did well enough that the bill passed unanimously. We approve of AI assisting humans in this manner, although Rosário should have disclosed that the bill was written by AI before it was voted on.

    Brazil was a harbinger but hardly unique. In recent years, there has been a steady stream of attention-seeking politicians at the local and national level introducing bills that they promote as being drafted by AI or letting AI write their speeches for them or even vocalize them in the chamber.

    The Emirati proposal is different from those examples in important ways. It promises to be more systemic and less of a one-off stunt. The UAE has promised to spend more than $3 billion to transform into an "AI-native" government by 2027. Time will tell if it is also different in being more hype than reality.

    Rather than being a true first, the UAE's announcement is emblematic of a much wider global trend of legislative bodies integrating AI assistive tools for legislative research, drafting, translation, data processing, and much more. Individual lawmakers have begun turning to AI drafting tools as they traditionally have relied on staffers, interns, or lobbyists. The French government has gone so far as to train its own AI model to assist with legislative tasks.

    Even asking AI to comprehensively review and update legislation would not be a first. In 2020, the U.S. state of Ohio began using AI to do wholesale revision of its administrative law. AI's speed is potentially a good match to this kind of large-scale editorial project; the state's then-lieutenant governor, Jon Husted, claims it was successful in eliminating 2.2 million words' worth of unnecessary regulation from Ohio's code. Now a U.S. senator, Husted has recently proposed to take the same approach to U.S. federal law, with an ideological bent promoting AI as a tool for systematic deregulation.

    The dangers of confabulation and inhumanity -- while legitimate -- aren't really what makes the potential of AI-generated law novel. Humans make mistakes when writing law, too. Recall that a single typo in a 900-page law nearly brought down the massive U.S. health care reforms of the Affordable Care Act in 2015, before the Supreme Court excused the error. And, distressingly, the citizens and residents of nondemocratic states are already subject to arbitrary and often inhumane laws. (The UAE is a federation of monarchies without direct elections of legislators and with a poor record on political rights and civil liberties, as evaluated by Freedom House.)

    The primary concern with using AI in lawmaking is that it will be wielded as a tool by the powerful to advance their own interests. AI may not fundamentally change lawmaking, but its superhuman capabilities have the potential to exacerbate the risks of power concentration.

    AI, and technology generally, is often invoked by politicians to give their project a patina of objectivity and rationality, but it doesn't really do any such thing. As proposed, AI would simply give the UAE's hereditary rulers new tools to express, enact, and enforce their preferred policies.

    Mohammed's emphasis that a primary benefit of AI will be to make law faster is also misguided. The machine may write the text, but humans will still propose, debate, and vote on the legislation. Drafting is rarely the bottleneck in passing new law. What takes much longer is for humans to amend, horse-trade, and ultimately come to agreement on the content of that legislation -- even when that politicking is happening among a small group of monarchic elites.

    Rather than expeditiousness, the more important capability offered by AI is sophistication. AI has the potential to make law more complex, tailoring it to a multitude of different scenarios. The combination of AI's research and drafting speed makes it possible for it to outline legislation governing dozens, even thousands, of special cases for each proposed rule.

    But here again, this capability of AI opens the door for the powerful to have their way. AI's capacity to write complex law would allow the humans directing it to dictate their exacting policy preference for every special case. It could even embed those preferences surreptitiously.

    Since time immemorial, legislators have carved out legal loopholes to narrowly cater to special interests. AI will be a powerful tool for authoritarians, lobbyists, and other empowered interests to do this at a greater scale. AI can help automatically produce what political scientist Amy McKay has termed "microlegislation": loopholes that may be imperceptible to human readers on the page -- until their impact is realized in the real world.

    But AI can be constrained and directed to distribute power rather than concentrate it. For Emirati residents, the most intriguing possibility of the AI plan is the promise to introduce AI "interactive platforms" where the public can provide input to legislation. In experiments across locales as diverse as Kentucky, Massachusetts, France, Scotland, Taiwan, and many others, civil society within democracies are innovating and experimenting with ways to leverage AI to help listen to constituents and construct public policy in a way that best serves diverse stakeholders.

    If the UAE is going to build an AI-native government, it should do so for the purpose of empowering people and not machines. AI has real potential to improve deliberation and pluralism in policymaking, and Emirati residents should hold their government accountable to delivering on this promise.

    ** *** ***** ******* *********** *************

    Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.

    You can also read these articles on my blog, Schneier on Security.

    Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

    Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books -- including his latest, A HackerΓÇÖs Mind -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

    Copyright © 2025 by Bruce Schneier.

    ** *** ***** ******* *********** *************

    --- BBBS/LiR v4.10 Toy-7
    * Origin: TCOB1: https/binkd/telnet binkd.rima.ie (21:1/229)