Deliberate Internet Shutdowns
Dec. 17th, 2025 12:02 pm
For two days in September, Afghanistan had no internet. No satellite failed; no cable was cut. This was a deliberate outage, mandated by the Taliban government. It followed a more localized shutdown two weeks prior, reportedly instituted “to prevent immoral activities.” No additional explanation was given. The timing couldn’t have been worse: communities still reeling from a major earthquake lost emergency communications, flights were grounded, and banking was interrupted.
Afghanistan’s blackout is part of a wider pattern. Just since the end of September, there were also major nationwide internet shutdowns in Tanzania and Cameroon, and significant regional shutdowns in Pakistan and Nigeria. In all cases but one, authorities offered no official justification or acknowledgment, leaving millions unable to access information, contact loved ones, or express themselves through moments of crisis, elections, and protests.
The frequency of deliberate internet shutdowns has skyrocketed since the first notable example in Egypt in 2011. Together with our colleagues at the digital rights organization Access Now and the #KeepItOn coalition, we’ve tracked 296 deliberate internet shutdowns in 54 countries in 2024, and at least 244 more in 2025 so far.
This is more than an inconvenience. The internet has become an essential piece of infrastructure, affecting how we live, work, and get our information. It’s also a major enabler of human rights, and turning off the internet can worsen or conceal a spectrum of abuses. These shutdowns silence societies, and they’re getting more and more common.
Shutdowns can be local or national, partial or total. In total blackouts, like Afghanistan or Tanzania, nothing works. But shutdowns are often targeted more granularly. Cellphone internet could be blocked, but not broadband. Specific news sites, social media platforms, and messaging systems could be blocked, leaving overall network access unaffected—as when Brazil shut off X (formerly Twitter) in 2024. Sometimes bandwidth is just throttled, making everything slower and less reliable.
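These variants leave different fingerprints at the network layer, which is how measurement groups tell a deliberate block from an ordinary outage. Here is a rough sketch of the idea; the categories and thresholds are my own simplifications, and real measurement projects like OONI are far more rigorous:

```python
# Illustrative reachability probe: a crude classifier for the shutdown
# types described above. Thresholds and labels are simplifications.
import socket
import time
import urllib.request

def probe(host: str, url: str, timeout: float = 10.0) -> str:
    try:
        socket.gethostbyname(host)                # DNS layer
    except socket.gaierror:
        return "DNS failure: possible DNS block or total outage"
    try:
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read(100_000)             # fetch up to 100 kB
        elapsed = time.monotonic() - start
    except OSError:
        return "connection failed: possible IP block or outage"
    if body and len(body) / max(elapsed, 1e-9) < 10_000:  # under ~10 kB/s
        return "reachable but very slow: possible throttling"
    return "reachable"

# Comparing a news site against a control site, or mobile against
# fixed-line networks, is how targeted blocking shows up in practice.
print(probe("example.com", "https://example.com/"))
```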
Sometimes, internet shutdowns are used in political or military operations. In recent years, Russia and Ukraine have shut off parts of each other’s internet, and Israel has repeatedly shut off Palestinians’ internet in Gaza. Shutdowns of this type happened 25 times in 2024, affecting people in 13 countries.
Reasons for the shutdowns are as varied as the countries that perpetrate them. General information control is just one. Shutdowns often come in response to political unrest, as governments try to prevent people from organizing and getting information; Panama imposed a regional shutdown this summer in response to protests. They also come during elections, when opposition parties use the internet to mobilize supporters and communicate strategy. Belarusian president Alyaksandr Lukashenko, who has ruled since 1994, reportedly disabled the internet during elections earlier this year, following a similar move in 2020. But shutdowns can also be more banal. Access Now documented governments disabling parts of the internet during student exam periods at least 16 times in 2024, in countries including Algeria, Iraq, Jordan, Kenya, and India.
Iran’s shutdowns in 2022 and June of this year are good examples of a highly sophisticated effort, with layers of shutdowns that end up forcing people off the global internet and onto Iran’s surveilled, censored national intranet. India, meanwhile, has been the world shutdown leader for many years, with 855 distinct incidents. Myanmar is second with 149, followed by Pakistan and then Iran. All of this information is available on Access Now’s digital dashboard, where you can see breakdowns by region, country, type, geographic extent, and time.
There was a slight decline in shutdowns during the early years of the pandemic, but they have increased sharply since then. The reasons are varied, but a lot can be attributed to the rise in protest movements related to economic hardship and corruption, and general democratic backsliding and instability. In many countries today, shutdowns are a knee-jerk response to any form of unrest or protest, no matter how small.
A country’s ability to shut down the internet depends a lot on its infrastructure. In the US, for example, shutdowns would be hard to enforce. As we saw when discussions about a potential TikTok ban ramped up two years ago, the complex and multifaceted nature of our internet makes a shutdown very difficult to achieve. However, as we’ve seen with total nationwide shutdowns around the world, the ripple effects in all aspects of life are immense. (Remember the effects of just a small outage—CrowdStrike in 2024—which crippled 8.5 million computers and canceled 2,200 flights in the US alone?)
The more centralized the internet infrastructure, the easier it is to implement a shutdown. If a country has just one cellphone provider, or only two fiber optic cables connecting the nation to the rest of the world, shutting them down is easy.
Shutdowns are not only more common, but they’ve also become more harmful. Unlike in years past, when the internet was a nice option to have, or perhaps when internet penetration rates were significantly lower across the Global South, today the internet is an essential piece of societal infrastructure for the majority of the world’s population.
Access Now has long maintained that denying people access to the internet is a human rights violation, and has collected harrowing stories from places like Tigray in Ethiopia, Uganda, Annobon in Equatorial Guinea, and Iran. The internet is an essential tool for a spectrum of rights, including freedom of expression and assembly. Shutdowns make documenting ongoing human rights abuses and atrocities more difficult or impossible. They also affect people’s daily lives, businesses, healthcare, education, finances, security, and safety, depending on the context. Shutdowns in conflict zones are particularly damaging, as they impact the ability of humanitarian actors to deliver aid and make it harder for people to find safe evacuation routes and civilian corridors.
Defenses on the ground are slim. Depending on the country and the type of shutdown, there can be workarounds. Everything from VPNs to mesh networks to Starlink terminals to foreign SIM cards near borders has been used with varying degrees of success. The tech-savvy sometimes have other options. But for almost everyone else, no internet means no internet—and all the effects of that loss.
The international community plays an important role in shaping how internet shutdowns are understood and addressed. World bodies have recognized that reliable internet access is an essential service, and could put more pressure on governments to keep the internet on in conflict-affected areas. But while international condemnation has worked in some cases (Mauritius and South Sudan are two recent examples), countries seem to be learning from each other, resulting in both more shutdowns and new countries perpetrating them.
There’s still time to reverse the trend, if that’s what we want to do. Ultimately, the question comes down to whether or not governments will enshrine both a right to access information and freedom of expression in law and in practice. Keeping the internet on is a norm, but the trajectory from a single internet shutdown in 2011 to 2,000 blackouts 15 years later demonstrates how embedded the practice has become. The implications of that shift are still unfolding, but they reach far beyond the moment the screen goes dark.
This essay was written with Zach Rosson, and originally appeared in Gizmodo.
it took me a while to understand, hayao
Dec. 16th, 2025 09:14 am
Meta, the company controlled by babyfash Trump fan Mark Zuckerberg, the board of which thought it was reasonable to support Fascist politicians in the hope of avoiding regulation, and whose Facebook service has or had a “17-strike” policy for known sex trafficking accounts and not only doesn’t remove fraud posts but charges known fraud operations higher rates for their ads, puked this vile mixture of plagiarism, artist’s blood, and AI sludge posing as photography onto BART station walls in San Francisco:
[image: Meta’s AI-generated ad on a BART station wall]
And of course it’s shit. Of course it’s shit. Holy gods, it is such hot garbage, and I’m not even talking about the implied higher situational awareness of someone wearing an AI PHONE ON THEIR FACE over people looking down at their regular phones –
tho’ that’s a pretty fuckin’ hot take for them to have right there too, I have to say.
I’m talking about the raw clownery of this image. Holy hell. Let’s zoom in on one of the insults to imagery:
[image: close-up of one garbled detail in the render]
And I’m not even mentioning the ghost in the room, by which I mean the four ghosts in this one particular rendered room:
[image: close-up of four ghostly, half-rendered figures]
And I have to ask:
HOW CAN ALL THIS STILL BE THIS SHITTY AND PASS MUSTER FOR THEM? HOW?
Christ it’s so insultingly bad. It’s infuriatingly bad. As photography substitute, as AI generated Not Art. It’s… it’s like it’s Anti-art, an opposite of art that mocks the real, that imitates while degrading both itself and its opposite.
Anybody can make bad art. I’ve made plenty. Also some good art.
But it takes real work to make anti-art.
And that’s what makes me want to fucking scream.
We all know how monstrously wealthy Fuckerberg is. How much money he and his company have. How he could jerk off with thousand dollar bills, wipe himself clean, and burn the dirties the rest of his wretched life and not even notice the difference.
So when you see that they’d rather put out this slapdash, revolting, uncaring – no, sneering – insult of a render than pay a photographer and a few models a few bucks for an afternoon photo shoot, what’s that say?
It’s not the money. He has all the money. All of it. Well, him, and the other TESCREAL fascists.
I think… I think I have to think… that it’s a matter of principle for them. A sick principle, but a principle nonetheless. It has to be, because otherwise it makes no. goddamn. sense.
I literally have to conclude that they hate art, and even more, hate artists. They have to, to consider this better. It must be principle for them to not care about artistic creative work, to not pay artistic workers. It has to be principle to hold all that in contempt, to say, “see? We just steal everything you’ve ever done, throw it into our churn machine, and then rub out our own version in half an hour to show you’re not any better than us. And you can’t do shit about it.”
They’ve made it clear that they’d rather spew this kind of rancid splatter, this metaphorical scrawl of shit, urine, blood, and theft across the walls of a city than break that principle.
And they’ll enjoy it.
I used to think, once upon a time, that Syndrome from The Incredibles was a little too on the nose, a little too pointed, maybe – dare I say it – a little too cartoonish for even a cartoon.
I’m starting to think maybe he wasn’t on the nose enough.
But that’s flippant, and maybe a little too easy.
What I really feel is that… I’m finally starting to understand – really understand, at a gut level – what Hayao Miyazaki meant when he called AI “art” an insult to life itself.
Because, well, almost anything can be art. Art is an observation and an intent, as much as anything else, and handing that mantle to something which has no awareness, no observation, no actual knowledge of meaning, no ability to opine, no personhood at all, a chum machine with less actual awareness than a housefly maggot…
…how could that be anything less than an insult to life, itself?
It took me a while to understand, Hayao. But I think I’ve finally got there.
Posted via Solarbird{y|z|yz}, Collected.
Chinese Surveillance and AI
Dec. 16th, 2025 12:02 pm
New report: “The Party’s AI: How China’s New AI Systems are Reshaping Human Rights.” From a summary article:
China is already the world’s largest exporter of AI-powered surveillance technology; new surveillance technologies and platforms developed in China are also not likely to simply stay there. By exposing the full scope of China’s AI-driven control apparatus, this report presents clear, evidence-based insights for policymakers, civil society, the media and technology companies seeking to counter the rise of AI-enabled repression and human rights violations, and China’s growing efforts to project that repression beyond its borders.
The report focuses on four areas where the CCP has expanded its use of advanced AI systems most rapidly between 2023 and 2025: multimodal censorship of politically sensitive images; AI’s integration into the criminal justice pipeline; the industrialisation of online information control; and the use of AI-enabled platforms by Chinese companies operating abroad. Examined together, those cases show how new AI capabilities are being embedded across domains that strengthen the CCP’s ability to shape information, behaviour and economic outcomes at home and overseas.
Because China’s AI ecosystem is evolving rapidly and unevenly across sectors, we have focused on domains where significant changes took place between 2023 and 2025, where new evidence became available, or where human rights risks accelerated. Those areas do not represent the full range of AI applications in China but are the most revealing of how the CCP is integrating AI technologies into its political control apparatus.
News article.
Against the Federal Moratorium on State-Level Regulation of AI
Dec. 15th, 2025 12:02 pm
Cast your mind back to May of this year: Congress was in the throes of debate over the massive budget bill. Amidst the many seismic provisions, Senator Ted Cruz dropped a ticking time bomb of tech policy: a ten-year moratorium on the ability of states to regulate artificial intelligence. To many, this was catastrophic. The few massive AI companies seem to be swallowing our economy whole: their energy demands are overriding household needs, their data demands are overriding creators’ copyright, and their products are triggering mass unemployment as well as new types of clinical psychoses. In a moment where Congress is seemingly unable to act to pass any meaningful consumer protections or market regulations, why would we hamstring the one entity evidently capable of doing so—the states? States that have already enacted consumer protections and other AI regulations, like California, and those actively debating them, like Massachusetts, were alarmed. Seventeen Republican governors wrote a letter decrying the idea, and it was ultimately killed in a rare vote of bipartisan near-unanimity.
The idea is back. Before Thanksgiving, a House Republican leader suggested they might slip it into the annual defense spending bill. Then, a draft document leaked outlining the Trump administration’s intent to enforce the state regulatory ban through executive powers. An outpouring of opposition (including from some Republican state leaders) beat back that notion for a few weeks, but on Monday, Trump posted on social media that the promised Executive Order is indeed coming soon. That would put a growing cohort of states, including California and New York, as well as Republican strongholds like Utah and Texas, in jeopardy.
The constellation of motivations behind this proposal is clear: conservative ideology, cash, and China.
The intellectual argument in favor of the moratorium is that “freedom”-killing state regulation on AI would create a patchwork that would be difficult for AI companies to comply with, which would slow the pace of innovation needed to win an AI arms race with China. AI companies and their investors have been aggressively peddling this narrative for years now, and are increasingly backing it with exorbitant lobbying dollars. It’s a handy argument, useful not only to kill regulatory constraints, but also—companies hope—to win federal bailouts and energy subsidies.
Citizens should parse that argument from their own point of view, not Big Tech’s. Preventing states from regulating AI means that those companies get to tell Washington what they want, but your state representatives are powerless to represent your own interests. Which freedom is more important to you: the freedom for a few near-monopolies to profit from AI, or the freedom for you and your neighbors to demand protections from its abuses?
There is an element of this that is more partisan than ideological. Vice President J.D. Vance argued that federal preemption is needed to prevent “progressive” states from controlling AI’s future. This is an indicator of creeping polarization, where Democrats decry the monopolism, bias, and harms attendant to corporate AI and Republicans reflexively take the opposite side. It doesn’t help that some in the parties also have direct financial interests in the AI supply chain.
But this does not need to be a partisan wedge issue: both Democrats and Republicans have strong reasons to support state-level AI legislation. Everyone shares an interest in protecting consumers from harm created by Big Tech companies. In leading the charge to kill Cruz’s initial AI moratorium proposal, Republican Senator Marsha Blackburn explained that “This provision could allow Big Tech to continue to exploit kids, creators, and conservatives … we can’t block states from making laws that protect their citizens.” More recently, Florida Governor Ron DeSantis has said he wants to regulate AI in his state.
The often-heard complaint that it is hard to comply with a patchwork of state regulations rings hollow. Pretty much every other consumer-facing industry has managed to deal with local regulation—automobiles, children’s toys, food, and drugs—and those regulations have been effective consumer protections. The AI industry includes some of the most valuable companies globally and has demonstrated the ability to comply with differing regulations around the world, including the EU’s AI and data privacy regulations, substantially more onerous than those so far adopted by US states. If we can’t leverage state regulatory power to shape the AI industry, to what industry could it possibly apply?
The regulatory superpower that states have here is not size and force, but rather speed and locality. We need the “laboratories of democracy” to experiment with different types of regulation that fit the specific needs and interests of their constituents and evolve responsively to the concerns they raise, especially in an area as consequential and rapidly changing as AI.
We should embrace the ability of regulation to be a driver—not a limiter—of innovation. Regulations don’t restrict companies from building better products or making more profit; they help channel that innovation in specific ways that protect the public interest. Drug safety regulations don’t prevent pharma companies from inventing drugs; they force them to invent drugs that are safe and efficacious. States can direct private innovation to serve the public.
But, most importantly, regulations are needed to prevent the most dangerous impact of AI today: the concentration of power associated with trillion-dollar AI companies and the power-amplifying technologies they are producing. We outline the specific ways that the use of AI in governance can disrupt existing balances of power, and how to steer those applications towards more equitable balances, in our new book, Rewiring Democracy. In the nearly complete absence of congressional action in the years since AI swept the world’s attention, it has become clear that states are the only effective policy lever we have against that concentration of power.
Instead of impeding states from regulating AI, the federal government should support them to drive AI innovation. If proponents of a moratorium worry that the private sector won’t deliver what they think is needed to compete in the new global economy, then we should engage government to help generate AI innovations that serve the public and solve the problems most important to people. Following the lead of countries like Switzerland, France, and Singapore, the US could invest in developing and deploying AI models designed as public goods: transparent, open, and useful for tasks in public administration and governance.
Maybe you don’t trust the federal government to build or operate an AI tool that acts in the public interest? We don’t either. States are a much better place for this innovation to happen because they are closer to the people, they are charged with delivering most government services, they are better aligned with local political sentiments, and they have achieved greater trust. They’re where we can test, iterate, compare, and contrast regulatory approaches that could inform eventual and better federal policy. And, while the costs of training and operating performant AI tools like large language models have declined precipitously, the federal government can play a valuable role here in funding cash-strapped states to lead this kind of innovation.
This essay was written with Nathan E. Sanders, and originally appeared in Gizmodo.
EDITED TO ADD: Trump signed an executive order banning state-level AI regulations hours after this was published. This is not going to be the last word on the subject.
I bought something today
Dec. 14th, 2025 10:26 am
I bought something for my second bike trailer build on Saturday.
The trailer’s basically been done for weeks already. I’m adding details and accessories now, like, I want to sew a cover, and I want to add reflectors. So I took it for another little shakedown ride, this time to a hardware store I found out had DOT-grade adhesive reflectors in stock for… more money than I’d like, but not unreasonable money.
Here’s what I’ve done with those stickers so far. I think it’s pretty good. The rear view is my biggest concern, given that my bike is well-lit, and this… frankly ugly flash photo… makes the reflectors pop well, showing how they’d reflect headlights. It’ll help:
[image: flash photo of the trailer’s rear reflectors]
But it occurred to me as I was doing all this that…
This is the first time I’ve bought something for this project.
The trailer frame was salvaged from a semi-wrecked kiddo hauler abandoned outdoors for over a year. The platform is made from a cargo pallet someone illegally dumped and I salvaged; the metal clamps holding it in place I shaped out of old building strapping. I literally found the warning flag pole on the street, and it inserts into a metal tube salvaged from a housemate’s broken laundry rack. I made a flag for it from scrap fabric. The cage is made from Buy Nothing-listed DIY cube shelving, the kind that never really works right, but there’s nothing wrong with the wire squares that a whole bunch of zip ties can’t fix. Other parts are 3D-printed, designed by me, printed by me, at home.
Everything else was just ordinary supplies I already had.
But when it came to the reflectors… I looked around a little, but then… I just went and bought something. And I have kind of mixed feelings about that!
I mean, it’s fine. Really. At some point, I’m going to want to replace these tyres, too, and that’s a purchase – they were also left outdoors for at least a year and as a result are semi-rotted. They’re only still usable because I used a lot of silicone glue to make a reinforcement coat on the walls. (Hey, it’s not stupid if it works, and it works.) So sooner or later, money was going to be spent.
But even so, just buying something – even if it’s something you legitimately can’t make at home, like DOT-spec reflective material – feels like cheating. I kinda don’t like it.
Part of it is that I started making these cargo carriers around the time Anna got laid off, and even after she finally got a new job earlier this year, I kept the same approach. Sure, it helped that I already had basically everything I needed by that time, but also, we’re trying to make up for a lot of lost money and time, so I kept doing things the same way.
Until today, when I didn’t. I did it the normal way instead. It’s a very normal thing. You need an item, a part, whatever – you can just buy it.
And… maybe… maybe it’s just how extremely abnormal everything else is right now, in this endless emergency… but…
I just don’t know how I feel about that.
Posted via Solarbird{y|z|yz}, Collected.
Upcoming Speaking Engagements
Dec. 14th, 2025 05:10 pm
This is a current list of where and when I am scheduled to speak:
- I’m speaking and signing books at the Chicago Public Library in Chicago, Illinois, USA, at 6:00 PM CT on February 5, 2026. Details to come.
- I’m speaking at Capricon 44 in Chicago, Illinois, USA. The convention runs February 5-8, 2026. My speaking time is TBD.
- I’m speaking at the Munich Cybersecurity Conference in Munich, Germany on February 12, 2026.
- I’m speaking at Tech Live: Cybersecurity in New York City, USA on March 11, 2026.
- I’m giving the Ross Anderson Lecture at the University of Cambridge’s Churchill College on March 19, 2026.
- I’m speaking at RSAC 2026 in San Francisco, California, USA on March 25, 2026.
The list is maintained on this page.
Friday Squid Blogging: Giant Squid Eating a Diamondback Squid
Dec. 12th, 2025 10:00 pm
I have no context for this video—it’s from Reddit—but one of the commenters adds some context:
Hey everyone, squid biologist here! Wanted to add some stuff you might find interesting.
With so many people carrying around cameras, we’re getting more videos of giant squid at the surface than in previous decades. We’re also starting to notice a pattern, that around this time of year (peaking in January) we see a bunch of giant squid around Japan. We don’t know why this is happening. Maybe they gather around there to mate or something? who knows! but since so many people have cameras, those one-off monster-story encounters are now caught on video, like this one (which, btw, rips. This squid looks so healthy, it’s awesome).
When we see big (giant or colossal) healthy squid like this, it’s often because a fisher caught something else (either another squid or sometimes an antarctic toothfish). The squid is attracted to whatever was caught and they hop on the hook and go along for the ride when the target species is reeled in. There are a few colossal squid sightings similar to this from the southern ocean (but fewer people are down there, so fewer cameras, fewer videos). On the original instagram video, a bunch of people are like “Put it back! Release him!” etc, but he’s just enjoying dinner (obviously as the squid swims away at the end).
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Building Trustworthy AI Agents
Dec. 12th, 2025 12:00 pm
The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us into doubting who we are or what we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.
These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. Integrity is the third leg of data security—the old CIA triad of confidentiality, integrity, and availability. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.
The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.
To further development along these lines, I and others have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.
What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.
First, it would be broadly accessible as a data repository. We each want it to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others—emails, texts, social media posts. Some of it would be raw data, and some of it would be processed: revealed preferences, conclusions inferred by other systems, maybe even the raw weights of a personal LLM.
Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can’t be tied to a single AI model. Our AI future will include many different models—some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.
Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate (see the sketch after the sixth requirement).
Fourth, it would be under the user’s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.
Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person’s data. And there are also write attacks, where adversaries add to or change a user’s data. Defending against both is critical; this all implies a complex and robust authentication system.
Sixth, and finally, it must be easy to use. If we’re envisioning digital personal assistants for everybody, it can’t require specialized security training to use properly.
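Requirement three is the least familiar of the six, so here is a minimal sketch of one building block for it: issuer-signed claims over individual fields, disclosed selectively. The claim format and field names here are invented for illustration; a real system would more likely build on something like W3C Verifiable Credentials.

```python
# Sketch: selectively disclosing one signed claim from a personal data
# store to a verifier (say, a bank). The claim format is invented.
# Requires the "cryptography" package.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An issuer (an employer, a bank, a government agency) signs one claim.
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps(
    {"subject": "user-123", "field": "annual_income",
     "value": 85000, "issued": "2025-11-01"},
    sort_keys=True,  # canonical serialization keeps signatures stable
).encode()
signature = issuer_key.sign(claim)

# The user discloses only this claim and its signature; everything else
# in the store stays secret. The verifier checks the signature against
# the issuer's published public key; verify() raises InvalidSignature
# if the claim was tampered with.
issuer_key.public_key().verify(signature, claim)
print("claim verified:", json.loads(claim)["field"])
```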
I’m not the first to suggest something like this. Researchers have proposed a “Human Context Protocol” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.
The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can’t ignore one or the other.
Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.
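To make the control, audit, and correction pieces concrete, here is a minimal in-memory sketch with invented names. A real store would need durable storage, real authentication, and cryptographic enforcement rather than a Python permission check.

```python
# Sketch of fine-grained, revocable access grants with an audit trail.
# All names are illustrative; policy is enforced only in memory here.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PersonalDataStore:
    data: dict[str, object]                            # field -> value
    grants: dict[str, set[str]] = field(default_factory=dict)
    audit_log: list[tuple] = field(default_factory=list)

    def grant(self, client: str, fields: set[str]) -> None:
        self.grants.setdefault(client, set()).update(fields)

    def revoke(self, client: str) -> None:
        self.grants.pop(client, None)                  # effective immediately

    def correct(self, name: str, value: object) -> None:
        self.data[name] = value                        # the user owns the record

    def read(self, client: str, fields: set[str]) -> dict:
        allowed = self.grants.get(client, set())
        ok = fields <= allowed
        self.audit_log.append((datetime.now(timezone.utc), client, sorted(fields), ok))
        if not ok:
            raise PermissionError(f"{client} lacks access to {fields - allowed}")
        return {f: self.data[f] for f in fields}

store = PersonalDataStore({"name": "Alice", "income": 85000, "health": "..."})
store.grant("loan-agent", {"name", "income"})
print(store.read("loan-agent", {"income"}))  # allowed, and logged
store.revoke("loan-agent")                   # the user pulls access back
store.correct("income", 91000)               # and can correct the record
```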
This essay was originally published in IEEE Security & Privacy.
AIs Exploiting Smart Contracts
Dec. 11th, 2025 05:06 pm
I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.
Here’s some interesting research on training AIs to automatically exploit smart contracts:
AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on the Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense.
the united states declares strategic war on the EU
Dec. 10th, 2025 09:31 am
Anders Puck Nielsen speaks on Trump/MAGA’s new U.S. National Security Strategy document:
It is official US policy to work towards regime change in European countries, and to weaken or even destroy the European Union.
https://www.youtube.com/watch?v=YAh-xEteBz4
This is correct. The document is very clear on that point. But here’s more from Anders:
The United States sees it as a strategic priority… that MAGA movements come to power in Europe, and they intend to use the means that they have to support such movements in the fight against the current centrist governments.
These are some very dramatic statements that have raised deep questions about whether there is any foundation for NATO to function going forward if the United States sees it as a strategic priority to undermine the governments of other NATO countries. …
It’s really hard to see how there can be an alliance any more. The reality is that the views expressed in this [policy document] are in many ways identical to the Russian viewpoints on Europe and the Russian goals of regime change in European countries.
He further discusses the document’s demands for ‘free speech,’ in the sense of ending social media moderation and opposing the exclusion of hate speech, the lifeblood of MAGA fascism. There are several demands in the document around these topics, which he sees – correctly – as focused on helping Elon Musk and Mark Zuckerberg push MAGA/fascist propaganda into Europe through their algorithmically-driven propaganda machines.
Elon Musk’s “X” is the bigger threat, of course. As Nielsen puts it: “If you’re European, then it is a national security priority to stop using X.” Elon Musk bought Twitter to turn it into a fascist propaganda fountain, as opposed to Zuckerberg’s primary intention of making as much money as possible, working with fascists if that’s what gets the job done.
I have, of course, been saying that it’s time to stop using X since a few months into Elon Musk’s takeover of the site, for exactly this reason, but, well – who the fuck listens to me?
Anders’s final key takeaway here is that this document doesn’t show a MAGA-led US deciding not to care at all about Europe, but instead shows a US deciding to care very much about Europe – mostly western Europe – with the specific and stated intention of installing MAGA governments, telling Europe that they must be MAGA – fascist – to be allied with the US.
This move would be an extension of what MAGA see as “their” western hemisphere, which other than western Europe means North and South America, including Greenland.
Naturally, this process would include granting Russia and Trump’s second-best pal Putin their own sphere of influence in the east. This portends the US’s impending betrayal of Ukraine, and later, a betrayal of the Baltic states, Poland, probably a couple of others (Moldova? Romania? Bulgaria?) as well.
But why? Does Trump love Putin and Orban that much?
I believe it’s more than that, and more than Trump’s ego, believe it or not. It’s more than his desperate longing to be a dictator and it’s more than his sheer will to steal every dollar in sight. Trump and MAGA, well… they are definitively fools, morons, white nationalists, imperialists, longing for a white imperial past. But I still think that Putin has more choate strategic plans than Trump, and I still think Ukraine is a climate war, so…
…shall I post this line again? Sure, I’ll post this line again. Here’s what I think Putin really wants – not what he’ll get, what he wants. It’s a minimum goal, to “secure” the nation:
[map: Europe, with a thick line marking the border Putin would want]
That’s oversimplified, of course, but this is a small map and a big thick line. The reality would be far different, and most likely more like existing national boundaries, but still: it gets the idea across.
Meanwhile, when Russian maximalists and propaganda shills talk about how “we should march all the way to Paris” – which they do, repeatedly – here’s what I think they want:
[map: Europe, with a thick line much further west, along the mountains]
And what do these lines have in common?
Mountains.
Tall, easier to defend, mountainous, migration-blocking borders.
It’s simple-minded in a lot of ways, I suppose, but so was keeping the border at the Rhine, and that kept French foreign policy busy for a few centuries – border politics don’t have to be all that complex.
Putin et al – they know climate change is real. Trump’s a decaying fool and might not know now if he ever did, but Putin? He knows. But heading a petrostate dictatorship with lots of far-northern land? He doesn’t want to stop it, because it’s the outsourced expense of allllllll Russia’s money, and if billions die, well, that’s the cost of doing business.
I call map one Putin’s Wall. Map two? Let’s call it Solovyov’s Wall, since as far as I can tell he’s the most famous proponent of “marching all the way to Paris.” Solovyov’s Wall isn’t attainable – it won’t happen, it’s (ugh) aspirational – but I do think Trump wants to give Putin his wall, and that Putin has enough trust in Russia’s ability to handle MAGA that he’s willing to let Trump and his replacements handle the west.
Personally, I think MAGA has enough interest in a semi-mythical White Europe that they’re willing to do it. As long as they’re led by the right – white, fascist – governments.
Hence, this hideous betrayal of a document.
That said, let me be real clear about something: On their own, Russia cannot attain Putin’s Wall. It’d take a complete American betrayal and European capitulation for them to have any chance. They cannot do it alone.
But thanks to MAGA and Trump, they’re on the edge of getting that American betrayal. They want to push that betrayal to completion. If they get it, then they’ll help the US make MAGA happen in Europe, in order to get the second necessary condition of European collapse and capitulation.
Russia’s no match for the EU as a whole. But torn apart? Picking off one little country at a time is… it’s not easy, it’s absolutely not, but they’re willing to kill as many of their own as is necessary for as long as is necessary to do it. Particularly if they’re ethnic minorities. And nobody wants to flee a climate disaster to a war zone anyway, so he wins either way. Whether deterred by mountains or by war, refugees would go elsewhere, or not at all.
And that’s why I think this is a climate war. Not a war triggered by climate changes in Russia, but by Russia wanting to keep oil and gas going forever and keep out the people that will starve and kill.
You noticed Iran saying that Tehran will have to be abandoned as a capital, didn’t you? It’s more corruption and incompetence than climate change – but it’s a bit of all three. Climate change has moved the timetable. Made things worse. And yet, we’re just getting started.
So, then. Where are we? Ah, yes. How this all plays out.
There’s a bit of a feeling out there that Trump is weakened, and there are even some who think that this nightmare is… more or less over. That Trump is a “lame duck,” that there is no MAGA without him.
That’s partially true. Trump is weakened. MAGA is, too, and they’ve been dependent upon his stardom – and fandom – to reach critical mass. They will be badly wounded – but not out – once he goes.
But none of that means this is over. The more trouble MAGA and Trump think they’re in, the more Trump and MAGA will lash out, trying to push their fascist power fantasies into existence. We will all see more betrayals, more sabotage, more oppression – the ICE army of white supremacists they’re working to summon into existence, funded by the so-called “big beautiful bill,” will commit violence and abuses that dwarf what we’ve seen this year.
It’s their vision of the future, and they’re going to fight for it. It’s what they want, it’s what they’re all in to get, and it’s what they will do anything to achieve.
And they will not go down quietly. Take heart in the recent massive election shifts. Take heart in Trump’s decay and weakness and failing… opinion polls. Take heart in the America First/MAGA civil war. Take heart in all of it.
But do not, for a moment, think this is actually over.
Posted via Solarbird{y|z|yz}, Collected.
FBI Warns of Fake Video Scams
Dec. 10th, 2025 12:05 pm
The FBI is warning of AI-assisted fake kidnapping scams:
Criminal actors typically will contact their victims through text message claiming they have kidnapped their loved one and demand a ransom be paid for their release. Oftentimes, the criminal actor will express significant claims of violence towards the loved one if the ransom is not paid immediately. The criminal actor will then send what appears to be a genuine photo or video of the victim’s loved one, which upon close inspection often reveals inaccuracies when compared to confirmed photos of the loved one. Examples of these inaccuracies include missing tattoos or scars and inaccurate body proportions. Criminal actors will sometimes purposefully send these photos using timed message features to limit the amount of time victims have to analyze the images.
Images, videos, audio: It can all be faked with AI. My guess is that this scam has a low probability of success, so criminals will be figuring out how to automate it.
AI vs. Human Drivers
Dec. 9th, 2025 12:07 pm
Two competing arguments are making the rounds. The first is by a neurosurgeon in the New York Times. In an op-ed that honestly sounds like it was paid for by Waymo, the author calls driverless cars a “public health breakthrough”:
In medical research, there’s a practice of ending a study early when the results are too striking to ignore. We stop when there is unexpected harm. We also stop for overwhelming benefit, when a treatment is working so well that it would be unethical to continue giving anyone a placebo. When an intervention works this clearly, you change what you do.
There’s a public health imperative to quickly expand the adoption of autonomous vehicles. More than 39,000 Americans died in motor vehicle crashes last year, more than homicide, plane crashes and natural disasters combined. Crashes are the No. 2 cause of death for children and young adults. But death is only part of the story. These crashes are also the leading cause of spinal cord injury. We surgeons see the aftermath of the 10,000 crash victims who come to emergency rooms every day.
The other is a soon-to-be-published book: Driving Intelligence: The Green Book. The authors, a computer scientist and a management consultant with experience in the industry, make the opposite argument. Here’s one of the authors:
There is something very disturbing going on around trials with autonomous vehicles worldwide, where, sadly, there have now been many deaths and injuries both to other road users and pedestrians. Although I am well aware that there is not, sensu stricto, a legal and functional parallel between a “drug trial” and “AV testing,” it seems odd to me that if a trial of a new drug had resulted in so many deaths, it would surely have been halted and major forensic investigations carried out and yet, AV manufacturers continue to test their products on public roads unabated.
I am not convinced that it is good enough to argue from statistics that, to a greater or lesser degree, fatalities and injuries would have occurred anyway had the AVs been replaced by human-driven cars: a pharmaceutical company, following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway….
Both arguments are compelling, and it’s going to be hard to figure out what public policy should be.
This paper, from 2016, argues that we’re going to need metrics other than side-by-side comparisons: “Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?”:
Abstract: How safe are autonomous vehicles? The answer is critical for determining how autonomous vehicles may shape motor vehicle safety and public health, and for developing sound policies to govern their deployment. One proposed way to assess safety is to test drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this paper, we calculate the number of miles of driving that would be needed to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared to vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use. These findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet, the possibility remains that it will not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will remain. Therefore, it is imperative that autonomous vehicle regulations are adaptive—designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.
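The statistical core of that argument is simple to sketch. If a fleet drives N miles with zero fatalities, the one-sided 95% upper confidence bound on its fatality rate is roughly 3/N (the “rule of three”), so merely matching the US human benchmark of about 1.09 fatalities per 100 million miles takes on the order of 275 million failure-free miles; the paper’s billions figure comes from the harder problem of comparing two observed, nonzero rates. A back-of-the-envelope version (the benchmark rate is the paper’s; the code is mine):

```python
# Back-of-the-envelope version of the "drive to safety" argument.
# Rule of three: zero events observed over n trials puts the one-sided
# 95% upper confidence bound on the event rate at roughly 3/n.
import math

HUMAN_FATALITY_RATE = 1.09 / 100_000_000  # US fatalities per mile (2013)

def failure_free_miles(target_rate: float, confidence: float = 0.95) -> float:
    """Miles that must be driven with zero fatalities before the upper
    confidence bound on the fatality rate drops below target_rate."""
    return -math.log(1.0 - confidence) / target_rate  # exact Poisson form

# Demonstrating parity with human drivers (~275 million miles):
print(f"parity: {failure_free_miles(HUMAN_FATALITY_RATE) / 1e6:.0f}M miles")
# Demonstrating a 20% improvement raises the bar further:
print(f"20% better: {failure_free_miles(0.8 * HUMAN_FATALITY_RATE) / 1e6:.0f}M miles")
```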
One problem, of course, is that we treat death by human driver differently than we do death by autonomous computer driver. This is likely to change as we get more experience with AI accidents—and AI-caused deaths.
Substitution Cipher Based on The Voynich Manuscript
Dec. 8th, 2025 12:04 pm
Here’s a fun paper: “The Naibbe cipher: a substitution cipher that encrypts Latin and Italian as Voynich Manuscript-like ciphertext”:
Abstract: In this article, I investigate the hypothesis that the Voynich Manuscript (MS 408, Yale University Beinecke Library) is compatible with being a ciphertext by attempting to develop a historically plausible cipher that can replicate the manuscript’s unusual properties. The resulting cipher, a verbose homophonic substitution cipher I call the Naibbe cipher, can be done entirely by hand with 15th-century materials, and when it encrypts a wide range of Latin and Italian plaintexts, the resulting ciphertexts remain fully decipherable and also reliably reproduce many key statistical properties of the Voynich Manuscript at once. My results suggest that the so-called “ciphertext hypothesis” for the Voynich Manuscript remains viable, while also placing constraints on plausible substitution cipher structures.
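The paper gives the actual cipher tables; the general mechanism, verbose homophonic substitution, is easy to illustrate. Each plaintext letter maps to one of several multi-character tokens, chosen at random, which inflates the ciphertext and flattens letter frequencies. A toy sketch with an invented token table (not the Naibbe cipher’s):

```python
# Toy verbose homophonic substitution: each plaintext letter maps to one
# of several multi-character tokens ("homophones"), picked at random.
# The token table is invented; the paper specifies the real Naibbe tables.
import random

HOMOPHONES = {
    "a": ["qo", "dal"],   "e": ["che", "ol"],   "i": ["dy", "shy"],
    "l": ["or", "lchd"],  "n": ["aiin", "dar"], "s": ["chey", "oty"],
    "t": ["okal", "sho"], " ": ["."],
}
# Invert the table for decryption: every token maps back to one letter.
REVERSE = {tok: ch for ch, toks in HOMOPHONES.items() for tok in toks}

def encrypt(plaintext: str) -> str:
    # Random homophone choice flattens letter frequencies and inflates
    # length, two of the Voynich-like properties the paper measures.
    return " ".join(random.choice(HOMOPHONES[c]) for c in plaintext)

def decrypt(ciphertext: str) -> str:
    return "".join(REVERSE[tok] for tok in ciphertext.split(" "))

msg = "latin tales"
ct = encrypt(msg)           # e.g. "or qo okal dy aiin . sho dal or che chey"
assert decrypt(ct) == msg   # fully decipherable despite the random choices
print(ct)
```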
Friday Squid Blogging: Vampire Squid Genome
Dec. 5th, 2025 10:06 pm
The vampire squid (Vampyroteuthis infernalis) has the largest cephalopod genome ever sequenced: more than 11 billion base pairs. That’s more than twice as large as the biggest squid genomes.
It’s technically not a squid: “The vampire squid is a fascinating twig tenaciously hanging onto the cephalopod family tree. It’s neither a squid nor an octopus (nor a vampire), but rather the last, lone remnant of an ancient lineage whose other members have long since vanished.”
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
New Anonymous Phone Service
Dec. 5th, 2025 08:08 am
A new anonymous phone service allows you to sign up with just a zip code.