Judge Rejects Government’s Weak Attempt To Memory-Hole DOGE Deposition Videos [Techdirt] (04:07 , Wednesday, 25 March 2026)
Last week we covered how the government successfully convinced Judge Colleen McMahon to order the plaintiffs in the DOGE/National Endowment for the Humanities (NEH) lawsuit to “claw back” the viral deposition videos they had posted to YouTube — videos showing DOGE operatives Justin Fox and Nate Cavanaugh stumbling through questions about how they used ChatGPT to decide which humanities grants to kill, and struggling mightily to define “DEI” despite it apparently being the entire basis for their work.
The government’s argument was that the videos had led to harassment and death threats against Fox and Cavanaugh — the same two who had no problem obliterating hundreds of millions in already approved grants with a simplistic ChatGPT prompt, but apparently couldn’t handle the public seeing them struggle to explain themselves under oath. The government argued the videos needed to come down. The judge initially agreed and ordered the plaintiffs to pull them. As we noted at the time, archivists had already uploaded copies to the Internet Archive and distributed them as torrents, because that’s how the internet works.
Well, now Judge McMahon has issued a full ruling on the government’s motion for a protective order, and has reversed course. The government’s motion is denied. The videos are now back up. There are hours and hours of utter nonsense for you to enjoy.
The ruling is worth reading in full, because McMahon manages to be critical of both sides while ultimately landing firmly against the government’s attempt to suppress the videos. She spends a good chunk of the opinion scolding the plaintiffs for what she clearly views as a procedural end-run — they sent the full deposition videos to chambers on a thumb drive without ever filing them on the docket or seeking permission to do so, which she sees as a transparent attempt to manufacture a “judicial documents” argument that would give the videos a presumption of public access.
McMahon doesn’t buy it:
When deciding a motion for summary judgment, the Court wants only those portions of a deposition on which a movant actually relies, and does not want to be burdened with irrelevant testimony merely because counsel chose to, or found it more convenient to, submit it. And because videos cannot be filed on the public docket without leave of court, there was no need for the rule to contain a specific reference to video transcriptions; the only way to get such materials on the docket (and so before the Court) was to make a motion, giving the Court the opportunity to decide whether the videos should be publicly docketed. This Plaintiffs did not do.
But if Plaintiffs wanted to know whether the Court’s rule applied to video-recorded depositions, they could easily have sought clarification – just as they could easily have filed a motion seeking leave to have the Clerk of Court accept the videos and place them on the public record. Again, they did not. At the hearing held on March 17, 2026, on Defendants’ present motion for a protective order, counsel for ACLS Plaintiffs, Daniel Jacobson, acknowledged the reason, stating “Frankly, your Honor, part of it was just the amount of time that it would have taken” to submit only the portions of the videos on which Plaintiffs intended to rely. Hr’g Tr., 15:6–7. In other words, “It would have been too much work.” That is not an acceptable excuse.
The Court is left with the firm impression that at least “part of” the reason counsel did not ask for clarification was because they wished to manufacture a “judicial documents” argument and did not wish to be told they could not do so. The Court declines to indulge that tactic.
Fair enough. But having knocked the plaintiffs for their procedural maneuver, the judge then turns to the actual question: has the government shown “good cause” under Rule 26(c) to justify a protective order keeping the videos off the internet? And the answer is a pretty resounding no. And that’s because public officials acting in their official capacities have significantly diminished privacy interests in their official conduct:
The Government’s motion fails for three independent reasons. First, the materials at issue concern the conduct of public officials acting in their official capacities, which substantially diminishes any cognizable privacy interest and weighs against restriction. Second, the Government has not made the particularized showing of a “clearly defined, specific and serious injury” required by Rule 26(c). Third, the Government has not demonstrated that the prospective relief it seeks would be effective in preventing the harms it identifies, particularly where those harms arise from the conduct of third-party actors beyond the control of the parties.
She cites Garrison v. Louisiana (the case that extended the “actual malice” standard from NY Times v. Sullivan) for the proposition that the public’s interest “necessarily includes anything which might touch on an official’s fitness for office,” and that “[f]ew personal attributes are more germane to fitness for office than dishonesty, malfeasance, or improper motivation.” Given that these depositions are literally about how government officials decided to terminate hundreds of millions of dollars in grants, that framing fits.
The judge also directly calls out the government’s arguments about harassment and reputational harm, and essentially says: that’s the cost of being a public official whose official conduct is being scrutinized. Suck it up, DOGE bros.
Reputational injury, public criticism, and even harsh commentary are not unexpected consequences of disclosing information about public conduct. They are foreseeable incidents of public scrutiny concerning government action. Where, as here, the material sought to be shielded by a protective order is testimony about the actions of government officials acting in their official capacities, embarrassment and reputational harm arising from the public’s reaction to official conduct is not the sort of harm against which Rule 26(c) protects. Public officials “accept certain necessary consequences” of involvement in public affairs, including “closer public scrutiny than might otherwise be the case.”
As for the death threats and harassment — which McMahon explicitly says she takes seriously and calls “deeply troubling” and “highly inappropriate” — she notes that there are actual laws against threats and cyberstalking, and that Rule 26(c) protective orders aren’t a substitute for law enforcement doing its job:
There are laws against threats and harassment; the Government and its witnesses have every right to ask law enforcement to take action against those who engage in such conduct, by enforcing federal prohibitions on interstate threats and cyberstalking, see, e.g., 18 U.S.C. §§ 875(c), 2261A, as well as comparable state laws. Rule 26(c) is not a substitute for those remedies.
And then there’s the practical reality McMahon acknowledges directly: it’s too damn late. The videos have already spread everywhere. A protective order aimed solely at the plaintiffs would accomplish approximately nothing.
At bottom, the Government has not shown that the relief it seeks is capable of addressing the harm it identifies. The videos have already been widely disseminated across multiple platforms, including YouTube, X, TikTok, Instagram, and Reddit, where they have been shared, reposted, and viewed by at least hundreds of thousands of users, resulting in near-instantaneous and effectively permanent global distribution. This is a predictable consequence of dissemination in the modern digital environment, where content can be copied, redistributed, and indefinitely preserved beyond the control of any single actor. Given this reality, a protective order directed solely at Plaintiffs would not meaningfully limit further dissemination or mitigate the Government’s asserted harms.
Separately, the plaintiffs asked for attorney’s fees, and McMahon denied that too, noting that she wasn’t going to “reward Plaintiffs for bypassing its procedures” even though the government’s motion ultimately failed. So everyone gets a little bit scolded here. But the bottom line is clear: you don’t get to send unqualified DOGE kids to nuke hundreds of millions in grants using a ChatGPT prompt, and then ask a court to hide the video of them trying to explain themselves under oath.
Releasing full deposition videos is certainly not the norm, but given that these are government officials who were making massively consequential decisions with a chatbot and no discernible expertise, the world is much better off with this kind of transparency — even if Justin and Nate had to face some people on the internet making fun of them for it.
Summer beating out winter to make March one of warmest on record [Cardinal News] (04:00 , Wednesday, 25 March 2026)

Sunday brought on another summer afternoon in March.
High temperatures included 88 degrees at Danville, 87 at Lynchburg, 87 at Martinsville, 86 at Roanoke, and darn near the second 90-degree reading of March at South Boston, reaching 89.
These are close to normal high temperatures for mid- to late July, and similar to another heat surge 11 days earlier that was followed the next day by fairly widespread snowfall.
This past Sunday’s heat wasn’t followed by snow, but some locations did drop below the freezing mark by Tuesday morning.
We’ll be back to 70s and 80s highs on Thursday, and after some Friday showers and perhaps a few thunderstorms, it could well be below freezing again by Sunday morning, before next week warms up.

Virginia has been on the eastern fringe of an extreme “heat dome” high-pressure system that is rewriting March heat records in much of the western, central and southern U.S. The area of heat is so intense and expansive that this March appears almost certain to be the warmest on record in the contiguous 48 states.
Occasionally, this heat dome has expanded eastward far enough to pull Virginia into the heat wave, but we are also just far enough east that cold fronts have been pulled around the high from the north to bring short but sharp shots of cold and a few cooler days.
Overall, however, the hot spells are overwhelming the cold snaps, and this March is poised to be one of the warmest on record for much of our region.
If the month had ended Tuesday, it would rank as tied for fourth warmest March on record at Danville, fifth warmest on record at Roanoke, seventh warmest at Lynchburg, and tied for ninth warmest at Blacksburg, each 7 to 9 degrees above normal.
Even with some up-and-down swings between cool days and warm ones, the last week of the month likely will not bring enough chill to pull this March out of the top 10 for warmth at these sites, all of which have over a century of official weather records.
And April looks like it will begin with more of the western heat dome starting to get shoved eastward toward us, as the pattern begins to shift to one with a western trough of low pressure that may ramp up the spring storms in the central U.S.

There have been a lot of 90s and even 100-plus temperatures unusually early west of the Mississippi, with at least one instance of 90 degrees in our region.
South Boston poked 90 degrees on March 11 — officially recorded as March 12 because of the 8 a.m. to 8 a.m. EDT observation cycle of the co-op weather station, which is different from the midnight-to-midnight cycle of major climate stations. (A somewhat frustrating subject for another day.)
Whether recorded as March 11 or March 12, South Boston’s 90-degree high this month appears to be the earliest on the calendar that any official weather station in all of Cardinal News’ Southwest and Southside Virginia coverage area has reached the 90-degree mark, edging out Danville’s 91 on March 13, 1990.
In fact, that appears to be only the 12th time any official weather station in our region has recorded 90 degrees or higher on any date in March.
Danville has hit 90 in March on two other occasions: 91 on March 17, 1945, and 90 on March 31, 1985.

Lynchburg has hit 90 in March three times — all of them in 1907, before the start of data at any of the other stations that have had 90s in March. Those happened on March 22, March 23 and March 29, the latter being 92 degrees, apparently the hottest March day in official data for any location in our region.
The John H. Kerr Dam in Mecklenburg County has hit 90 four times in March, never earlier than March 17, in 1985, 2007, 2016, and 2023.
Roanoke’s hottest March day was a 90-degree high on March 19, 1945. The Star City’s high hit 87 on March 11 this year, the day before a tenth of an inch of snow, apparently the only time a temperature in the 80s and measurable snow have occurred on consecutive days at least as far back as 1912.

Cut 10 degrees off the bar and consider 80-plus temperatures, and this has still been a remarkable month of warmth.
Danville has already had six days at or above 80 degrees, which is the most in March since 2007 and tied for third most in over a century of record. Lynchburg, Roanoke and Martinsville have gone above 80 five times, among the top five for 80-degree days in March.
Even Abingdon has had three days above 80, the most in March since 1998 and third most since its records began in 1969.
After another cooldown this weekend, it appears there may be more premature sizzle on the way as March shifts to April.
Journalist Kevin Myatt has been writing about weather for 20 years. His weekly column, appearing on Wednesdays, is sponsored by Oakey’s, a family-run, locally-owned funeral home with locations throughout the Roanoke Valley.
To submit a photo, send it to weather@cardinalnews.org or tweet it to @CardinalNewsVa or @KevinMyattWx. Please identify the location and date of the photo with each submission.
VT Lacrosse vs. Duke [collegiatetimes.com] (03:13 , Wednesday, 25 March 2026)
VT Softball vs. NC State [collegiatetimes.com] (03:07 , Wednesday, 25 March 2026)
VT Men's Tennis vs SMU [collegiatetimes.com] (03:02 , Wednesday, 25 March 2026)
Gov’t Admits More Than 100 Asylum Seekers Were Deported In Violation Of A *Single* Court Order [Techdirt] (02:02 , Wednesday, 25 March 2026)
By any means, necessary or not: that’s how this administration gets its bigoted version of immigration enforcement done. The surges targeting cities and states that Trump doesn’t consider loyal enough serve a dual purpose. They punish states run by Democratic Party members simply for being run by Democratic Party members. And they flood courts with more cases than they can possibly handle, allowing the government to deny rights and deport people at scale.
The government doesn’t always get away with it. But given the scale, the government generally doesn’t get reined in until long after massive amounts of damage have been done.
That’s the case here in Maryland, where a lawsuit initiated shortly after Trump began sending Venezuelans to El Salvador’s hellhole prison for purely punitive reasons continues to play out. It involves a Venezuelan asylum seeker who was ejected from the country via Trump’s non-wartime invocation of the Alien Enemies Act to excuse the government’s refusal to respect due process rights.
As is the case with many federal judges dealing with Trump’s war on migrants, Maryland federal judge Stephanie Gallagher no longer takes the government at its word. That’s why she has been ordering immigration officials to testify in court, where they can be cross-examined and/or questioned by the judge herself.
And that’s the last thing this government wants, because it can’t even survive the minimal judicial scrutiny of its filed motions, which are usually crafted by teams of lawyers and not by the front-line employees and supervisors judges are ordering to testify.
David Kurtz of Talking Points Memo attended a recent hearing hosted by Judge Gallagher in this long-running case. Gallagher and the plaintiff’s attorney wanted to know why the government seemed to be violating an existing court order when it wrongfully removed two other asylum seekers in February.
What they heard instead was the perhaps inadvertent admission by the government that the three known (and potentially illegal) removals being discussed were pretty much just a rounding error:
Before today, the number of wrongfully deported asylum seekers in the case was thought to be less than a dozen. But under persistent questioning from plaintiff’s counsel, U.S. Citizenship and Immigration Services asylum officer Kimberly Sicard testified that in the past three to four weeks it had come to her attention that more than 100 asylum seekers covered by the settlement agreement have been removed. She put the number in the “low 100s.”
That’s insane. Those are the actions of a government that truly does not care what illegal acts it engages in so long as they contribute to the end goal of subtracting non-white people from this nation.
And it’s obviously intentional. That much was made clear in Sicard’s testimony.
Asked how the additional removals had come to her attention, Sicard said she wasn’t sure of the exact process but that officials had “queried systems.” As part of the process of notifying ICE of the wrongful removals, the matter went to the office of chief counsel at USCIS three to four weeks ago, Sicard said.
That means the government can query its detention databases in order to prevent possibly illegal removals. It also means the government can find out how many illegal removals it might have engaged in. The “three to four weeks” just means the USCIS chief counsel spent a lot of time trying to figure out how to legally justify illegal removals that now total in the “low hundreds.” And it means all of these capabilities are either rarely used or, more likely, deliberately ignored by government agencies that have all been tasked with respecting rights first and carrying out their missions second.
Speaking of ignoring things, this revelation may never have occurred if the government had even attempted to comply with the judge’s previous court order:
The revelation was the pinnacle of a day of frustration for Gallagher. She had listed in her order calling the hearing five topics on which she expected the Trump administration to produce witnesses “with personal knowledge” to testify. The government failed to produce such witnesses.
“Failed” just means “refused” under Trump and his bigoted sidekicks. Because this administration felt this was just another court order it could ignore, someone without “personal knowledge” of the topics under discussion was sent to court to take the heat. And because she wasn’t expected to offer anything but shrugs, the USCIS witness responded honestly to questions that apparently weren’t covered by whatever minimal guidance DHS offered before she was put on the stand.
It’s this sort of sloppy arrogance that’s going to continue to derail some of the worst things this administration wants to do. And we’re safe to assume the arrogance and sloppiness will continue, because Trump has made absolutely no effort to rid himself of loyalists, no matter how sloppy, stupid, and undeservedly arrogant they are.
Daily Deal: Geekey Multi-Tool [Techdirt] (01:57 , Wednesday, 25 March 2026)
Geekey is an innovative, compact multi-tool like nothing seen before. It’s truly a work of art with engineering that combines everyday common tools into one sleek little punch that delivers endless capability. Geekey features many common tools that have been used for decades and proven essential for everyday fixes. It’s on sale for $19.55 with the code MARCH15 at checkout.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
VT Men's Tennis vs. Boston College [collegiatetimes.com] (12:38 , Wednesday, 25 March 2026)
AI Might Be Our Best Shot At Taking Back The Open Web [Techdirt] (12:28 , Wednesday, 25 March 2026)
I remember, pretty clearly, my excitement over the early World Wide Web. I had been on the internet for a year or two at that point, mostly using IRC, Usenet, and Gopher (along with email, naturally). Some friends I had met on Usenet were students at the University of Illinois at Urbana-Champaign, and told me to download NCSA Mosaic (this would have been early 1994). And suddenly the possibility of the internet as a visual medium became clear. I rushed down to the university bookstore and picked up a giant 400ish page book on building websites with HTML (I only finally got rid of that book a few years ago). I don’t think I ever read beyond the first chapter. But what I did do was learn how to right click on webpages and “view source.”
And from that, magic came.
I had played around with trying to build websites, and I remember another friend telling me about GeoCities (I can’t quite recall if this was before or after they had changed their name from their original “Beverly Hills Internet”) handing out web sites for free. You just had to create the HTML pages and upload them via FTP.
And so I started designing really crappy websites. I don’t remember what the early ones had, but like all early websites they probably used the blink tag and had under construction images and eventually a “web counter.”
But the thing I do remember was the first time I came across Derek Powazek’s Fray online magazine. It was the first time I had seen a website look beautiful. This was without CSS and without JavaScript. I still remember quite clearly an “issue” of Fray that used frames to create some kind of “doors” you could slide open to reveal an article inside.
Right click. View source. Copy. Mess around. A week later I had my own (very different) version of the sliding doors on my GeoCities site, but using the same HTML bones as Derek’s brilliant work.
You could just build stuff. You could look at what others were doing and play around with it. Copy the source, make adjustments, try things, and have something new. There were, certainly, limitations of the technology, but it was incredibly easy for anyone to pick up. Yes, you had to “learn” HTML, but you could pick up enough basics in an afternoon to build a decent looking website.
But then two things happened, and it’s worth separating them because they’re different problems with different causes.
First, the technical barrier went up. CSS and JavaScript opened up incredible possibilities to make websites beautiful and interactive, but they also meant it was a lot more difficult to just view source, copy, and mess around. The gap between “basic functional website” and “actually looks good” widened into a chasm that required real expertise to cross. Plenty of dedicated people learned these skills, but the casual tinkerer — the person who’d spend an afternoon copying Derek’s frames to make sliding doors — increasingly couldn’t keep up.
But the technical complexity alone didn’t kill amateur web building. The centralization did. While there was an interim period where people set up their own blogs, it quickly moved to walled “social media gardens” where some giant tech company decided what your page looked like. Why bother learning CSS when you could just dump text in a Facebook box and reach more people? The incentive to build your own thing evaporated, replaced by the convenience of posting to someone else’s platform under someone else’s (hopefully benign) rules.
These two problems reinforced each other. The harder it got to build your own thing, the more attractive the walled gardens became. The more people moved to walled gardens, the less reason there was to learn to build.
The rise of agentic AI tools is opening up an opportunity to bring us back to that original world of wonder where you could just build what you wanted, even without a CS degree. And here I need to be specific about what I mean by “agentic AI” — because too many people are overly focused on the chatbots that answer questions or generate text or images for you. I’m talking about AI systems that can actually do things: write code, execute it, debug it, iterate on it based on your feedback. Tools like Claude Code, Cursor, Codex, Antigravity, or similar coding agents that can take a description of what you want and actually build it.
For all those years that tech bros would shout “learn to code” at journalists, the reality now is that being able to write well and accurately describe things is a superpower that is even better than code. You can tell a coding agent what to do… and for the most part it will do it.
Let me give you the example that still kind of blows my mind. A few weeks ago, in the course of a Saturday — most of which I actually spent building a fence in my yard — I had a coding agent build an entire video conferencing platform. It built a completely functional platform with specific features I’d wanted for years but couldn’t find in existing tools. I’ve now used it for actual staff meetings. The fence took longer to build than the software.
All it took was describing what I wanted to an agent that could code it for me. And it addresses both problems I described earlier: it lowers the technical barrier back down to “can you describe what you want clearly?” while also enabling you to build your own thing rather than accepting whatever some platform offers you.
Over the last few months I’ve been finding I need to retrain my brain a bit about what we accept and learn to deal with vs. what we can fix ourselves. In the past I’ve talked about the learned helplessness many people feel about the tech that we use. We know that it’s vaguely working against us, and we all have to figure out what trade-offs we’re willing to accept to accomplish whatever goals we have.
But what if we could just fix things rather than accepting the tradeoffs?
I’ve talked in the past about how I’ve used an AI-assisted writing tool called Lex over the past few years, which doesn’t write for me, but is a very useful editorial assistant. Over the last few months, though, I decided to see if I could effectively rebuild that tool myself, fully controlled by me, without having to rely on a company that might change or enshittify the app. I actually built it directly into the other big AI experiment I’ve spoken about: my task management tool, which I’ve also moved away from a third party hosting service onto a local machine. Indeed, I’m writing this article right now in this tool (I first created a task to write about it, and then by clicking a checkbox that it was a “writing project” it automatically opens up a blank page for me to write in, and when I’m done, I’ll click a button and it will do a first pass editorial review).
But the amazing thing to me is that I keep remembering I can fix anything I come across that doesn’t work the way I want it to. With any other software I have to adjust. With this software, I just say “oh hey, let’s change this.” I find that a few times a week I’ll make a small tweak here or there that just makes the software even better. In the past, I would just note a slight annoyance and figure out how to just deal with software not working the way I wanted. But now, my mind is open to the fact that I can just make it better. Myself.
An example: literally last night, I realized that the page in the task tool that lists all the writing projects I’m working on was getting cluttered by older completed projects that were listed as still being in “drafting” mode. With other tools (including the old writing tool I was using), I would just learn to mentally compartmentalize the fact that the list of articles was a mess and train myself to ignore the older articles and the digital clutter. But here, I could just lay out the issue to my coding agent, and after some back and forth, we came up with a system whereby once a task on the task management side was checked off as “completed” the corresponding writing project would similarly get marked as completed and then would be hidden away in a minimized list.
I keep coming across little things like this that, in the past, I would have been mildly annoyed by, but needed to live with. And it’s taking some effort to remind myself “wait, I don’t have to live with this, I can fix it.” Rather than training my brain to accept a product that doesn’t do what I want, I can just tell it to work better. And it does.
And, the more I do that, the more I start to open up my mind to possibilities that were impossible before. “Huh, wouldn’t it be nice if this tool also had this other feature? Let’s try it!” I find that the more I do this, the bigger my vision gets of what I can do because the large segment of things that were fundamentally impossible before are now open to me, just by describing what I want.
It really does give me that same underlying feeling that I felt when I was first playing around with HTML and being able to “just make things.” Except, now, it’s way more powerful. Rather than copying Derek’s use of HTML frames to create “sliding doors” on a webpage, I can create basically anything I dream up.
Then, when combined with open social protocols, you can build in social features or identity to any service as well — without having to worry about getting other users. They’re already there. For example, my task management tool sends me a “morning briefing” every day that, among other things, scans through Bluesky to see if there’s anything that might need my attention.
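To give a concrete sense of how little glue such a briefing needs, here is a minimal sketch of a timeline keyword scan against the community atproto Python SDK. Everything specific in it (the handle, the app password, the keyword list, the field access) is an illustrative assumption, not the author's actual tool, and the SDK's method names may differ between versions.

```python
# Sketch: scan a Bluesky timeline for keywords, in the spirit of a
# "morning briefing." Handle, password, and keywords are placeholders.
from atproto import Client

KEYWORDS = {"techdirt", "atproto", "open web"}  # hypothetical watch list

client = Client()
client.login("you.example.com", "app-password")  # placeholder credentials

timeline = client.get_timeline(limit=50)
for item in timeline.feed:
    text = item.post.record.text.lower()
    if any(kw in text for kw in KEYWORDS):
        # Surface the author and a snippet for the briefing.
        print(f"{item.post.author.handle}: {text[:100]}")
```

The point is less the specific calls than that the protocol gives you a stable, documented surface to build against, so a script this small can do real work.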
Now, there are legitimate criticisms of “vibe coded” tools. Critics point out that AI-generated code can be buggy, insecure, hard to maintain, and that users who can’t read the code can’t verify what it’s actually doing. These are real concerns — for certain contexts.
The thing is, most of these criticisms apply to tools being built as businesses to serve customers at scale. If you’re shipping code to millions of users who are depending on it, you absolutely need security audits, proper testing, maintainable architecture. But that’s not what I’m talking about. I’m talking about building totally customized, personal tools for yourself—tools where you’re the only user, where the stakes are “my task list doesn’t sync properly” rather than “customer data got leaked.”
There’s also a more subtle concern worth addressing: is this actually democratizing, or does it just shift which skills you need? After all, you still need to accurately describe what you want, debug when things go wrong, and understand what’s even possible. That’s different from learning HTML, but it’s still a skill. I think the honest answer is that the kind of skill needed has shifted. “Learn to code” becomes “learn to think clearly and describe things precisely” — which happens to be a superpower that writers, editors, and domain experts already have. The barrier has moved to territory that many more people already inhabit.
It’s also an area where you can easily start small, learn, and grow. I started by building a few smaller apps with simpler features, but the more I do, the more I realize what’s possible.
Also, I’d note that this is actually an area where the LLM chatbots are kind of useful. Before I kick off an actual project with a coding agent, I’ve found that talking it through with an LLM first helps sharpen my thinking on what to tell the agent. I don’t outsource my mind to the chatbot, and will often reject some of its suggestions, but in having the discussion before setting the agent to work, it often clarifies tradeoffs and makes me consider how to best phrase things when I do move over to the agent.
What gets missed in most conversations about AI and the open web: these two pieces need each other. Open social protocols without AI tools stay stuck in the domain of developers and the highly technical — which is exactly why adoption has been slow. And AI tools without open protocols just replicate the old problem: you’re building cool stuff, but you’re still trapped inside someone else’s walls.
Put them together, though, and something clicks. Open protocols like ATProto give AI agents bounded, consent-driven contexts to work in — your agent can scan your Bluesky feed because the protocol allows that, not because some company decided to grant API access that it could revoke tomorrow. And AI agents give regular people the ability to actually build on those protocols without needing an engineering team. My morning briefing tool scans Bluesky not because I wrote a bunch of API calls, but because I described what I wanted and a coding agent made it happen.
Each piece makes the other more powerful and safer.
Blaine Cook — who was Twitter’s original architect back when it was still a protocol-minded company — recently wrote a piece at New_ Public that gets at this from the infrastructure side:
My long-standing hope has been that we’re able to move past the extractive, monopolizing, and competitive phase of social networks, and into a new era of creativity, collaboration, and diversity. I believe we’re poised to see a Cambrian explosion of new ways to interact online, and there’s evidence to suggest that it’s already happening: just today, I saw three new apps to share what you’re reading and watching with friends, each with their own unique take on the subject!
In this light, LLMs may be a killer app for decentralized networks — and decentralized networks may be the missing constraint that makes LLM integrations safer, more legible, and more aligned with user interests. It’s a symbiosis, and I believe we need both pieces. Rather than trying to integrate LLMs with everything, I think that deliberately bounded, consent-driven integrations will produce better outcomes.
Cook’s framing of LLMs as a “killer app for decentralized networks” is exactly right — and it runs the other way too. Decentralized networks might be the killer app for making AI tools something other than another vector for corporate lock-in, or just another clone of an existing centralized service.
Now, I can already hear the objection, and it’s a fair one: am I really suggesting we escape dependence on giant tech platforms by… becoming dependent on giant AI companies? Companies that have scraped the entire web, that burn massive amounts of energy and water, that are built on the labor of underpaid content moderators, and that seem to want to consolidate power in ways that look an awful lot like the last generation of tech giants?
Yeah, I get it. If the pitch is “use OpenAI to free yourself from Meta,” that’s just switching landlords.
But that’s not actually where this is heading. The trajectory matters more than the current snapshot.
First, if you’re using frontier models through the API or a pro subscription, you have significantly more control than most people realize. Your data generally isn’t feeding back into training. You’re using the model as a tool, not handing over your content to a platform. That’s a meaningfully different relationship than the one you have with social media companies, where you’re feeding them data, and their business model is based on monetizing that data.
But much more importantly, you don’t have to use the frontier models at all. Open source AI is maturing fast — models like Qwen, Kimi, and Mistral can run entirely on your own hardware, no cloud required. They’re behind the frontier models, but only by a bit. Six months to a year, roughly. But for a lot of the “build your own tools” use cases I’m describing, they’re already good enough.
Musician and YouTuber Rick Beato recently showed how easy it was for him to install local models on his own machine, and why he thinks the largest AI companies will eventually be undercut by home AI usage.
I’ve been doing something similar with Ollama hosting a Qwen model locally. It’s slower and less sophisticated. But it works. And I already use different models for different tasks, defaulting to local when I can. As those models improve — and they are improving quickly — the frontier labs become less necessary, not more. If you’re a professional, perhaps you’ll still need them. But if you’re just building something for yourself, it’s less and less necessary.
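For a concrete sense of what “local model, no cloud” looks like, here is a minimal sketch against Ollama’s local HTTP API, which listens on port 11434 by default. The model tag is an assumption; substitute whatever model you have actually pulled.

```python
# Minimal sketch: query a locally hosted model via Ollama's HTTP API.
# Assumes Ollama is running locally and a model has been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:7b",  # assumed tag; use whatever you've pulled
        "prompt": "Draft a one-line summary of today's tasks.",
        "stream": False,        # return one JSON object, not chunks
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Nothing in that round trip leaves your machine, which is the whole appeal: the tool-building workflow stays the same, but the dependency on a frontier lab drops away.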
This is what the “AI is just another Big Tech power grab” critics are missing: the technology is moving toward decentralization, not away from it. That’s unusual. Social media started decentralized and got captured. AI is starting captured and getting more open over time. The economic pressure from open source models is real, and it’s pushing in the right direction. But it’s important we keep things moving that way and not slow down the development of open source LLMs.
On the training data question — which is a legitimate concern whether or not you think training on copyrighted works is fair use — efforts like Common Corpus are building large-scale training sets from public domain and openly licensed materials. Anil Dash has been writing about what “good AI” looks like in practice — AI that’s transparent about its training data, that respects consent, that minimizes externalities rather than ignoring them. There are ways to do this right.
None of this is fully solved yet. But the direction is clear, and the tools to do it responsibly are improving faster than most critics acknowledge.
When you use AI as a tool (rather than letting it use you as the tool), it can give you a kind of superpower to get past the learned helplessness of relying on whatever choices some billionaire or random product manager made for you. You can get past having to mentally compensate for your tools not really working the way you think they should work. Instead, you can just have the internet and your tools work the way you want them to. It’s the most excited I’ve been about the open web since those early days of realizing I could right click, copy and then figure out how to build sliding doors out of frames.
The promise of the open web was colonized by internet giants. But the power of LLMs and agentic coding means we can start to take it back. We can build customized, personal software for ourselves that does what we want. We can connect with communities via open social protocols that allow us to control the relationship rather than a billionaire intermediary. This is what the Resonant Computing Manifesto was all about, and why I’ve argued ATproto is so key to that vision.
But the other part of realizing the manifesto is the LLM side. That made some people scoff early on, but hopefully this piece shows how these things work hand in hand. These agentic AI tools give the power back to you and me.
Thirty years ago, I right-clicked on Derek Powazek’s beautiful website, viewed the source, copied it, messed around with it, and built something new. I didn’t ask anyone’s permission. I didn’t agree to terms of service. I didn’t fit my ideas into someone else’s template. I just built the thing I wanted to build.
Then we gave that away. We traded it for convenience, for reach, for the path of least resistance — and we got walled gardens, manipulated feeds, and the quiet understanding that our tools would never quite work the way we wanted them to, because they weren’t really ours.
Today’s equivalent of right-clicking on Derek’s site is describing what you want to a coding agent, watching it build, telling it what’s wrong, and iterating until it works for you. Different mechanics, same magic. And this time, with open protocols and increasingly open models, we have a shot at keeping it.
Let’s not give it away again.
The Next Ascent [35mmc] (12:00 , Wednesday, 25 March 2026)
The season of ennui arrives quietly, coasting in months after my 41st birthday. I am halfway there. The climb toward the apex of existence has been tumultuous. I can almost make out my obituary from this peak. It’s also here that I can see the choices unmade scattered along the path behind me. A quick...
Google bumps up Q Day deadline to 2029, far sooner than previously thought [Biz & IT - Ars Technica] (11:49 , Wednesday, 25 March 2026)
Google is dramatically shortening its readiness deadline for the arrival of Q Day, the point at which existing quantum computers can break public-key cryptography algorithms that secure decades' worth of secrets belonging to militaries, banks, governments, and nearly every individual on earth.
In a post published on Wednesday, Google said it is giving itself until 2029 to prepare for this event. The post went on to warn that the rest of the world needs to follow suit by adopting PQC—short for post-quantum cryptography—algorithms to augment or replace elliptic curves and RSA, both of which will be broken.
“As a pioneer in both quantum and PQC, it’s our responsibility to lead by example and share an ambitious timeline,” wrote Heather Adkins, Google’s VP of security engineering, and Sophie Schmieg, a senior cryptography engineer. “By doing this, we hope to provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry.”
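For readers wondering what “adopting PQC” looks like at the code level, the core change is swapping key-establishment primitives. Below is a hedged sketch of ML-KEM (the standardized Kyber variant) key encapsulation using the open-source liboqs-python bindings; it illustrates the algorithm family in general, not Google’s own stack, and the exact algorithm string varies across liboqs versions.

```python
# Sketch of post-quantum key encapsulation with ML-KEM via liboqs-python.
# This is the primitive PQC migrations swap in for RSA/ECDH key exchange;
# the algorithm name is an assumption tied to the installed liboqs version.
import oqs

ALG = "ML-KEM-768"  # older liboqs builds may call this "Kyber768"

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()   # receiver publishes this
    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender encapsulates a fresh shared secret to the public key.
        ciphertext, secret_sender = sender.encap_secret(public_key)
    # Receiver recovers the same secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)
    assert secret_sender == secret_receiver
```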
TPU Tubes: How fast are they? [Rene Herse Cycles] (11:02 , Wednesday, 25 March 2026)
TPU tubes have been widely hailed as the ‘next big thing’ in the mainstream media. TPU tubes are lighter, faster and more durable than butyl tubes, while holding air far better than latex tubes. That brings up the question: How fast are they on real roads? The following article was originally published in Bicycle Quarterly 86. It has been lightly edited to match the format of the Rene Herse Journal.
Tests on steel drums show TPU tubes rolling faster than butyl tubes. The big question is: Does this translate into the real world?
There is reason to be skeptical: Stiff tires roll faster on steel drums, but slower on real roads. Lab tests without a rider do not measure suspension losses—energy dissipated inside the rider’s body—and thus favor stiff tires. Many tire makers have been moving to harder tread rubber, because it tests better on steel drums. In the real world, the harder rubber actually rolls slower. Would the same apply to TPU tubes? Would they only provide a benefit in the lab, but roll slower on real roads?
As we considered developing our own TPU tubes for Rene Herse Cycles, this question became urgent. We did not want to develop a tube that, while lighter and more durable, made our bikes slower on real roads. We had to test the performance of TPU tubes—and we had to do it quickly!

The Perfect Test Location
Japan’s Kanto Plain is dotted with many small depressions, so there are many roads that are perfect for roll-down tests of tire and tube performance. We were looking for a road that starts with a relatively steep slope, then flattens out to at least 200 meters (656 ft) at a constant 2%. The steep start allows the bike to pick up speed quickly, so there’s little time spent at ultra-low speeds, where wobbles can affect the rate of acceleration. The 2% grade results in a constant speed of about 6 m/s (21.6 km/h; 13.4 mph)—fast enough to be representative of real-world riding, but slow enough that aerodynamic resistance does not drown out other factors. (The actual testing was performed on a paved road with a similar profile to the one shown above.)
Japan in winter often sees days of absolute calm, with no measurable wind at all. This adds greatly to the precision of our measurements. (You can measure wind speed and direction, and then correct for the influence of wind, but this adds a layer of complexity and introduces potential errors.)
For this test, we used the same bike (Firefly titanium), the same wheels and the same set of 26” x 2.3” Rat Trap Pass Extralight tires. We switched between extralight butyl tubes (Schwalbe SV 14A; 95 g) and prototypes of the Rene Herse TPU tubes (48 g). To do this, we removed the tires and exchanged the tubes between tests. All tubes were inflated to 1.8 bar (26 psi), the ‘soft’ value of the Rene Herse Tire Pressure Calculator.

We did three runs with the TPU tubes, then changed to butyl tubes for six runs, before doing another six runs with TPU tubes.
We did not record the first run each time after changing the tubes. Tubes tend to move slightly immediately after installation, before they settle into their final position, and these first runs tend to be very slightly slower than later runs. This applies consistently to TPU and butyl tubes.
Running TPU tubes at the beginning and the end of the test series ensures that conditions haven’t changed over the period of the test. The testing was performed over a period of one hour on a day with zero wind and constant temperature. (Speedy tire mounting is essential for these tests!)
The results for each tube were extremely consistent, showing that other factors (wind, rider position, temperature, etc.) did not change during the testing.1 All runs with the TPU tubes fell within a narrow range, between 31.9 and 32.6 seconds (Fig. 1). The runs with butyl tubes took between 33.0 and 33.6 seconds. There was no overlap between the results for TPU and butyl tubes. On average, the TPU tubes were 1.3 seconds, or 3.6%, faster. These differences are statistically significant.
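For a sense of what “statistically significant” means with samples this small, a two-sample t-test on the run times does the job. The values below are illustrative numbers placed inside the ranges reported above, not the actual raw data:

```python
# Welch's t-test on roll-down times. These values are ILLUSTRATIVE,
# chosen only to fall inside the ranges the article reports
# (TPU: 31.9-32.6 s; butyl: 33.0-33.6 s); they are not the raw data.
from scipy import stats

tpu_runs   = [31.9, 32.0, 32.1, 32.3, 32.4, 32.5, 32.6]
butyl_runs = [33.0, 33.1, 33.3, 33.5, 33.6]

t_stat, p_value = stats.ttest_ind(tpu_runs, butyl_runs, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p far below 0.05
```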
Translated to Watts
To determine the power savings of the TPU tubes, we calculated the Crr (Coefficient of Rolling Resistance) for the two sets of runs (TPU and butyl tubes), using input parameters determined in wind tunnel testing and verified in real-road tests with power meters.2
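As a rough reconstruction of that calculation (a sketch under the note-2 assumptions, not Rene Herse’s exact model): at the constant terminal speed on the 2% grade, the gravity component balances rolling resistance plus aerodynamic drag, which lets you solve for Crr directly.

```python
# Back-of-the-envelope Crr from a roll-down terminal speed. At constant
# speed on a grade: m*g*sin(theta) = Crr*m*g*cos(theta) + 0.5*rho*Cd*A*v^2
# Inputs follow note 2 (Cd = 0.9, frontal area 0.5 m^2, mass 80 kg) and
# the ~6 m/s terminal speed on the 2% grade described above.
import math

g, rho = 9.81, 1.2          # gravity (m/s^2), air density (kg/m^3)
m, cd_a = 80.0, 0.9 * 0.5   # bike + rider mass (kg), drag area Cd*A (m^2)
theta = math.atan(0.02)     # 2% grade
v = 6.0                     # terminal speed (m/s)

crr = math.tan(theta) - (0.5 * rho * cd_a * v**2) / (m * g * math.cos(theta))
print(f"Crr ~ {crr:.4f}")   # about 0.008 with these illustrative inputs
```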
We did the same for our previous tests that compared butyl tubes to ultra-thin latex tubes3—the same type used in high-end tubular tires. Weighing just 54 g each, these tubes are much thinner and lighter than standard latex tubes that weigh 132 g (Vittoria) in the same size. The ultra-thin latex tubes represent a ‘best-case’ scenario for latex tubes. Commonly used latex tubes are thicker, heavier and likely slower than the latex tubes we tested.
We found that the performance of TPU tubes and ultra-light latex tubes is indistinguishable. (The small difference in our results is not statistically significant.)
In the past, we also tested tubeless setups compared to standard-thickness butyl tubes.4 We used the same methodology as in the test of TPU vs. butyl tubes: We first ran the same tires with butyl tubes, then removed the tubes and installed the tires tubeless, using a minimum of sealant, making this a best-case scenario for tubeless tires. Then we ran tubes again. Our back-to-back testing found that tubeless tires roll at the same speed as butyl tubes. Apparently, the liquid sealant inside the tires causes as much resistance as the added membrane of the butyl tube.
We then calculated the power requirements for each setup and compared that with the differences between different tires from Rene Herse and other manufacturers.

Both the TPU and the latex tubes are significantly faster than butyl tubes and tubeless setups. (Grouped bars show tire/tube setups where the results are not statistically significant.)
Compared to the ultralight butyl tubes, the TPU tubes deliver a measurable power saving, similar in magnitude to switching from Rene Herse Standard to Extralight casings.
As a percentage of the total power output, the savings are greater at lower speeds: TPU tubes make a 20-km/h rider 5% faster, whereas a 40-km/h rider saves only 2%. For casual riders, saving 5% allows them to ride further and keep up with friends with less strain. For racers, who compete against others with similar power-to-weight ratios, a saving of 2% is highly significant and can make the difference between winning a race and not even finishing on the podium.
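The arithmetic behind that speed dependence is simple: rolling-resistance power grows linearly with speed, while aerodynamic power grows with its cube, so a fixed Crr improvement is a larger slice of the total budget at lower speeds. A sketch with the note-2 parameters and a hypothetical Crr saving:

```python
# Why a fixed Crr saving matters more at low speed. The baseline Crr and
# the Crr saving below are HYPOTHETICAL, chosen only to show the shape.
g, rho, m, cd_a = 9.81, 1.2, 80.0, 0.9 * 0.5
crr_base, delta_crr = 0.006, 0.001   # hypothetical baseline and saving

for kmh in (20, 40):
    v = kmh / 3.6
    # Total power: rolling (linear in v) plus aero drag (cubic in v).
    p_total = (crr_base * m * g + 0.5 * rho * cd_a * v**2) * v
    p_saved = delta_crr * m * g * v
    print(f"{kmh} km/h: {p_saved:.1f} W of {p_total:.0f} W "
          f"({100 * p_saved / p_total:.1f}%)")
```

With these illustrative inputs the saving is roughly 6% of total power at 20 km/h but only about 2% at 40 km/h, matching the shape of the article’s figures.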
Conclusion
The performance benefits of TPU tubes are real and borne out in real-road tests. If anything, TPU tubes perform even better in the real world, with a rider on the bike, than in lab tests on steel drums—a phenomenon also observed with supple tires. TPU tubes are as fast as ultra-thin latex tubes. TPU tubes are significantly faster than butyl tubes or tubeless setups. (They are also stronger, lighter and offer better ride feel than butyl tubes.) Based on these results, we went ahead with the development of our Rene Herse TPU tubes, confident that they would provide a real-world benefit for us and our customers.
Notes:
1 The consistent runs for each tube setup show that extraneous variables, like changes in rider position (and aerodynamics), tiny air currents and/or changes in temperature did not significantly affect the results. The statistical analysis shows a very high probability that the observed differences between butyl and TPU tubes are due to actual performance differences and not random ‘noise’ in our data. See also: How We Tested Tires. Bicycle Quarterly 78 (Autumn 2020), p. 70.
2 Frontal area = 0.5 m²; Cd = 0.9; Weight (bicycle + rider) = 80 kg
3 Are latex tubes faster? Bicycle Quarterly 74 (Winter 2020), p. 102.
4 Testing tires: How fast do they roll? Bicycle Quarterly 73 (Autumn 2020), p. 70.
TPU Tubes: How fast are they? [Rene Herse Cycles] (11:02 , Wednesday, 25 March 2026)
TPU tubes have been widely hailed as the ‘next big thing’ in the mainstream media. TPU tubes are lighter, faster and more durable than butyl tubes, while holding air far better than latex tubes. That brings up the question: How fast are they on real roads? The following article was originally published in Bicycle Quarterly 86. It has been lightly edited to match the format of the Rene Herse Journal.
Tests on steel drums show TPU tubes rolling faster than butyl tubes. The big questions is: Does this translate into the real world?
There is reason to be skeptical: Stiff tires roll faster on steel drums, but slower on real roads. Lab tests without a rider do not measure suspension losses—energy dissipated inside the rider’s body—and thus favor stiff tires. Many tire makers have been moving to harder tread rubber, because it tests better on steel drums. In the real world, the harder rubber actually rolls slower. Would the same apply to TPU tubes? Would they only provide a benefit in the lab, but roll slower on real roads?
As we considered developing our own TPU tubes for Rene Herse Cycles, this question became urgent and pressing. We did not want to develop a tube that, while lighter and more durable, made our bikes slower on real roads. We had to test the performance of TPU tubes—and we had to do it quickly!

The Perfect Test Location
Japan’s Kanto Plain is made of many small depressions, so there are many roads that are perfect for roll-down tests of tire and tube performance. We were looking for a road that starts with a relatively steep slope, then flattens out to at least 200 meters (656 ft) at a constant 2%. The steep start allows the bike to pick up speed quickly, so there’s little time spent at ultra-low speeds, where wobbles can affect the rate of acceleration. The 2% grade results in a constant speed of about 6 m/s (21.6 km/h; 13.4 mph)—fast enough to be representative of real-world riding, but slow enough that aerodynamic resistance does not drown out other factors. (The actual testing was performed on a paved road with a similar profile to the one shown above.)
Japan in winter often sees days of absolute calm, with no measurable wind at all. This adds greatly to the precision of our measurements. (You can measure wind speed and direction, and then correct for the influence of wind, but this adds a layer of complexity and introduces potential errors.)
For this test, we used the same bike (Firefly titanium), the same wheels and the same set of 26” x 2.3” Rat Trap Pass Extralight tires. We switched between extralight butyl tubes (Schwalbe SV 14A; 95 g) and prototypes of the Rene Herse TPU tubes (48 g). To do this, we removed the tires and exchanged the tubes between tests. All tubes were inflated to 1.8 bar (26 psi), the ‘soft’ value of the Rene Herse Tire Pressure Calculator.

We did three runs with the TPU tubes, then changed to butyl tubes for six runs, before doing another six runs with TPU tubes.
We did not record the first run each time after changing the tubes. Tubes tend to move slightly immediately after installation, before they settle into their final position, and these first runs tend to be very slightly slower than later runs. This applies consistently to TPU and butyl tubes.
Running TPU tubes at the beginning and the end of the test series ensures that conditions haven’t changed over the period of the test. The testing was performed over a period of one hour on a day with zero wind and constant temperature. (Speedy tire mounting is essential for these tests!)
The results for each tube were extremely consistent, showing that other factors, like wind, rider position, temperature, etc. did not change during the testing.1 All runs with the TPU tubes fell within a narrow range, between 31.9 and 32.6 seconds (Fig. 1). The runs with butyl tubes took between 33.0 and 33.6 seconds. There was no overlap between the results for TPU and for butyl tubes. On average, the TPU tubes were 1.3 seconds or 3.6% faster. These differences are statistically significant.
Translated to Watts
To determine the power savings of the TPU tubes, we calculated the Crr (Coefficient of Rolling Resistance) for the two sets of runs (TPU and butyl tubes), using input parameters determined in wind tunnel testing and verified in real-road tests with power meters.2
We did the same for our previous tests that compared butyl tubes to ultra-thin latex tubes3—the same type used in high-end tubular tires. Weighing just 54 g each, these tubes are much thinner and lighter than standard latex tubes that weigh 132 g (Vittoria) in the same size. The ultra-thin latex tubes represent a ‘best-case’ scenario for latex tubes. Commonly used latex tubes are thicker, heavier and likely slower than the latex tubes we tested.
We found is that the performance of TPU tubes and ultra-light latex tubes is indistinguishable. (The small difference in our results is not statistically significant.)
In the past, we also tested tubeless setups compared to standard-thickness butyl tubes.4 We used the same methodology as in the test of TPU vs. butyl tubes: We first ran the same tires with butyl tubes, then removed the tubes and installed the tires tubeless, using a minimum of sealant, making this a best-case scenario for tubeless tires. Then we ran tubes again. Our back-to-back testing found that tubeless tires roll at the same speed as butyl tubes. Apparently, the liquid sealant inside the tires causes as much resistance as the added membrane of the butyl tube.
We then calculated the power requirements for each setup and compared that with the differences between different tires from Rene Herse and other manufacturers.

Both the TPU and the latex tubes are significantly faster than butyl tubes and tubeless setups. (Grouped bars show tire/tube setups where the results are not statistically significant.)
Compared to the ultralight butyl tubes, the TPU tubes save
This is similar to the savings of switching from Rene Herse Standard to Extralight casings.
As a percentage of the total power output, the savings are greater at lower speeds: TPU tubes make a 20-km/h-rider 5% faster, whereas a 40-km/h-rider saves only 2%. For casual riders, saving 5% allows them to ride further and keep up with friends without less strain. For racers, who compete with others with similar power-to-weight ratios, a saving of 2% is highly significant and can make the difference between winning a race or not even finishing on the podium.
Conclusion
The performance benefits of TPU tubes are real and borne out in real-road tests. If anything, TPU tubes perform even better in the real world, with a rider on the bike, than in lab tests on steel drums—a phenomenon also observed with supple tires. TPU tubes are as fast as ultra-thin latex tubes. TPU tubes are significantly faster than butyl tubes or tubeless setups. (They are also stronger, lighter and offer better ride feel than butyl tubes.) Based on these results, we went ahead with the development of our Rene Herse TPU tubes, confident that they would provide a real-world benefit for us and our customers.
Notes:
1 The consistent runs for each tube setup show that extraneous variables, like changes in rider position (and aerodynamics), tiny air currents and/or changes in temperature did not significantly affect the results. The statistical analysis shows a very high probability that the observed differences between butyl and TPU tubes are due to actual performance differences and not random ‘noise’ in our data. See also: How We Tested Tires. Bicycle Quarterly 78 (Autumn 2020), p. 70.
2 Frontal area = 0.5 m²; Cd = 0.9; Weight (bicycle + rider) = 80 kg
3 Are latex tubes faster? Bicycle Quarterly 74 (Winter 2020), p. 102.
4 Testing tires: How fast do they roll? Bicycle Quarterly 73 (Autumn 2020), p. 70.
Acros Makes a $29 Headset Press [BIKEPACKING.com] (10:28 , Wednesday, 25 March 2026)
Designed with the occasional home mechanic in mind, the Acros DIY Headset Tool is a capable, compact, and inexpensive alternative to headset presses that can cost well over $100. Learn more about the economical gadget from the German brand here...
The post Acros Makes a $29 Headset Press appeared first on BIKEPACKING.com.
The BTCHN’ Bikes MTB Bar Lineup Has a New Look [BIKEPACKING.com] (09:33 , Wednesday, 25 March 2026)
Handmade in California, the redesigned Chromoly steel mountain bike handlebar lineup from BTCHN' Bikes features refined specs and a new bent cross-bar. Take a closer look at the entire lineup here...
The post The BTCHN’ Bikes MTB Bar Lineup Has a New Look appeared first on BIKEPACKING.com.
Toronto’s Third Annual Vintage Mountain Bike Show on Film [BIKEPACKING.com] (09:17 , Wednesday, 25 March 2026)
Toronto's third annual Vintage Mountain Bike Show, hosted by BoneshakerMTB and Gremlins Bicycle Emporium, took place earlier this month and celebrated the bikes and culture of the sport we love. Dismount Bike Shop passed out point-and-shoot film cameras to attendees for a unique look at this year's event. Find those, a gallery of vintage mountain bikes on display, and some words by Jake London and Carson Lessif here...
The post Toronto’s Third Annual Vintage Mountain Bike Show on Film appeared first on BIKEPACKING.com.
The Tenkara Rod Co. Anytimer Rod Lets You Fish Anytime [BIKEPACKING.com] (09:02 , Wednesday, 25 March 2026)
Now on Kickstarter, the packable Anytimer Rod from Tenkara Rod Co. aims to give those with overlapping outdoor interests the flexibility to cast a line just about anywhere. Get to know all the details of this highly portable fly fishing rod below...
The post The Tenkara Rod Co. Anytimer Rod Lets You Fish Anytime appeared first on BIKEPACKING.com.
The New 800mm Wilde Bullwinkle Bar is Made for Modern ATBs [BIKEPACKING.com] (08:45 , Wednesday, 25 March 2026)
Wilde Bikes revived the iconic bullmoose bar with a twist, blending classic design with updated geometry for modern ATB and dirt-touring builds. The new Wilde Bullwinkle pairs an 800mm-wide platform with thoughtful geometry and tubing details for long days of off-road riding and loaded bikepacking. See what sets the Wilde Bullwinkle apart here…
The post The New 800mm Wilde Bullwinkle Bar is Made for Modern ATBs appeared first on BIKEPACKING.com.
The story of OpenBSD on Motorola 88000 series processors [OpenBSD Journal] (08:24 , Wednesday, 25 March 2026)
Regular readers will be aware that Miod Vallat (miod@) is documenting the adventures of porting OpenBSD to various architectures in his OpenBSD Stories collection.
The latest addition is OpenBSD on Motorola 88000 processors, where the first two of a planned total of nine chapters have been published.
The first chapter, The Forsaken RISC Architecture, takes us through some background and pre-history of the architecture.
The second chapter, A New Hope, gives insight into the early porting efforts.
We very much look forward to seeing the further chapters of the OpenBSD on Motorola 88000 processors saga.
Brendan Carr Tries To ‘Ban’ All Foreign Routers In Lazy, Legally Dubious Shakedown [Techdirt] (08:22 , Wednesday, 25 March 2026)
Taking a break from attacking the First Amendment, FCC boss Brendan Carr this week engaged in a strange bit of performance art: his FCC announced that it would effectively add all foreign-made routers to the agency’s “covered list,” in a bid to ban their sale in the United States.
That is unless manufacturers obtain “conditional approval” (including all appropriate application fees and favors, of course) from the Trump administration via the Department of Defense or Department of Homeland Security. In other words, the Trump administration is attempting to shake down manufacturers of all routers manufactured outside the United States (which again, is nearly all of them) under the pretense of cybersecurity.
You can probably see how this might result in some looming legal action. And who knows what other “favors” to the Trump administration might be required to get conditional approval, like the inclusion of backdoors accessible by our current authoritarian government.
A fact sheet insists this was all necessary because many foreign routers have been exploited by foreign actors:
“Recently, malicious state and non-state sponsored cyber attackers have increasingly leveraged the vulnerabilities in small and home office routers produced abroad to carry out direct attacks against American civilians in their homes.”
But the biggest cybersecurity incident in recent U.S. memory, the Chinese Salt Typhoon hack (which involved Chinese state-sanctioned hackers massively compromising U.S. telecom networks to spy on important people for years), largely involved the broadly deregulated U.S. telecom sector failing to do basic things like change default admin passwords. And then trying to hide evidence of intrusion for liability reasons. A very domestic failure.
We’ve discussed at length that while Brendan Carr loves to pretend he’s doing important things on cybersecurity, most of his policies have made the U.S. less secure. Like his mindless deregulation of the privacy and security standards of domestic telecoms and hardware makers. Or his destruction of smart home testing programs just because they had some operations in China.
Most of the Trump administration’s “cybersecurity” solutions have been indistinguishable from a foreign attack. They’ve gutted numerous government cybersecurity programs and dismantled the Cyber Safety Review Board (CSRB), the body responsible for investigating significant cybersecurity incidents, while it was in the middle of investigating Salt Typhoon. The administration claims to be worried about cybersecurity, but then goes out of its way to ensure domestic telecoms see no meaningful oversight whatsoever.
I’d argue the Trump administration’s destruction of corporate oversight of domestic telecom privacy/security standards is a much bigger threat to national security and consumer safety than 90% of foreign routers, but good luck finding any news outlet that brings that up in their coverage of the FCC’s latest move.
In reality, the biggest current threat to U.S. national security is the Trump administration’s rampant, historic corruption. Absolutely any time you see the Trump administration taking steps to “improve national security,” or “address cybersecurity” you can just easily assume there’s some ulterior motive of personal benefit to the president, as we saw when the great hyperventilation over TikTok was “fixed” by offloading the app to Trump’s dodgy billionaire friends.
Michigan Off-Road Expedition (M.O.R.E.) [BIKEPACKING.com] (07:41 , Wednesday, 25 March 2026)
Photos by Matt Acker, Neil Beltchenko, and Garrett Hein. M.O.R.E. stands for Michigan Off Road Expedition, a 1,050-mile bikepacking route across the two great peninsulas in the state. Despite lacking […]
The post Michigan Off-Road Expedition (M.O.R.E.) appeared first on BIKEPACKING.com.
Nikkor AF 85mm 1.8 D – A Rediscovery [35mmc] (06:00 , Wednesday, 25 March 2026)
If you’ve read any of my posts on 35mmc, you’d think I was strictly a B&W film shooter. There are times, however, when I fall off the film wagon and shoot digital for two reasons: the necessity for projects that require lots of low-light shooting with quick turnarounds, and events that require distribution of...
The post Nikkor AF 85mm 1.8 D – A Rediscovery appeared first on 35mmc.
7 things to watch as Roanoke begins discussions of its proposed budget [Cardinal News] (04:45 , Wednesday, 25 March 2026)
Roanoke City Manager Valmarie Turner and city finance staff this week proposed a budget to the city council that included major cuts to staff, programming and maintenance.
Turner on Monday proposed a balanced budget of $421.5 million, which is a little over a 3% increase from the year before.
Fiscal year 2027 is expected to be the first year in recent history where expenditures outpace revenues, according to Turner’s presentation.
The proposed budget is a preliminary spending plan. The council did not take a vote on Monday, and the budget could change before final approval in May, as the General Assembly has not settled on its final budget yet. The state’s spending plan could affect local budgets depending on how it allocates money for programs.
A city spokesperson said Tuesday evening that city officials were not yet able to answer a list of questions about the budget proposal emailed by Cardinal News on Monday evening.
The following are a few of the many things to keep an eye on this budget season, as Turner and the finance staff balance an $18.9 million gap.
While the city’s revenue is still growing year over year, the rate at which it’s growing is expected to significantly slow down this year.
Roanoke showed strong revenue growth post-pandemic, with an increase of $62 million across all tax categories between fiscal year 2021 and fiscal year 2025, said Trinity Kaseke, the city’s budget manager.
The public will have two opportunities to speak with city staff and ask questions about the budget. Both meetings will be open-house style.
In the upcoming fiscal year, the city expects its smallest revenue increase in the last six years, at $6.9 million.
Roanoke has seen its real estate valuations rise over the last four years, with real estate tax assessed growth averaging 8.85% between 2022 and 2025.
This year, the assessed value percentage change was 6.55%.
The current tax rate is $1.22 per $100 of valuation, which has not changed since 2015.
Councilman Peter Volosin asked finance staff during Monday’s meeting why the city is still having to make cuts after seeing such high tax revenues in recent years. In fiscal year 2026, the city collected over $130 million in real estate taxes. In fiscal year 2027, the city expects that to increase by a little less than $10 million.
Turner said this is because each budget season, she starts with the base budget from the year before. Anything new needs to come out of increased revenue, she said.
The city will issue about three years’ worth of debt — $79 million — in one year, Tanya Catron, the city’s capital improvement finance manager, said during Monday’s meeting.
In a statement emailed to Cardinal News, the city said the debt is higher than usual because $57.5 million in short-term financing needs to be converted into long-term financing. This debt was secured in 2024 to “fast-track” capital projects, the city said.
Catron said the debt was supposed to be issued previously, but due to staff turnover, that never happened. City officials did not answer emailed questions about the staff turnover or the debt.
“The projects needed to start, right, they had been approved. So fast forward, we have to basically have a larger-than-normal bond issuance to cover all those projects,” she said.
Catron said the city should have set aside more money for future debt service in anticipation of having to permanently finance its bond anticipation notes. That didn’t happen, Turner said at the meeting.
The city council must approve the debt issuances through a public hearing, which the city expects to happen over the summer.
The city set aside $1.2 million in the budget for parks maintenance in the upcoming fiscal year. But according to the recommended capital improvement program, after the upcoming fiscal year, no money will be allocated to park maintenance through FY2031.
“It feels to me like we disproportionately cut parks and recreation, and we’ve done that again and again and again over the years, we chronically underfund it,” Councilman Terry McGuire said during the meeting. “It’s really problematic that so many of our parks and rec projects are being zeroed out.”
The council also learned Monday that the grass mowing schedule might be cut in half, and that multiple parks and recreation projects, including renovations to the Fishburn Mansion, were removed from the capital improvement plan. Councilwoman Evelyn Powers suggested that residents of the city’s adult detention center could cut the grass.
“I’m going to have a really hard time supporting some of this,” McGuire said of the cuts.
The city will cut 29 positions altogether and freeze another 80 to 95 positions. It’s unclear how many of those positions are currently vacant, as the city did not answer that emailed question.
Turner said because the city already has a high level of vacancies, layoffs aren’t being considered yet. “But we’re going to have to manage this budget … every single month, which may require us to shift every single month.”
The city plans to cut more than $600,000 from fleet management, money that would have covered rising fleet management and fuel costs.
Volosin asked what happens if fuel prices continue to rise as they have in recent weeks.
Turner said in that case, and in other cases of unexpected rises in expenditures, she would have to look at additional staffing cuts.
The proposed budget removes more than $50 million in projects from the city’s capital improvement plan.
These projects include upgrading Fire Station #2; expanding the Belmont Library; renovating the Fallon Park Pool, the downtown pedestrian bridge and the Martin Luther King Jr. pedestrian bridge; rehabilitating the Mill Mountain Star; and other projects.
The city also removed more than $5 million in items from its base budget, or the budget from the previous year that finance staff begins with each budget season.
These include cutting almost $300,000 from the Greater Roanoke Transit Company subsidy, cutting library program activities such as the summer reading program, and closing the Grandin location for the Youth Development team, which provides afterschool programming at three of the city’s recreation centers.
Public safety will not go untouched in this budget, even as Turner has said that it’s a top priority for the city.
Police, fire-EMS and the sheriff’s office would be among those agencies affected by the 29 proposed job cuts. Public safety departments also are expected to lose funding for overtime and temporary wages.
The sheriff’s office will lose more than $300,000 that would have increased its Virginia Retirement System multiplier, which determines an employee’s retirement benefits, to match surrounding localities.
Funding for the public school division represents 26% of the city’s total operating budget, or about $108 million in the coming fiscal year, according to Monday’s presentation.
Over the past few years, changes to how the city funds its schools have led to the school division approving a preliminary budget that includes about $16 million in cuts to staff, programming, transportation and maintenance.
In January, the city council voted to change its school funding policy. While the division is still likely to receive more money this upcoming year than it did the last year, it anticipates getting less than what was expected with the earlier formula that had been in place since 2011.
The city also changed the way the school division handles its fund balance. In the past, the division held onto that money, with the understanding that it could use it as a “rainy day fund.”
At the end of fiscal year 2024, the school division had a general fund balance, or surplus funding, of almost $23 million.
The city council approved for RCPS to use a little over $10 million of that surplus to balance the current fiscal year budget.
Tensions are high between the school board and the city council, and Mayor Joe Cobb addressed these tensions during a joint school board and city council meeting on March 16.
“I don’t always get it right, I get frustrated, we all get frustrated, and we sometimes say things that we wish we hadn’t said. So for that I apologize,” Cobb said. “But I also want to thank you all for our willingness to be here together, to work through this, and to map out some steps we can take toward reconciliation.”
The post 7 things to watch as Roanoke begins discussions of its proposed budget appeared first on Cardinal News.
U.S. Supreme Court may nix counting mail ballots after Election Day. The Virginia math says it doesn’t matter. [Cardinal News] (04:15 , Wednesday, 25 March 2026)
I hate to burst anyone’s bubble — well, actually, I don’t mind, because some bubbles do need bursting.
Some on the right are celebrating, and some on the left are groaning, after the U.S. Supreme Court this week seemed to take a dim view of states — such as Virginia — that allow mail ballots to be counted even if they arrive after Election Day.
This may be a great principle to those on both sides, just in different ways, but here’s some inconvenient math: There aren’t really enough of these ballots to matter.
In theory, they could in a very close election, of course, and those on each side can hold to their principles that either a) counting these post-election ballots feels very sketchy (the conservative view) or b) disallowing them feels like disenfranchising some voters (the liberal view).
Then there’s this numerical view:
In last year’s governor’s race, Virginia saw 3,433,340 votes cast, more than ever before.
Of those, 29,794 were mail ballots that were mailed before the deadline on Election Day but didn’t arrive until after the election. That’s 0.8% of the total.
Those who find this practice offensive would prefer the number to be 0.00%, but in practical terms, that 0.8% did not come anywhere close to making a difference. That small number ought to be reassuring to both sides: For conservatives who don’t like post-election ballots, they don’t matter in the big scheme of things; for liberals who worry that the Supreme Court is about to outlaw this practice, well, these votes don’t seem to matter in the big scheme of things. This legal fight is more about principle than practice, but the impact either way seems pretty negligible.
The closest House of Delegates race in Virginia last fall was in Harrisonburg and Rockingham County, where Republican incumbent Tony Wilt held off a stiff challenge by Democrat Andrew Payton in House District 34.
That race saw 28,927 votes cast. Of those, 90 were post-election ballots — 0.3%, lower than the statewide percentage.
Here’s where some politics do come in: Those who cast mail ballots are overwhelmingly Democratic, so these late-arriving ballots (which aren’t really late-arriving, they’re still on time) do skew quite blue. In that Wilt-Payton race, those 90 post-election ballots broke 69 for Payton, 21 for Wilt. That was a pickup for Payton of 48 votes — but it didn’t make a difference even in the closest race in the state. Wilt still won by 257 votes.
I can’t rule out that there’s some local race in Virginia where post-election mail ballots have made the difference, but they haven’t in any statewide or legislative election. If someone knows of a local election where they have, please let me know.
Allowing mail ballots to be counted as long as they’re postmarked in time, even if they arrive after Election Day, isn’t necessarily a liberal idea: The state at the center of the Supreme Court argument is Mississippi, which probably hasn’t done anything liberal in quite some time. Texas is another.
Virginia law allows for properly postmarked mail ballots if they arrive by noon on the Friday after Election Day, although a bill pending before Gov. Abigail Spanberger would extend that to 5 p.m. California allows a week. Alaska and Maryland allow up to 10 days. (Plus Guam and the Virgin Islands. How slow is the mail on Guam and the Virgin Islands?) Illinois allows up to 14 days. (You can find a full list here.)
The philosophical reason for allowing these ballots is the same as your tax return: The law doesn’t require the Internal Revenue Service to have your tax return in hand by April 15, just that you have to mail it by then. Why should we treat elections differently?
That’s balanced against history and some of the darker impulses of human nature to cheat: the 1948 Democratic Senate primary in Texas, where at first Lyndon Johnson appeared to have lost narrowly but then won when 202 votes were mysteriously “discovered” in one county. Those votes were in a box, not a mailbox, but the fear remains the same. We’re accustomed to election nights providing finality to an election, and so having a question mark remain for several days makes some people queasy — and suspicious.
This legal battle of counting deadlines on mail ballots is really just a byproduct of the expansion of voting by mail. In Virginia, we’ve now had mail voting for five years, starting in 2021. Over those five years, the percentage of Virginians who vote by mail has been remarkably consistent — from a low of 9.9% in the 2022 midterms to a high of 11.1% in both the 2023 legislative and local elections and the 2024 presidential and congressional elections.
When 1 in 10 voters prefer to mail in their ballots, that’s not an insignificant number. Now, remember what I said about how Democrats prefer mail voting? Here are the numbers.
Last fall, 14.2% of the voters in bright blue Fairfax County voted by mail. In Alexandria, 13.5% did. In Arlington County, 12.9%.
However, in the strongest Republican localities in the state — in Southwest Virginia — mail voting barely registered. In Scott County, 3.9% of the ballots were cast by mail. In Lee County, just 4.1%. In Buchanan and Russell counties, 4.5%.
Perhaps some Republicans think that if they could restrict or even abolish mail voting (as President Donald Trump would like to do, even though he votes by mail), they could depress the Democratic vote somewhat. Maybe so, but that’s actually not in the party’s best interests. The party that needs mail voting most is the Republican Party.
The counties with the lowest turnout are traditionally Republican counties — and not just any Republican counties, but the strongest ones in the state. These are also the counties with the lowest percentage of voters voting by mail. If Republicans want to be competitive in Virginia, they may need better candidates and better campaign messages — but they definitely need better turnout from their strongest counties. One easy way to do that (well, easy in theory) is to persuade more people to vote by mail.
Let’s look at some numbers. (Yes, I love numbers.)
Last year in Scott County, 83% of the voters cast ballots for the Republican candidate for governor, Winsome Earle-Sears, and that was actually on the low side. In the lieutenant governor’s race, the Republican percentage rose to 84.14% and hit 85.77% in the attorney general’s race.
However, only 45.8% of the county’s voters bothered to take part in the election. That was below the state average of 54.9%, which was pulled up by some Democratic-voting places such as Albemarle County, where 66.2% voted.
Scott County, as we’ve seen, also had the state’s lowest figures on mail-in ballots.
Scott County didn’t even have the lowest overall turnout in the state. That was in Buchanan County, where voter turnout was just 37.8% in a county that voted 81.91% to 83.98% for candidates on the Republican ticket. If a party has a locality where they know they can run up a score of 80% or more, they need to maximize their vote there, and Republicans aren’t. Here’s one way to look at how inefficient Republicans are with their base in Southwest Virginia. Bright red Buchanan County is slightly bigger than bright blue Falls Church. In Falls Church last fall, though, the voter turnout was 64.5% to Buchanan’s 37.8%. In practical terms, 7,600 people cast ballots in Falls Church while only 5,413 did in Buchanan. Put another way, Falls Church delivered 6,407 votes for Abigail Spanberger while Buchanan contributed 4,434 to Winsome Earle-Sears’ tally. If people in Buchanan County voted at the same rate as those in Falls Church, Buchanan could have delivered about 2,000 more votes to the Republican cause. That wouldn’t have been enough to change the outcome, but the point is that Republicans are leaving a lot of votes uncast in Southwest Virginia.
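The “about 2,000 more votes” figure can be checked with back-of-envelope arithmetic using only the numbers quoted in the column. A rough sketch follows; the registered-voter count is inferred from ballots cast divided by turnout, so the output is an approximation, not official data.

```python
# Rough check of the Buchanan vs. Falls Church turnout comparison, using
# only figures quoted in the column. Registered voters are inferred from
# ballots cast / turnout, so treat the results as estimates.
bu_ballots, bu_turnout = 5_413, 0.378
fc_turnout = 0.645
gop_share, dem_share = 0.8191, 0.179   # Earle-Sears vs. Spanberger in Buchanan

bu_registered = bu_ballots / bu_turnout                    # ~14,300 voters
extra_ballots = bu_registered * fc_turnout - bu_ballots    # ~3,800 ballots
net_gop_gain  = extra_ballots * (gop_share - dem_share)    # net margin gain
print(f"extra ballots at Falls Church turnout: {extra_ballots:,.0f}")
print(f"net Republican margin gain: {net_gop_gain:,.0f}")  # ~2,400, in line
# with the column's "about 2,000" estimate
```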
If Republicans want to win more statewide elections, they somehow need to persuade more rural voters — especially in Southwest Virginia — to start voting. The ease of mail voting would seem to be one way to do that. In Falls Church last year 867 people voted by mail, 800 of them for Spanberger, a share of 92.2%. In Buchanan County, just 244 people voted by mail, 48.7% for Spanberger in a county where overall she could only muster 17.9%. What that tells me is that Republicans are reluctant to vote by mail, which is their right, but that because so few people in Buchanan County vote anyway, Republicans would be wise to invest in a vote-by-mail push to increase turnout.
If the U.S. Supreme Court eventually nixes counting mail ballots that arrive after the deadline, Republicans can claim a victory in principle, but Democrats won’t have lost anything in practice. If Republicans want to claim more victories at the ballot box in Virginia, though, they need more mail voting. Just make sure those ballots get there on time.
Want more politics and analysis? Sign up for West of the Capital, our weekly political newsletter that goes out on Fridays. Sign up here:
The post U.S. Supreme Court may nix counting mail ballots after Election Day. The Virginia math says it doesn’t matter. appeared first on Cardinal News.
U.S. Supreme Court may nix counting mail ballots after Election Day. The Virginia math says it doesn’t matter. [Cardinal News] (04:15 , Wednesday, 25 March 2026)

I hate to burst anyone’s bubble — well, actually, I don’t mind, because some bubbles do need bursting.
Some on the right are celebrating, and some on the left are groaning, after the U.S. Supreme Court this week seemed to take a dim view of states — such as Virginia — that allow mail ballots to be counted even if they arrive after Election Day.
This may be a great principle to those on both sides, just in different ways, but here’s some inconvenient math: There aren’t really enough of these ballots to matter.
In theory, they could in a very close election, of course, and those on each side can hold to their principles that either a) counting these post-election ballots feels very sketchy (the conservative view) or b) disallowing them feels like disenfranchising some voters (the liberal view).
Then there’s this numerical view:
In last year’s governor’s race, Virginia saw 3,433,340 votes cast in the governor’s race, more than ever before.
Of those, 29,794 were mail ballots that were mailed before the deadline on Election Day but didn’t arrive until after the election. That’s 0.08% of the total.
Those who find this practice offensive would prefer the number to be 0.00%, but in practical terms, that 0.08% did not come anywhere close to making a difference. That small number ought to be reassuring to both sides: For conservatives who don’t like post-election ballots, they don’t matter in the big scheme of things; for liberals who worry that the Supreme Court is about to outlaw this practice, well, these votes don’t seem to matter in the big scheme of things. This legal fight is more about principle than practice, but the impact either way seems pretty negligible.
The closest House of Delegates race in Virginia last fall was in Harrisonburg and Rockingham County, where Republican incumbent Tony Wilt held off a stiff challenge by Democrat Andrew Payton in House District 34.
That race saw 28,927 votes cast. Of those, 90 were post-election ballots — 0.03%, lower than the statewide percentage.
Here’s where some politics do come in: Those who cast mail ballots are overwhelmingly Democratic, so these late-arriving ballots (which aren’t really late-arriving, they’re still on time) do skew quite blue. In that Wilt-Payton race, those 90 post-election ballots broke 69 for Payton, 21 for Wilt. That was a pickup for Payton of 48 votes — but it didn’t make a difference even in the closest race in the state. Wilt still won by 257 votes.
I can’t rule out that there’s some local race in Virginia where post-election mail ballots have made the difference, but they haven’t in any statewide or legislative election. If someone knows of a local election where they’d have, please let me know.
Allowing mail ballots to be counted as long as they’re postmarked in time, even if they arrive after Election Day, isn’t necessarily a liberal idea: The state at the center of the Supreme Court argument is Mississippi, which probably hasn’t done anything liberal in quite some time. Texas is another.
Virginia law allows for properly postmarked mail ballots if they arrive by noon on the Friday after Election Day, although a bill pending before Gov. Abigail Spanberger would extend that to 5 p.m. California allows a week. Alaska and Maryland allow up to 10 days. (Plus Guam and the Virgin Islands. How slow is the mail on Guam and the Virgin Islands?) Illinois allows up to 14 days. (You can find a full list here.)
The philosophical reason for allowing these ballots is the same as your tax return: The law doesn’t require the Internal Revenue Service to have your tax return in hand by April 15, just that you have to mail it by then. Why should we treat elections differently?
That’s balanced against history and some of the darker impulses of human nature to cheat: the 1948 Democratic Senate primary in Texas, where at first Lyndon Johnson appeared to have lost narrowly but then won when 202 votes were mysteriously “discovered” in one county. Those votes were in a box, not a mailbox, but the fear remains the same. We’re accustomed to election nights providing finality to an election, and so having a question mark remain for several days makes some people queasy — and suspicious.
This legal battle of counting deadlines on mail ballots is really just a byproduct of the expansion of voting by mail. In Virginia, we’ve now had mail voting for five years, starting in 2021. Over those five years, the percentage of Virginians who vote by mail has been remarkably consistent — from a low of 9.9% in the 2022 midterms to a high of 11.1% in both the 2023 legislative and local elections and the 2024 presidential and congressional elections.
When 1 in 10 voters prefer to mail in their ballots, that’s not an insignificant number. Now, remember what I said about how Democrats prefer mail voting? Here are the numbers.
Last fall, 14.2% of the voters in bright blue Fairfax County voted by mail. In Alexandria, 13.5% did. In Arlington County, 12.9%.
However, in the strongest Republican localities in the state — in Southwest Virginia — mail voting barely registered. In Scott County, 3.9% of the ballots were cast by mail. In Lee County, just 4.1%. In Buchanan and Russell counties, 4.5%.
Perhaps some Republicans think that if they could restrict or even abolish mail voting (as President Donald Trump would like to do, even though he votes by mail), they could depress the Democratic vote somewhat. Maybe so, but that’s actually not in the party’s best interests. The party that needs mail voting most is the Republican Party.
The counties with the lowest turnout are traditionally Republican counties — and not just any Republican counties, but the strongest ones in the state. These are also the counties with the lowest percentage of voters voting by mail. If Republicans want to be competitive in Virginia, they may need better candidates and better campaign messages — but they definitely need better turnout from their strongest counties. One easy way to do that (well, easy in theory) is to persuade more people to vote by mail.
Let’s look at some numbers. (Yes, I love numbers.)
Last year in Scott County, 83% of the voters cast ballots for the Republican candidate for governor, Winsome Earle-Sears, and that was actually on the low side. In the lieutenant governor’s race, the Republican percentage rose to 84.14% and hit 85.77% in the attorney general’s race.
However, only 45.8% of the county’s voters bothered to take part in the election. That was below the state average of 54.9%, which was pulled up by some Democratic-voting places such as Albemarle County, where 66.2% voted.
Scott County, as we’ve seen, also had the state’s lowest figures on mail-in ballots.
Scott County didn’t even have the lowest overall turnout in the state. That was in Buchanan County, where voter turnout was just 37.8% in a county that voted 81.91% to 83.98% for candidates on the Republican ticket. If a party has a locality where they know they can run up a score of 80% or more, they need to maximize their vote there, and Republicans aren’t. Here’s one way to look at how inefficient Republicans are with their base in Southwest Virginia. Bright red Buchanan County is slightly bigger than bright blue Falls Church. In Falls Church last fall, though, the voter turnout was 64.5% to Buchanan’s 37.8%. In practical terms, 7,600 people cast ballots in Falls Church while only 5,413 did in Buchanan. Put another way, Falls Church delivered 6,407 votes for Abigail Spanberger while Buchanan contributed 4,434 to to Winsome Earle-Sears’ tally. It people in Buchanan County voted at the same rate as those in Falls Church, Buchanan could have delivered about 2,000 more votes to the Republican cause. That wouldn’t have been enough to change the outcome, but the point is that Republicans are leaving a lot of votes uncast in Southwest Virginia.
If Republicans want to win more statewide elections, they somehow need to persuade more rural voters — especially in Southwest Virginia — to start voting. The ease of mail voting would seem to be one way to do that. In Falls Church last year 867 people voted by mail, 800 of them for Spanberger, a share of 92.2%. In Buchanan County, just 244 people voted by mail, 48.7% for Spanberger in a county where overall she could only muster 17.9%. What that tells me is that Republicans are reluctant to vote by mail, which is their right, but that because so few people in Buchanan County vote anyway, Republicans would be wise to invest in a vote-by-mail push to increase turnout.
If the U.S. Supreme Court eventually nixes counting mail ballots that arrive after the deadline, Republicans can claim a victory in principle, but Democrats won’t have lost anything in practice. If Republicans want to claim more victories at the ballot box in Virginia, though, they need more mail voting. Just make sure those ballots get there on time.
Want more politics and analysis? Sign up for West of the Capital, our weekly political newsletter that goes out on Fridays. Sign up here:
The post U.S. Supreme Court may nix counting mail ballots after Election Day. The Virginia math says it doesn’t matter. appeared first on Cardinal News.
U.S. Supreme Court may nix counting mail ballots after Election Day. The Virginia math says it doesn’t matter. [Cardinal News] (04:15 , Wednesday, 25 March 2026)

I hate to burst anyone’s bubble — well, actually, I don’t mind, because some bubbles do need bursting.
Some on the right are celebrating, and some on the left are groaning, after the U.S. Supreme Court this week seemed to take a dim view of states — such as Virginia — that allow mail ballots to be counted even if they arrive after Election Day.
This may be a great principle to those on both sides, just in different ways, but here’s some inconvenient math: There aren’t really enough of these ballots to matter.
In theory, they could in a very close election, of course, and those on each side can hold to their principles that either a) counting these post-election ballots feels very sketchy (the conservative view) or b) disallowing them feels like disenfranchising some voters (the liberal view).
Then there’s this numerical view:
In last year’s governor’s race, Virginia saw 3,433,340 votes cast in the governor’s race, more than ever before.
Of those, 29,794 were mail ballots that were mailed before the deadline on Election Day but didn’t arrive until after the election. That’s 0.8% of the total.
Those who find this practice offensive would prefer the number to be 0.00%, but in practical terms, that 0.8% did not come anywhere close to making a difference. That small number ought to be reassuring to both sides: For conservatives who don’t like post-election ballots, they don’t matter in the big scheme of things; for liberals who worry that the Supreme Court is about to outlaw this practice, well, these votes don’t seem to matter in the big scheme of things. This legal fight is more about principle than practice, but the impact either way seems pretty negligible.
The closest House of Delegates race in Virginia last fall was in Harrisonburg and Rockingham County, where Republican incumbent Tony Wilt held off a stiff challenge by Democrat Andrew Payton in House District 34.
That race saw 28,927 votes cast. Of those, 90 were post-election ballots — 0.3%, lower than the statewide percentage.
Here’s where some politics do come in: Those who cast mail ballots are overwhelmingly Democratic, so these late-arriving ballots (which aren’t really late-arriving, they’re still on time) do skew quite blue. In that Wilt-Payton race, those 90 post-election ballots broke 69 for Payton, 21 for Wilt. That was a pickup for Payton of 48 votes — but it didn’t make a difference even in the closest race in the state. Wilt still won by 257 votes.
I can’t rule out that there’s some local race in Virginia where post-election mail ballots have made the difference, but they haven’t in any statewide or legislative election. If someone knows of a local election where they’d have, please let me know.
Allowing mail ballots to be counted as long as they’re postmarked in time, even if they arrive after Election Day, isn’t necessarily a liberal idea: The state at the center of the Supreme Court argument is Mississippi, which probably hasn’t done anything liberal in quite some time. Texas is another.
Virginia law allows for properly postmarked mail ballots if they arrive by noon on the Friday after Election Day, although a bill pending before Gov. Abigail Spanberger would extend that to 5 p.m. California allows a week. Alaska and Maryland allow up to 10 days. (Plus Guam and the Virgin Islands. How slow is the mail on Guam and the Virgin Islands?) Illinois allows up to 14 days. (You can find a full list here.)
The philosophical reason for allowing these ballots is the same as your tax return: The law doesn’t require the Internal Revenue Service to have your tax return in hand by April 15, just that you have to mail it by then. Why should we treat elections differently?
That’s balanced against history and some of the darker impulses of human nature to cheat: the 1948 Democratic Senate primary in Texas, where at first Lyndon Johnson appeared to have lost narrowly but then won when 202 votes were mysteriously “discovered” in one county. Those votes were in a box, not a mailbox, but the fear remains the same. We’re accustomed to election nights providing finality to an election, and so having a question mark remain for several days makes some people queasy — and suspicious.
This legal battle of counting deadlines on mail ballots is really just a byproduct of the expansion of voting by mail. In Virginia, we’ve now had mail voting for five years, starting in 2021. Over those five years, the percentage of Virginians who vote by mail has been remarkably consistent — from a low of 9.9% in the 2022 midterms to a high of 11.1% in both the 2023 legislative and local elections and the 2024 presidential and congressional elections.
When 1 in 10 voters prefer to mail in their ballots, that’s not an insignificant number. Now, remember what I said about how Democrats prefer mail voting? Here are the numbers.
Last fall, 14.2% of the voters in bright blue Fairfax County voted by mail. In Alexandria, 13.5% did. In Arlington County, 12.9%.
However, in the strongest Republican localities in the state — in Southwest Virginia — mail voting barely registered. In Scott County, 3.9% of the ballots were cast by mail. In Lee County, just 4.1%. In Buchanan and Russell counties, 4.5%.
Perhaps some Republicans think that if they could restrict or even abolish mail voting (as President Donald Trump would like to do, even though he votes by mail), they could depress the Democratic vote somewhat. Maybe so, but that’s actually not in the party’s best interests. The party that needs mail voting most is the Republican Party.
The counties with the lowest turnout are traditionally Republican counties — and not just any Republican counties, but the strongest ones in the state. These are also the counties with the lowest percentage of voters voting by mail. If Republicans want to be competitive in Virginia, they may need better candidates and better campaign messages — but they definitely need better turnout from their strongest counties. One easy way to do that (well, easy in theory) is to persuade more people to vote by mail.
Let’s look at some numbers. (Yes, I love numbers.)
Last year in Scott County, 83% of the voters cast ballots for the Republican candidate for governor, Winsome Earle-Sears, and that was actually on the low side. In the lieutenant governor’s race, the Republican percentage rose to 84.14% and hit 85.77% in the attorney general’s race.
However, only 45.8% of the county’s voters bothered to take part in the election. That was below the state average of 54.9%, which was pulled up by some Democratic-voting places such as Albemarle County, where 66.2% voted.
Scott County, as we’ve seen, also had the state’s lowest figures on mail-in ballots.
Scott County didn’t even have the lowest overall turnout in the state. That was in Buchanan County, where voter turnout was just 37.8% in a county that voted 81.91% to 83.98% for candidates on the Republican ticket. If a party has a locality where they know they can run up a score of 80% or more, they need to maximize their vote there, and Republicans aren’t. Here’s one way to look at how inefficient Republicans are with their base in Southwest Virginia. Bright red Buchanan County is slightly bigger than bright blue Falls Church. In Falls Church last fall, though, the voter turnout was 64.5% to Buchanan’s 37.8%. In practical terms, 7,600 people cast ballots in Falls Church while only 5,413 did in Buchanan. Put another way, Falls Church delivered 6,407 votes for Abigail Spanberger while Buchanan contributed 4,434 to to Winsome Earle-Sears’ tally. If people in Buchanan County voted at the same rate as those in Falls Church, Buchanan could have delivered about 2,000 more votes to the Republican cause. That wouldn’t have been enough to change the outcome, but the point is that Republicans are leaving a lot of votes uncast in Southwest Virginia.
If Republicans want to win more statewide elections, they somehow need to persuade more rural voters — especially in Southwest Virginia — to start voting. The ease of mail voting would seem to be one way to do that. In Falls Church last year 867 people voted by mail, 800 of them for Spanberger, a share of 92.2%. In Buchanan County, just 244 people voted by mail, 48.7% for Spanberger in a county where overall she could only muster 17.9%. What that tells me is that Republicans are reluctant to vote by mail, which is their right, but that because so few people in Buchanan County vote anyway, Republicans would be wise to invest in a vote-by-mail push to increase turnout.
If the U.S. Supreme Court eventually nixes counting mail ballots that arrive after the deadline, Republicans can claim a victory in principle, but Democrats won’t have lost anything in practice. If Republicans want to claim more victories at the ballot box in Virginia, though, they need more mail voting. Just make sure those ballots get there on time.
Want more politics and analysis? Sign up for West of the Capital, our weekly political newsletter that goes out on Fridays. Sign up here:
The post U.S. Supreme Court may nix counting mail ballots after Election Day. The Virginia math says it doesn’t matter. appeared first on Cardinal News.
U.S. Supreme Court may nix counting mail ballots after Election Day. The Virginia math says it doesn’t matter. [Cardinal News] (04:15 , Wednesday, 25 March 2026)

I hate to burst anyone’s bubble — well, actually, I don’t mind, because some bubbles do need bursting.
Some on the right are celebrating, and some on the left are groaning, after the U.S. Supreme Court this week seemed to take a dim view of states — such as Virginia — that allow mail ballots to be counted even if they arrive after Election Day.
This may be a great principle to those on both sides, just in different ways, but here’s some inconvenient math: There aren’t really enough of these ballots to matter.
In theory, they could in a very close election, of course, and those on each side can hold to their principles that either a) counting these post-election ballots feels very sketchy (the conservative view) or b) disallowing them feels like disenfranchising some voters (the liberal view).
Then there’s this numerical view:
In last year’s governor’s race, Virginia saw 3,433,340 votes cast in the governor’s race, more than ever before.
Of those, 29,794 were mail ballots that were mailed before the deadline on Election Day but didn’t arrive until after the election. That’s 0.8% of the total.
Those who find this practice offensive would prefer the number to be 0.00%, but in practical terms, that 0.8% did not come anywhere close to making a difference. That small number ought to be reassuring to both sides: For conservatives who don’t like post-election ballots, they don’t matter in the big scheme of things; for liberals who worry that the Supreme Court is about to outlaw this practice, well, these votes don’t seem to matter in the big scheme of things. This legal fight is more about principle than practice, but the impact either way seems pretty negligible.
The closest House of Delegates race in Virginia last fall was in Harrisonburg and Rockingham County, where Republican incumbent Tony Wilt held off a stiff challenge by Democrat Andrew Payton in House District 34.
That race saw 28,927 votes cast. Of those, 90 were post-election ballots — 0.3%, lower than the statewide percentage.
Here’s where some politics do come in: Those who cast mail ballots are overwhelmingly Democratic, so these late-arriving ballots (which aren’t really late-arriving; they’re still on time) do skew quite blue. In that Wilt-Payton race, those 90 post-election ballots broke 69 for Payton, 21 for Wilt. That was a pickup for Payton of 48 votes — but it didn’t make a difference even in the closest race in the state. Wilt still won by 257 votes.
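For readers who want to check the math themselves, here is the arithmetic in one place: a minimal sketch in Python, using only the figures quoted above.

```python
# Sanity-check the column's percentages, using the figures reported above.
statewide_total = 3_433_340   # votes cast in last year's governor's race
post_election = 29_794        # ballots mailed on time, arriving after Election Day
print(f"statewide share: {post_election / statewide_total:.2%}")  # 0.87% (cited as 0.8%)

# House District 34 (Wilt vs. Payton), the closest race in the state
hd34_total, hd34_post = 28_927, 90
payton, wilt_votes = 69, 21   # how the 90 post-election ballots broke
print(f"HD-34 share: {hd34_post / hd34_total:.2%}")                     # 0.31%
print(f"net Payton pickup: {payton - wilt_votes}")                      # 48
print(f"Wilt's 257-vote margin survives: {257 > payton - wilt_votes}")  # True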
I can’t rule out that there’s some local race in Virginia where post-election mail ballots have made the difference, but they haven’t in any statewide or legislative election. If someone knows of a local election where they have, please let me know.
Allowing mail ballots to be counted as long as they’re postmarked in time, even if they arrive after Election Day, isn’t necessarily a liberal idea: The state at the center of the Supreme Court argument is Mississippi, which probably hasn’t done anything liberal in quite some time. Texas is another.
Virginia law allows for properly postmarked mail ballots if they arrive by noon on the Friday after Election Day, although a bill pending before Gov. Abigail Spanberger would extend that to 5 p.m. California allows a week. Alaska and Maryland allow up to 10 days. (Plus Guam and the Virgin Islands. How slow is the mail on Guam and the Virgin Islands?) Illinois allows up to 14 days. (You can find a full list here.)
The philosophical reason for allowing these ballots is the same as with your tax return: The law doesn’t require the Internal Revenue Service to have your tax return in hand by April 15, just that you have to mail it by then. Why should we treat elections differently?
That’s balanced against history and some of the darker impulses of human nature to cheat: the 1948 Democratic Senate primary in Texas, where at first Lyndon Johnson appeared to have lost narrowly but then won when 202 votes were mysteriously “discovered” in one county. Those votes were in a box, not a mailbox, but the fear remains the same. We’re accustomed to election nights providing finality to an election, and so having a question mark remain for several days makes some people queasy — and suspicious.
This legal battle of counting deadlines on mail ballots is really just a byproduct of the expansion of voting by mail. In Virginia, we’ve now had mail voting for five years, starting in 2021. Over those five years, the percentage of Virginians who vote by mail has been remarkably consistent — from a low of 9.9% in the 2022 midterms to a high of 11.1% in both the 2023 legislative and local elections and the 2024 presidential and congressional elections.
When 1 in 10 voters prefer to mail in their ballots, that’s not an insignificant number. Now, remember what I said about how Democrats prefer mail voting? Here are the numbers.
Last fall, 14.2% of the voters in bright blue Fairfax County voted by mail. In Alexandria, 13.5% did. In Arlington County, 12.9%.
However, in the strongest Republican localities in the state — in Southwest Virginia — mail voting barely registered. In Scott County, 3.9% of the ballots were cast by mail. In Lee County, just 4.1%. In Buchanan and Russell counties, 4.5%.
Perhaps some Republicans think that if they could restrict or even abolish mail voting (as President Donald Trump would like to do, even though he votes by mail), they could depress the Democratic vote somewhat. Maybe so, but that’s actually not in the party’s best interests. The party that needs mail voting most is the Republican Party.
The counties with the lowest turnout are traditionally Republican counties — and not just any Republican counties, but the strongest ones in the state. These are also the counties with the lowest percentage of voters voting by mail. If Republicans want to be competitive in Virginia, they may need better candidates and better campaign messages — but they definitely need better turnout from their strongest counties. One easy way to do that (well, easy in theory) is to persuade more people to vote by mail.
Let’s look at some numbers. (Yes, I love numbers.)
Last year in Scott County, 83% of the voters cast ballots for the Republican candidate for governor, Winsome Earle-Sears, and that was actually on the low side. In the lieutenant governor’s race, the Republican percentage rose to 84.14% and hit 85.77% in the attorney general’s race.
However, only 45.8% of the county’s voters bothered to take part in the election. That was below the state average of 54.9%, which was pulled up by some Democratic-voting places such as Albemarle County, where 66.2% voted.
Scott County, as we’ve seen, also had the state’s lowest figures on mail-in ballots.
Scott County didn’t even have the lowest overall turnout in the state. That was in Buchanan County, where voter turnout was just 37.8% in a county that voted 81.91% to 83.98% for candidates on the Republican ticket. If a party has a locality where they know they can run up a score of 80% or more, they need to maximize their vote there, and Republicans aren’t. Here’s one way to look at how inefficient Republicans are with their base in Southwest Virginia. Bright red Buchanan County is slightly bigger than bright blue Falls Church. In Falls Church last fall, though, the voter turnout was 64.5% to Buchanan’s 37.8%. In practical terms, 7,600 people cast ballots in Falls Church while only 5,413 did in Buchanan. Put another way, Falls Church delivered 6,407 votes for Abigail Spanberger while Buchanan contributed 4,434 to Winsome Earle-Sears’ tally. If people in Buchanan County voted at the same rate as those in Falls Church, Buchanan could have delivered about 2,000 more votes to the Republican cause. That wouldn’t have been enough to change the outcome, but the point is that Republicans are leaving a lot of votes uncast in Southwest Virginia.
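Here is a rough reconstruction of that counterfactual. All the inputs are the column’s own figures; the assumption that the extra ballots would split the way the county actually voted is mine, which may explain why the column’s rounder “about 2,000” sits a bit below these outputs.

```python
# Rough counterfactual: Buchanan County voting at Falls Church's turnout rate.
# Inputs are the column's figures; the vote-split assumption is illustrative.
buchanan_ballots, buchanan_turnout = 5_413, 0.378
falls_church_turnout = 0.645
gop_share = 4_434 / 5_413            # Earle-Sears' share of ballots, ~81.9%
dem_share = 0.179                    # Spanberger's share, per the column

registered = buchanan_ballots / buchanan_turnout                # ~14,300 voters
extra = registered * (falls_church_turnout - buchanan_turnout)  # ~3,800 ballots
print(f"extra GOP votes (gross): {extra * gop_share:,.0f}")                # ~3,100
print(f"extra GOP margin (net):  {extra * (gop_share - dem_share):,.0f}")  # ~2,400
# Either way, the figure lands in the low thousands -- consistent with the
# column's "about 2,000 more votes."
```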
If Republicans want to win more statewide elections, they somehow need to persuade more rural voters — especially in Southwest Virginia — to start voting. The ease of mail voting would seem to be one way to do that. In Falls Church last year 867 people voted by mail, 800 of them for Spanberger, a share of 92.2%. In Buchanan County, just 244 people voted by mail, 48.7% for Spanberger in a county where overall she could only muster 17.9%. What that tells me is that Republicans are reluctant to vote by mail, which is their right, but that because so few people in Buchanan County vote anyway, Republicans would be wise to invest in a vote-by-mail push to increase turnout.
If the U.S. Supreme Court eventually nixes counting mail ballots that arrive after the deadline, Republicans can claim a victory in principle, but Democrats won’t have lost anything in practice. If Republicans want to claim more victories at the ballot box in Virginia, though, they need more mail voting. Just make sure those ballots get there on time.
Want more politics and analysis? Sign up for West of the Capital, our weekly political newsletter that goes out on Fridays.
The post U.S. Supreme Court may nix counting mail ballots after Election Day. The Virginia math says it doesn’t matter. appeared first on Cardinal News.
Sites in Bedford County, Lynchburg and Roanoke added to Landmarks Register [Cardinal News] (04:05 , Wednesday, 25 March 2026)

Virginia’s Board of Historic Resources has added seven sites to the state’s Landmarks Register, including ones in Bedford County, Lynchburg and Roanoke.
Those three are the old Montvale High School, the historic district around Randolph College and the Fishburn Park Keepers Cottage.
Other new additions to the register are in Fauquier County, Hanover County, Newport News and Suffolk.
The state advises: “Listing a property in the state or national registers is honorary and sets no restrictions on what owners may do with their property. The designation is foremost an invitation to learn about and experience authentic and significant places in Virginia’s history. Designating a property to the state or national registers—either individually or as a contributing building in a historic district—provides an owner the opportunity to pursue historic rehabilitation tax credit improvements to the building.”
Here’s a list of the seven sites, with descriptions by the Department of Historic Resources:
From 1870, the year public education was established in Virginia, through the early 20th century, rural schools were small one- or two-room buildings that lacked uniformity. Montvale High School in Bedford County, built in 1930 for White students in grades one through twelve, is representative of the commonwealth’s efforts to improve public education in rural Virginia during the Progressive Era by consolidating and standardizing schools. Designed in the Colonial Revival architectural style with Art Deco influences, the school’s consistent expansion from the 1930s to the 1960s demonstrates its increasing importance as a community educational center as well as a social hub in rural Bedford County during the 20th century.
The Saunders House in the Fauquier County town of Warrenton was built in 1870 for the mercantile Saunders family during the Reconstruction period as a way to protect family assets from creditors. Designed in the Italianate architectural style, the house features a floor plan that is notably popular in rural parts of Fauquier County.
Henry Clay Elementary School in the Hanover County town of Ashland was built in 1934 for White students using funds from the Public Works Administration and the Virginia Literary Fund. Initially completed as a one-story brick Colonial Revival building featuring 13 classrooms, the school was expanded in subsequent decades to include additional classrooms, a library and a covered walkway to connect the school with the Ashland War Memorial Building, which was constructed about 75 yards west of the school in 1951. Henry Clay Elementary closed in the spring of 2024.
Located in the Rivermont neighborhood in the City of Lynchburg, the Randolph-Macon Woman’s College Historic District includes the 53-acre historic campus of Randolph-Macon Woman’s College, one of the earliest and longest-operating women’s colleges in the South. Established in 1891, the college evolved over time in response to rising demand for higher education as women’s roles in society changed following the Civil War. The campus buildings, which date from 1891 to 1975, were designed by prominent local and national architects and encompass a wide variety of architectural styles.
Since the 1920s, 2108 Jefferson Avenue — currently the site of Pearlie’s Restaurant — in the East End community of the City of Newport News has been home to a variety of restaurants owned and operated by African Americans, including African American women. In 1962, the present Commercial-style building, constructed ca. 1951, was listed as Grant’s Restaurant in Virginia’s Negro Motorist Green Book, a guide to hotels, restaurants, service stations and other businesses that welcomed Black travelers during Jim Crow. 2108 Jefferson Avenue was listed in the Green Book as The Tavern restaurant from 1939 to 1950. The building is the only documented Green Book resource in Newport News that has remained under the same use as its Green Book listing. 2108 Jefferson Avenue was designated a landmark under The Negro Traveler’s Green Book in Virginia Multiple Property Documentation Form (MPD), which the Board of Historic Resources also approved during its meeting on March 19.
Built in phases between ca. 1820 and ca. 1850, Fishburn Park Keeper’s Cottage is possibly the oldest surviving building within the City of Roanoke. The cottage exemplifies an early 19th-century hewn-log farmhouse that was enlarged to accommodate a growing family with the addition of lateral wings, one built of logs and the other of timber frame. While both types of construction were once common in Southwest Virginia dwellings, the assortment of joinery techniques in Keeper’s Cottage makes it a unique building in the region.
In the City of Suffolk, the Ames-Old Farm exemplifies the typical home of a yeoman farmer in Virginia’s Tidewater region in the early 19th century. In addition to the farm’s original, two-and-a-half-story main house, which was built in 1815 in the Federal architectural style, the property also encompasses six agricultural outbuildings, all of which were built in the mid- to late 19th century and early 20th century. In around 1875, the house was expanded to include a dining room and kitchen, marking the first phase of modifications, before additional rooms were added in 1965 and a garage in 1985. The property’s evolution reflects its owners’ efforts to accommodate shifting domestic and agricultural needs at the turn of the 20th century.
The post Sites in Bedford County, Lynchburg and Roanoke added to Landmarks Register appeared first on Cardinal News.
Tech briefs: After Trump’s AI framework released, members of Virginia’s congressional delegation weigh in [Cardinal News] (04:05 , Wednesday, 25 March 2026)

Thanks, Cardinal readers, for checking out the latest tech briefs, covering the digital and life sciences landscapes. The briefs go live every Wednesday in Cardinal News.
Got tips and/or questions? Reach out to me via tad@cardinalnews.org.
The Trump administration last week released what it called a national policy framework for artificial intelligence, which it said would supersede “a patchwork of conflicting state laws” that would “undermine American innovation.”
U.S. Rep. Morgan Griffith, R-Salem, and Sen. Tim Kaine, D-Va., said they had not fully reviewed the proposal yet. Sen. Mark Warner, D-Va., in a Friday news release, said it “takes steps in the right direction but lacks significant substance” and would hamstring states’ ability to address issues important to their constituents.
The White House’s list of legislative recommendations addresses issues that include parental control of children’s access and privacy, plus protection from deepfake images; energy ratepayers’ responsibility for data centers’ power use; large language models’ use of humans’ original works; accelerating AI’s use across industry; workforce training and new job creation; and preempting state AI laws that would interfere with federal objectives.
Congress would have to develop and pass any such legislation.
Griffith said through a spokesman on Friday that he looks forward to reviewing the framework “on how to best continue the United States’ position as the global leader in AI.”
He added: “As a member of the Energy and Commerce Committee, which has jurisdiction over significant elements of U.S. policy in the AI race, I will work with other leaders in Congress to solve issues that may be impediments to American industry in this important global field.”
Kaine, in a statement through his office on Monday, said he was still reviewing the proposal.
“AI needs strong legislative and regulatory safeguards to keep minors safe, protect consumer privacy, and gather information on workforce effects, all while promoting innovation,” he said. “Since federal AI standards would preempt state-level standards, including those of states that have stronger protections in place already, any nationwide standard must be carefully considered and bipartisan.”
In his news release, Warner said that he has long supported bipartisan legislation regarding children’s privacy and data, and has advocated for action on nonconsensual deepfake images. Warner said that his recently introduced Economy of the Future Commission Act is intended as a first step to developing laws about education, labor, commerce and economic policy.
For two consecutive years, the Senate Intelligence Committee, of which Warner is vice chair, passed bills requiring national security agencies to address AI-related national security threats, he said. Both years, the bills died, which Warner blamed on President Donald Trump’s congressional allies.
“The framework is worse than silent on AI-powered mis- and disinformation, a real and growing threat to our elections, our markets, and our country,” he said in the news release. “Instead, it trots out the same old talking points about combatting partisan or ideological bias to cloak its own inaction on — and worse, its encouragement and use of — deepfakes and other AI slop being used for a wide range of harmful activity.”
An event that’s like an Academy Awards or Grammys celebration for Southwest Virginia tech and life sciences is returning on May 7.
The Roanoke Blacksburg Technology Council uses its annual TechNite to present awards, including top entrepreneur, innovator, tech company, leadership, educators, rising stars and the council’s hall of fame, while guests dine and sip cocktails.
TechNite is set for the Hotel Roanoke and Conference Center. Register and get more information at rbtc.tech/events/technite.
The post Tech briefs: After Trump’s AI framework released, members of Virginia’s congressional delegation weigh in appeared first on Cardinal News.
Agenda Lynchburg: City to get first look at proposed budget on Thursday [Cardinal News] (04:00 , Wednesday, 25 March 2026)

Lynchburg’s city manager will present the city’s proposed 2027 fiscal year budget this week, kicking off a months-long process of work sessions and public engagement opportunities leading up to a scheduled budget adoption in late May.
Residents’ first opportunity to hear about the budget is set for 6 p.m. Thursday, during a city council meeting in city hall. The budget presentation will include proposals for the city’s general operating budget, water and stormwater rates, and capital improvement program for the fiscal year that begins July 1 and runs through June 30, 2027.
Thursday’s presentation is the first of many budget meetings scheduled for this spring. The city council’s budget retreat is set for April 3, followed by two work sessions on April 14 — one with the school board and one with the Greater Lynchburg Transit Company.
The first time residents can weigh in on the proposed budget will be during a public hearing scheduled for 6 p.m. April 23.
From there, the city council is scheduled to hold a budget work session on April 28, a first reading of the budget on May 12 and a second reading of the budget on May 26. The city has historically adopted its budget during the second reading meeting.
Lynchburg residents can learn about the proposed budget at a series of informal drop-in information sessions scheduled for next month.
Specific meeting information and additional budget resources, including a capital improvement project dashboard, can be found on the city’s budget webpage. As of 6 p.m. Tuesday, budget documents had not yet been posted to the city’s website.
City department heads were asked to submit flat budgets for Lynchburg’s proposed 2027 fiscal year budget, said Chief Financial Officer Donna Witt at a January work session.
“They couldn’t turn in a budget that exceeded their ’26 budget allocations. … So if anything was contractual or inflationary, they had to be absorbed within that allocation. That’s not happy people,” Witt said, referencing department heads who struggle to keep their budgets flat as prices rise. “And then, if you have a new program or a service that you wanted to do, you had to figure out how to do it within that flat budget. So maybe you stop doing something to do something else.”
Last year, Lynchburg’s budget season stretched from March 11, when the city manager first presented the proposed budget, to June 30, when the 2026 budget was adopted on the last day of the 2025 fiscal year. It made significant investments in city initiatives, including a cost-of-living adjustment for general city employees, a pay progression plan for sworn police and fire personnel, and a contribution of more than $80 million to the capital improvement program, which includes $60 million to maintain aging school infrastructure, $12.5 million to renovate the library and $10.4 million to improve Miller Park Pool.
Last year’s budget discussions often centered on the city’s real estate tax rate, which was reevaluated as city council members wrestled with the reality of an uncharacteristically high real estate assessment.
The approved 84-cent rate in effect today represents a decrease in the tax rate from 2025 but an increase in tax payments for most residents due to the reassessment that upped the value of real estate. City department heads made cuts last budget season to adjust their spending levels to the lowered city revenue, resulting in the closure of the city’s environmental learning center called the Nature Zone, reduced operating hours for the Lynchburg Visitor Center and Museum and the loss of some staff positions and other services.
Real estate reassessments happen every other year in Lynchburg, meaning real estate values will hold steady during this budget season.
The post Agenda Lynchburg: City to get first look at proposed budget on Thursday appeared first on Cardinal News.
Cannabis advocates: Why Virginia cannabis retail must wait for Virginia cannabis supply [Cardinal News] (04:00 , Wednesday, 25 March 2026)

It takes six months to turn a cannabis clone into a tested, packaged product on a dispensary shelf. The bill on Gov. Spanberger’s desk sets January 1, 2027, as the first day of adult-use sales. New cultivators will need to get approved, build a facility and complete that entire biological process on a timeline that realistically requires two years. The math does not work, and when states ignore this math, the results are not theoretical. They are documented.
The General Assembly passed a bill that launches retail before Virginia-grown supply exists on dispensary shelves. This does not eliminate the illicit market; it invites it in. Through a phenomenon called inversion, illicit products enter the legal supply chain because legal supply does not exist to meet legal demand.
New Jersey just demonstrated exactly how this works.
In March 2026, New Jersey’s Cannabis Regulatory Commission suspended the cultivation and manufacturing licenses of Jersey Strong/Mollitiam after a routine inspection revealed the company was growing cannabis at a secret, unlicensed outdoor site and moving it into its licensed facility. The acting executive director told the commission that “everyone was deceived.” Multiple companies across the state had already purchased Mollitiam’s material and manufactured distillate from it. By the time regulators caught it, illicit products had contaminated finished goods across New Jersey’s legal supply chain.
This was not a rogue actor exploiting a healthy market. This was a predictable outcome of a market that launched retail sales nearly four years ago and still cannot produce enough legal supply to fill its own shelves. New Jersey has issued 552 cultivation licenses since its April 2022 adult-use launch. Forty-six are operational. That is 8.3% — almost four years in.
When legal demand exceeds legal supply, basic economics fills the gap. Licensed businesses facing empty shelves and rent payments do not close their doors. They source products wherever they can find them. New York experienced the same dynamic where licensed businesses were caught with illicit, out-of-state products.
This is not an enforcement failure. It is a sequencing failure. The states built the storefront before they built the supply chain.
Building a legal cannabis supply chain is not like opening a liquor store. Virginia cannot import cannabis from other states. Every product on a dispensary shelf must be grown, tested and tracked within the commonwealth.
A licensed cultivation facility requires 12 to 18 months to build before a single seed enters the ground. Site design, environmental permits and regulatory approvals consume 3 to 6 months. Construction, specialized HVAC, electrical infrastructure for commercial grow lights and environmental controls can take another 6 to 12 months. Then, systems must be commissioned and calibrated before cultivation begins. All of this assumes compliant real estate has already been secured and no equipment delays materialize, problems that have plagued cannabis buildouts in every state.
Once the building is finished, production timelines are biological. Propagation requires 2 to 3 weeks. Vegetative growth takes 1 to 4 weeks. Flowering spans 8 to 10 weeks. Post-harvest processing, drying, curing, laboratory testing and packaging all add another 4 to 6 weeks before the product can legally reach a dispensary through track-and-trace.
Total timeline from facility completion to first legal sale: approximately six months. And the first harvest from a brand-new facility is essentially a trial run. Consistent production takes at least another cycle to dial in.
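Those stage lengths compound; a quick sketch summing the ranges quoted above shows why roughly six months is the realistic floor.

```python
# Add up the post-construction production stages quoted above, in weeks.
stages = {
    "propagation": (2, 3),
    "vegetative growth": (1, 4),
    "flowering": (8, 10),
    "drying, curing, testing, packaging": (4, 6),
}
low = sum(lo for lo, _ in stages.values())   # 15 weeks
high = sum(hi for _, hi in stages.values())  # 23 weeks
print(f"{low}-{high} weeks from clone to shelf-ready product")
# Roughly 3.5 to 5.5 months of pure biology and processing; add the
# track-and-trace transfer and a trial-run first harvest, and you reach
# the approximately six months cited above.
```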
On January 1, 2027, only the five existing pharmaceutical processors, who already have flower curing in their vaults, could possibly be ready. Every independent cultivator licensed under the new framework will have barely broken ground.
“We need tax revenue now.” Tax revenue from a market contaminated by inversion is illusory. New York and New Jersey have collected taxes while simultaneously funding enforcement actions against the retailers generating them and now against the cultivators supplying them. Stable revenue requires a stable market. A stable market requires time to build a diversified grower base.
“Delaying retail helps the illicit market.” The opposite is true. Opening retail without supply creates the conditions that pull the illicit market into licensed stores. That is what inversion is. New Jersey’s 8.3% cultivator activation rate did not keep illicit products out. It guaranteed illegal products got in.
“Medical operators can handle early demand.” Medical cultivators produce for their existing patient population, not an entire state’s recreational consumers. Illinois proved this in its first week — operators admitted the shortage was structurally unavoidable regardless of how much they stockpiled. Overwhelming medical facilities with adult-use demand does not expand the legal market. It destabilizes the medical one, forcing over 100,000 Virginia patients to compete for supply or seek alternatives elsewhere.
Virginia should adopt a Ready Together framework that ties retail authorization to demonstrated supply readiness, not an arbitrary calendar date.
The mechanism is straightforward. In each Health Service Area, pharmaceutical processors convert to adult-use sales only after enough independent cultivators and retailers are operational to establish genuine wholesale and retail competition, a threshold the governor has the authority to set. A cultivator is operational when it has completed a harvest and transferred tested product to a licensed retailer. A retailer is operational when it has completed sales to consumers.
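To make that trigger concrete, here is a hypothetical sketch of the logic. The threshold values and data layout are illustrative assumptions, not bill text or the authors’ exact proposal.

```python
# Hypothetical sketch of the "Ready Together" trigger described above.
# Thresholds and record layout are illustrative, not from any bill text.
from dataclasses import dataclass

@dataclass
class HealthServiceArea:
    name: str
    operational_cultivators: int  # completed a harvest AND transferred
                                  # tested product to a licensed retailer
    operational_retailers: int    # completed sales to consumers

def may_convert_to_adult_use(hsa: HealthServiceArea,
                             min_cultivators: int = 5,   # governor-set
                             min_retailers: int = 5) -> bool:
    """Processors in this HSA may convert to adult-use sales only once
    genuine wholesale and retail competition exists."""
    return (hsa.operational_cultivators >= min_cultivators
            and hsa.operational_retailers >= min_retailers)

print(may_convert_to_adult_use(HealthServiceArea("HSA III", 6, 7)))  # True
print(may_convert_to_adult_use(HealthServiceArea("HSA I", 1, 3)))    # False
```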
This is not a delay. It is a launch sequence. It prevents the geographic concentration and supply starvation that produced inversion in New Jersey and New York. It protects Virginia’s medical patients from a supply crisis that Illinois documented in its first week and New Jersey documented with $360,000 in fines against operators, two of whom hold Virginia pharmaceutical processor permits, for violating medical patient protections during their own adult-use launch. And it aligns every incentive correctly: processors who want to convert have every reason to support the independent licensing process, not obstruct it.
The compromise bill is on Gov. Spanberger’s desk. She has the final authority to determine when this market opens and how. The states that legalized earlier have generated lessons and data that Virginia can draw on in a way no state before it could. Tying the launch to supply readiness rather than a date would set a new standard, one that is not just best for Virginia but a model of industry best practice for the states that follow.
The governor can sign a launch date, or she can sign an innovative but practical launch standard.
Chelsea Higgs Wise, MSW, is the co-founder and executive director of Marijuana Justice, which organizes for fair and equitable legalization in Virginia. chelsea@marijuanajustice.org
Max Jackson is the founder of Cannabis Wise Guys and specializes in translating between cannabis operations, investment and policy. He has provided expert testimony to the Virginia legislature on preventing market consolidation in emerging cannabis markets. He can be reached at Max@cannabiswiseguys.com.
The post Cannabis advocates: Why Virginia cannabis retail must wait for Virginia cannabis supply appeared first on Cardinal News.
Headlines from across the state: Dominion produces first power from Coastal Virginia Offshore Wind project; more … [Cardinal News] (03:45 , Wednesday, 25 March 2026)

Economy:
Dominion produces first power from Coastal Virginia Offshore Wind project. — Virginia Mercury.
Barboursville Vineyards sells to private investors. — The (Charlottesville) Daily Progress (paywall).
Politics:
On 16th anniversary of Affordable Care Act, Virginia’s federal lawmakers and health leaders weigh risks. — Virginia Mercury.
Culture:
Taking inspiration from Buddhist brothers, Louisa monks plan their own Walk for Peace. — The (Charlottesville) Daily Progress (paywall).
Weather:
For more weather news, follow weather journalist Kevin Myatt on Twitter / X at @kevinmyattwx and sign up for his free weather email newsletter. His weekly column appears in Cardinal News each Wednesday afternoon.
The post Headlines from across the state: Dominion produces first power from Coastal Virginia Offshore Wind project; more … appeared first on Cardinal News.
Deep Breath: Okay, Let’s Talk About That Controversial DLSS 5 Demo [Techdirt] (11:06 , Tuesday, 24 March 2026)
The polarization over any and all uses of artificial intelligence and machine learning continues. And, to be clear, I very much understand why this is all so controversial. Any new technology that has the chance to be transformative will also necessarily be disruptive and that causes fear. Fear that is not entirely unfounded, no matter your other opinions on the matter. If that’s you, cool, I get it.
I’ll start this off by pointing to the latest edition of the Techdirt podcast in which both Mike and Karl engaged in a fantastic discussion about the use of AI. I’ve listened to it twice now; it’s that good. And, while I found myself arguing out loud with the both of them at certain points during the podcast, despite the fact that neither of them could hear my retorts, it presents a grounded, often nuanced conversation, which we need much more of in this space.
And now, in what might be a subconscious attempt by this writer to commit suicide by comments section, let’s talk about that controversial demo of NVIDIA’s forthcoming DLSS 5 technology. What DLSS 5 does compared with previous versions of the technology is indeed new, but what is not new is the presence of AI and machine learning in the equation. DLSS 2 and 3 had that already, in the form of pixel reconstruction and frame generation. DLSS 5, however, introduces what is being labeled “neural rendering,” which uses machine learning to alter lighting, environmental detail and, most importantly, character rendering, all applied on top of the engine’s finished 2D image output. Here’s the video demo that got everyone talking.
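Before getting to the backlash, it is worth pinning down what “on top of the 2D image output” means. Here is a toy illustration; nothing in it is NVIDIA’s actual API or model, just the shape of the idea: a learned pass applied to the finished frame rather than a change inside the engine.

```python
import numpy as np

def toy_neural_pass(frame: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned post-process: a 3x3 filter swept over the
    engine's finished 2D output. Real neural rendering uses deep networks
    and far more inputs, but the key point is the same: the model operates
    on the rendered image, not inside the game engine."""
    h, w = frame.shape
    out = frame.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.clip((frame[y-1:y+2, x-1:x+2] * weights).sum(), 0, 1)
    return out

frame = np.random.rand(8, 8)  # pretend: one grayscale frame from the engine
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
print(toy_neural_pass(frame, sharpen).shape)  # (8, 8): same frame, new look
```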
The backlash to the video was wide, immediate, and furious. There was a great deal of talk about the alteration of artistic intent, about whether this changed what the original developers were attempting to portray when they created the games, and, of course, industry jobs. I want to talk about the major complaint pillars seen across many outlets below, but this backlash also supposedly came with death threats aimed at NVIDIA employees. I would very much hope we could all at least agree that any threats of that nature are completely inappropriate and absurd.
With that, here is what I’ve seen in the backlash and what I’d want to say about it.
Get your damned AI out of my games!
Perhaps not the most common pushback I saw in all of this, but a very common one. And a silly one, too. As I mentioned above, previous DLSS versions already used AI and machine learning. That isn’t new. How it’s applied is certainly new, but that isn’t the same as the demand to keep AI entirely out of the video game industry.
And if that’s where you are, go ahead and shake your fist at the clouds in the sky. AI is a tool and, as I’ve now said repeatedly, the conversation we should be having is how it’s used in gaming, not if it’s used. That’s because its use is largely a foregone conclusion and it is an open question as to whether its use will be a net benefit or negative overall to the industry. Dogmatic purists on AI have a stance that is understandable, but also untenable. We’re too far down this road to turn around and go home. And if the tech were able to lower the barriers of entry to the gaming industry, acting as the fertilizer that allows a thousand indie studios to sprout roots, would that really be so bad for the gaming ecosystem?
I can appreciate the purists’ point of view. I really can. I just don’t see where they have a place in the conversation when it comes to gaming.
It overrides artistic intent!
Does it? If it did, then hell yes that’s bad. But if it doesn’t, then this concern goes away entirely.
DLSS 5 is built with options and customizable sliders for game developers. That’s really, really important here. At the macro level, a developer that has decided to use DLSS 5, or has gone further and customized how it’s used in its games, is exercising consent over its products. That should be obvious.
But then we get into really interesting questions of art, the actual artist, and the ownership of that art, because those last two are very different things. As Digital Foundry outlines:
It may even raise consent and other questions surrounding artistic integrity. On site and witnessing the demos in motion, concerns about this seemed less of a problem when the games we saw had been signed off by the studios that made them – the contentious assets we’ve seen, likewise. Nothing from the DLSS 5 reveal released by Nvidia has not been approved by the studios that own those games. But perhaps the issue isn’t just about specific approvals by specific developers on agreed DLSS 5 integrations, but rather the whole concept of a GPU reinterpreting game visuals according to a neural model that has its own ideas about what photo-realism should look like.
While we’ve seen endorsements from Bethesda’s Todd Howard and Capcom’s Jun Takeuchi, to what extent does that consent apply to the entire development team and other artists associated with the production? And by extension, there is also the question of whether now is the right time to launch DLSS 5 at a time when the games industry is under enormous pressure, jobs are on the line and cost-cutting is a major focus in the triple-A space. The technology itself cannot function without the work of game creators – it needs final game imagery to work at all – but the extent to which it could be viewed as a worrying sign of “things to come” cannot be overstated bearing in mind the reactions elsewhere to generative AI.
That strikes me as a valid and interesting ethical question when it comes to the use of this technology, but one that is probably overwrought. Individual artists who work on video games already have their artistic output live at the pleasure of the game developers they contract with. Those developers already can use this game art in all kinds of ways that the individual artist may not have had in mind when creating it, or indeed have even considered such possibilities. DLSS 5 is just one more version of that, with the main difference being that it involves AI making changes to game images. That’s an important thing to consider, sure, but there are cousins to this ethical question that we’ve all come to accept already. This strikes me more as part of the “all AI is bad all the time” crowd finding a foothold in something other than dogma to grab onto.
Developers and publishers own their games. If they want to use DLSS 5 in those games, there is little other than specific work for hire or other contractual stipulations with individual artists that would keep them from implementing it. If artists don’t like that, I completely understand that point of view, but that’s what contract negotiations and language are for.
Bottom line: I have been as vocal as anyone arguing that video games are a form of art for well over a decade now, and I struggle to agree that an optional technology with approved buy-in from game developers and publishers equates to “overriding artistic intent,” writ large.
The faces in these examples look like shit, are “yassified”, or suffer from the uncanny valley effect!
Look, here we’re going to get into matters of opinion. I have to say that when I viewed the demo video myself, I had the opposite reaction. And, yes, this opens me up to claims that I am somehow a massive fan of AI-created pornography (this is where the yassified comments come in), or that I just want all the characters to look “hot” (I’m too old for that shit), or that my older age of 44 means I’ve lost touch with what video games should look like. Despite my genuine respect for the dissenting opinions here, allow me to say this: bullshit.
The caveat to all of this is that the demo revealed very little in the way of this technology working within these games in motion. It’s also certainly true that NVIDIA chose the best potential images to show off its new technology. If the DLSS 5 rendering sucks out loud in a larger in-motion game, or if the images it creates end up being inconsistent throughout gameplay, or if it does just end up looking shitty, then I’ll be right there with you with a torch and pitchfork in hand.
And here’s the other thing to consider with this particular complaint, combined with the previous one about artistic intent: do any of you use visual mods in your games? I do. A ton of them. For a variety of reasons. I have used them to alter the faces and models for games like Starfield and Skyrim, among many others. Do I need to feel bad for altering the artist’s intent? Do I need to apologize for incorporating mods to make characters and environments appear in a way that helps me better connect with the game I’m playing?
Because I’m not going to do either. And I don’t expect you to. Nor do I expect game developers that choose to use this optional technology to beg for forgiveness for their own output.
The hardware demands to run all of this are insane!
Fine, then you’ll get what you want and nobody will be able to use this technology anyway. But I don’t think that will be the case. NVIDIA knows what it will take to run this tech once it leaves the demo stage and goes into production. The idea that they would hype up technology that nobody can use strikes me as unlikely in the extreme.
Conclusion: everyone take a breath
This still strikes me as more of an “all AI is bad” crowd grasping at lots of other things to buttress their pushback than anything else. AI has plenty, plenty of potential pitfalls. Worried about jobs in the gaming industry and elsewhere? Me too! But if you’re not also looking at the potential upsides for the industry, then you’re engaging in dogma, not conversation.
Will DLSS 5 be good? I have no idea and neither do you. Will DLSS 5 alter previously released games in a way that fundamentally alters how we play these games? I have no idea and neither do you. Will it negatively impact the gaming industry when it comes to the number of jobs within it? I have no idea and neither do you.
This was a tech demo. Details on how it works are still trickling out. Most recently, there has been some clarification as to the 2D rendering nature of the technology and what that means for the output on the screen. As an early demo of the technology, feedback is going to be important, so long as it’s informed and reasonable feedback.
The technology may end up being trash and hated for reasons other than “all AI is bad all the time.” If that ends up being the case, I trust the gaming market to work that out for itself. But a lot of the hand-wringing here looks to me to be speculative at best.
An Open Training Set For AI Goes Global [Techdirt] (06:37 , Tuesday, 24 March 2026)
As many of the AI stories on Walled Culture attest, one of the most contentious areas in the latest stage of AI development concerns the sourcing of training data. To create high-quality large language models (LLMs), massive quantities of training data are required. In the current genAI stampede, many companies are simply scraping everything they can off the Internet. Quite how that will work out in legal terms is not yet clear. Although a few court cases involving the use of copyright material for training have been decided, many have not, and the detailed contours of the legal landscape remain uncertain.
However, there is an alternative to this “grab it all” approach. It involves using materials that are either in the public domain or released under a “permissive” license that allows LLMs to be trained on them without any problems. There’s plenty of such material online, but its scattered nature puts it at a serious disadvantage compared to downloading everything without worrying about licensing issues. To address that, the Common Corpus was created and released just over a year ago by the French startup Pleias. A press release from the AI Alliance explains the key characteristics of the Common Corpus:
Truly Open: contains only data that is permissively licensed and provenance is documented
Multilingual: mostly representing English and French data, but contains at least 1 billion tokens for over 30 languages
Diverse: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers
Extensively Curated: spelling and formatting have been corrected from digitized texts, harmful and toxic content has been removed, and material with low educational value has also been removed.
There are five main categories of material: OpenGovernment, OpenCulture, OpenScience, OpenWeb, and OpenSource:
OpenGovernment contains Finance Commons, a dataset of financial documents from a range of governmental and regulatory bodies. Finance Commons is a multimodal dataset, including both text and PDF corpora. OpenGovernment also contains Legal Commons, a dataset of legal and administrative texts. OpenCulture contains cultural heritage data like books and newspapers. Many of these texts come from the 18th and 19th centuries, or even earlier.
OpenScience data primarily comes from publicly available academic and scientific publications, which are most often released as PDFs. OpenWeb contains datasets from YouTube Commons, a dataset of transcripts from public domain YouTube videos, and websites like Stack Exchange. Finally, OpenSource comprises code collected from GitHub repositories which were permissively licensed.
The initial release contained over 2 trillion tokens – the usual way of measuring the volume of training material, where tokens can be whole words and parts of words. A significant recent update of the corpus has taken that to over 2.267 trillion tokens. Just as important as the greater size is the wider reach: there are major additions of material from China, Japan, Korea, Brazil, India, Africa and South-East Asia. Specifically, the latest release contains data for eight languages with more than 10 billion tokens (English, French, German, Spanish, Italian, Polish, Greek, Latin) and 33 languages with more than 1 billion tokens. Because of the way the dataset has been selected and curated, it is possible to train LLMs on fully open data, which leads to auditable models. Moreover, as the original press release explains:
By providing clear provenance and using permissibly licensed data, Common Corpus exceeds the requirements of even the strictest regulations on AI training data, such as the EU AI Act. Pleias has also taken extensive steps to ensure GDPR compliance, by developing custom procedures to enable personally identifiable information (PII) removal for multilingual data. This makes Common Corpus an ideal foundation for secure, enterprise-grade models. Models trained on Common Corpus will be resilient to an increasingly regulated industry.
Another advantage for many users is that material with high “toxicity scores” has already been removed, thus ensuring that any LLMs trained on the Common Corpus will have fewer problems in this regard.
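For readers who want to inspect the corpus directly, here is a minimal sketch. The Hugging Face repository id (`PleIAs/common_corpus`) and the `text` field name are assumptions based on Pleias’s public releases, so check the hub page before relying on them.

```python
# Minimal sketch: stream a few Common Corpus records from Hugging Face.
# Repository id and field name are assumptions; verify on the hub page.
from datasets import load_dataset  # pip install datasets

ds = load_dataset("PleIAs/common_corpus", split="train", streaming=True)
for i, record in enumerate(ds):
    text = record.get("text", "")          # assumed field name
    print(i, text[:80].replace("\n", " ")) # first 80 chars of each record
    if i >= 2:
        break
```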
The Common Corpus is a great demonstration of the power of openness and permissive copyright licensing, and how they bring benefits that other approaches can’t match. For example: “Common Corpus makes it possible to train models compatible with the Open Source Initiative’s definition of open-source AI, which includes openness of use, meaning use is permitted for ‘any purpose and without having to ask for permission’. ” That fact, along with the multilingual nature of the Common Corpus, would make the latest version a great fit for any EU move to create “public AI” systems, something advocated on this blog a few months back. The French government is already backing the project, as are other organizations supporting openness:
The Corpus was built up with the support and concerted efforts of the AI Alliance, the French Ministry of Culture as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC).
This dataset was also made in partnership with Wikimedia Enterprise and Wikidata/Wikimedia Germany. We’re also thankful to our partner Libraries Without Borders for continuous assistance on extending low resource language support.
The corpus was stored and processed with the generous support of the AI Alliance, Jean Zay (Eviden, Idris), Tracto AI, Mozilla.
The unique advantages of the Common Corpus mean that more governments should be supporting it as an alternative to proprietary systems, which generally remain black boxes in terms of where their training data comes from. Publishers, too, would be wise to fund it, since it offers a powerful resource explicitly designed to avoid some of the thorniest copyright issues plaguing the generative AI field today.
Follow me @glynmoody on Mastodon and on Bluesky. Originally published to Walled Culture.
Techdirt Podcast Episode 447: The Future Of Section 230 [Techdirt] (04:30 , Tuesday, 24 March 2026)
Last month, Mike participated in the Cato Institute’s Section 230 at 30 event to mark the 30th anniversary of the passage of Section 230. The event featured a series of fireside chats and panels that went deep on the past, present, and future of the all-important law, and you can watch videos of all of them on Cato’s website — but for this week’s episode of the podcast, we’ve got the audio of Mike’s panel (moderated by Jennifer Huddleston and also featuring Jess Miers, Matt Perault, and Matt Reeder), all about how Section 230 and similar policies will apply to new technologies like decentralized protocols and artificial intelligence.
You can also download this episode directly in MP3 format.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
ALPR Tech Now Preventing Parents From Enrolling Their Kids In School [Techdirt] (03:19 , Tuesday, 24 March 2026)
All the people who have always brushed off concerns about surveillance tech, please come get your kids. And then let someone else raise them.
Lots of people are fine with mass surveillance because they believe the horseshit spewed by the immediate beneficiaries of this tech: law enforcement agencies that claim every encroachment on your rights might (MIGHT!) lead to the arrest of a dangerous criminal.
Running neck and neck with government surveillance state enthusiasts are extremely wealthy Americans. When they’re not adding new levels of surveillance to the businesses they own, they’re scattering cameras all around their gated communities and giving cops unfettered access to any images these cameras record.
Here’s how it plays out at the ground level: parents can’t get their kids enrolled in the nearest school because of surveillance tech. In one recent case, license plate reader data was used to deny enrollment because the data collected claimed the parent didn’t actually reside in the school district.
Just over a year ago, Thalía Sánchez became the proud owner of a home in Alsip. She decided to leave the bustle of the city for a quiet neighborhood setting and the best possible education for her daughter.
However, to this day, despite providing all required paperwork including her driver’s license, utility bills, vehicle registration, and mortgage statement, the Alsip Hazelgreen Oak Lawn School District 126 has repeatedly denied her daughter’s enrollment.
Why would the district do this? Well, it’s apparently because it has decided to trust the determinations made by its surveillance tech partner, rather than documents actually seen in person by the people making these determinations.
According to the school district, her daughter’s new student enrollment form was denied due to “license plate recognition software showing only Chicago addresses overnight” in July and August. In an email sent to Sánchez in August, the school district told her, “Although you are the owner on record of a house in our district boundaries, your license plate recognition shows that is not the place where you reside.”
But that’s obviously not true. Sánchez says the only reason plate reader data would have shown her car “staying” in Chicago is that she lent it to a relative during that time period. The school insists this data is enough to overturn the documents she’s provided because… well, it doesn’t really say. It just claims it “relies” on this information gathering to determine residency for students.
All of this can be traced back to Thomson Reuters, which has apparently branched out into the AI-assisted, ALPR-enabled business of denying enrollment to students based on assumptions made by its software.
Here’s what little there is of additional information, as obtained by The Register while reporting on this case:
Thomson Reuters Clear, which more broadly is an AI-assisted records investigation tool, has a page dedicated to its application for school districts. It sells Clear as a tool for residency verification, claiming that it can “automate” such tasks with “enhanced reliability,” and can take care of them “in minutes, not months.”
One of the particular things the Clear page notes is its ability to access license plate data “and develop pattern of life information” that helps identify whether those who are claiming they’re residents for the sake of getting a kid enrolled in school are lying or not.
Thomson Reuters does not specify where it gets its license plate reader data and did not respond to questions.
We’ll get to the highlighted sentence in a moment, but let’s just take a beat and consider how creepy and weird this Thomson Reuters promotional pitch is:
[Screenshot: Thomson Reuters Clear promotional page aimed at school districts]
The text reads:
Gain deeper insights into mismatched data to support meaningful conversations with families and ensure students are where they need to be. Identify where cars have been seen to establish pattern of life information.
No one expects a law enforcement agency to do this (at least without a warrant or reasonable suspicion), much less a school district. Government agencies shouldn’t have unfettered access to “pattern of life” information just because. It’s not like the people being surveilled here are engaged in criminal activity. They’re just trying to make sure their kids receive an education. And while there will always be people who game the system to get their kids into better schools, that’s hardly justification for subjecting every enrolling student’s family to expansive surveillance-enabled background checks.
And while Thomson Reuters (and the district itself) has refused to comment on the source of its plate reader data, it can safely be assumed that it’s Flock Safety. Flock Safety has never shown any concern about who accesses the data it compiles, much less why they choose to do it. Flock is swiftly becoming the leading provider of ALPR cameras, and given its complete lack of internal or external oversight, it’s more than likely that it’s feeding this data to third parties like Thomson Reuters that are willing to pay a premium for data that simply can’t be had elsewhere.
We’re not catching criminals with this tech. Sure, it may happen now and then. But the real value is repeated resale of “pattern of life” data to whoever is willing to purchase it. That’s a massive problem that’s only going to get worse… full stop.
The Trump Admin’s Own Investigators Found No EU Internet Censorship. So They Ignored The Findings. [Techdirt] (01:44 , Tuesday, 24 March 2026)
The Washington Post just published a deeply reported story about the Trump administration’s campaign to “expand free speech” in Europe. That headline alone should tell you something about how the story is framed — it takes the administration’s self-description at face value, as though we’re watching some noble effort to export the First Amendment across the Atlantic.
But if you get past the incredibly misleading headline, the actual reporting reveals quite an admission from within the administration, one that fundamentally undercuts everything it has supposedly been doing about “EU internet censorship.” The story reveals that the Trump administration ran its own investigation into EU censorship, found nothing, and then barreled ahead with the entire crusade anyway.
Worth repeating, because it’s the whole story (even if WaPo buried it with their headline): the Trump admin investigated “EU censorship.” The Trump admin came up empty. And then the administration just kept going as if it were undeniable that what their own investigators couldn’t find must have happened anyway.
The Post’s opening gets to it relatively quickly, but treats it as mere scene-setting rather than the incredible revelation it actually is:
In early 2025, aides to Vice President JD Vance ordered a small office at the State Department to document how European regulators were censoring online speech.
Staffers launched an investigation focusing on the European Union’s Digital Services Act, a sweeping 2022 social media law requiring large tech companies to limit the spread of harmful or illegal speech on the continent.
The weeks-long investigation, details of which have not previously been reported, uncovered no records indicating censorship, according to two people familiar with the matter, who spoke on the condition of anonymity for fear of retribution.
“There is no evidence that Member States of the European Union are overreaching the DSA to censor and criminalize online content,” they wrote in conclusion.
“There is no evidence.” That’s the conclusion of the Trump administration’s own investigators, put in writing. And then, an even more remarkable quote from someone involved:
“We did not find anything,” said one of the people. “It was not politically convenient that we could not find anything.”
“It was not politically convenient that we could not find anything.”
That is quite an admission. A government official is telling you directly that the conclusions were inconvenient, and therefore irrelevant. The investigation was entirely about manufacturing justification for a policy that was already decided. When the justification didn’t materialize, they just ignored it and moved forward anyway.
This is the hallucination presidency in action: when the facts don’t match the narrative, just assert the narrative anyway and hope no one checks.
The Washington Post, to its credit, did the hard reporting here and obtained those quotes. But the headline (“Inside the Trump administration’s campaign to expand ‘free speech’ in Europe”) and subhed (“The United States has banned some European researchers from entering the country and dismantled federal programs intended to fight foreign disinformation campaigns”) describe the administration’s actions without conveying the most explosive finding of the piece: that the evidentiary foundation for all of these actions does not exist. The actual story here is far bigger than the Post’s framing lets on.
Because here’s what the administration did after its own investigators told them there was no evidence of EU censorship: pretty much everything you could imagine a government would do if it had found evidence.
Despite the finding, the Trump administration has pressed ahead with a wide-ranging State Department effort to crack down on what it alleges is widespread censorship in the E.U., according to documents reviewed by The Post and nine people involved or aware of the campaign, many of whom spoke on the condition of anonymity to protect their livelihoods.
It has banned some European researchers from entering the United States and dismantled federal programs intended to fight foreign disinformation campaigns. Behind the scenes, the administration has crafted a plan to allow American tech companies to skirt European rules, using the federal government’s powers to control exports, according to two of the people and documents.
The department is preparing to launch a website to host banned content. A teaser for the site, freedom.gov, includes a mounted Paul Revere-type figure galloping over the words “Freedom is coming.”
Yes, there is literally going to be a government website with a Paul Revere figure galloping over the words “Freedom is coming.” Your tax dollars at work. There is a certain kind of person in government who genuinely confuses propaganda aesthetics with policy substance, and this is a pristine example.
The State Department’s official response to the Post is also worth noting for its brazenness:
The State Department said in a statement that it has been consistent in raising concerns about the Digital Services Act and a similar British law and had “never ‘concluded’ anything to the contrary.”
They’re claiming they “never concluded” that the DSA wasn’t censorship — even though their own staffers put it in writing that they found no evidence of censorship. The scare quotes around “concluded” are doing a lot of heavy lifting there. They’re trying to gaslight their own investigation.
Now, I want to be clear about something. I have been critical of aspects of the DSA for years. There are real concerns about how expansive content regulation can be abused — by governments on either side of the Atlantic. When former EU Commissioner Thierry Breton tried to use the DSA to pressure Elon Musk into not platforming Donald Trump, I called it out as a clear overreach and a genuine threat to free speech principles.
But the Trump administration’s campaign has almost nothing to do with those legitimate concerns. Instead, it’s built on vibes and political convenience, disconnected from anything their own investigators could actually find.
And we know this because we’ve already watched this play out in real time. The single biggest piece of “evidence” the administration and its allies keep pointing to is the EU’s $140 million fine against X (formerly Twitter) from December 2025. The House Judiciary Committee’s Jim Jordan called it “the Commission’s most aggressive censorship step to date,” describing it as “obvious retaliation for its protection of free speech around the globe” in a recently released report.
Sounds terrifying. Except that fine had literally nothing to do with censorship. The violations were about three specific transparency failures: misleading users when Elon changed verification from actual verification to “pay $8 for a checkmark,” maintaining a broken ad repository, and refusing to share required data with researchers. As Stanford’s expert on platform regulation, Daphne Keller, explained at the time:
Don’t let anyone — not even the United States Secretary of State — tell you that the European Commission’s €120 million enforcement against Elon Musk’s X under the Digital Service Act (DSA) is about censorship or about what speech users can post on the platform. That would, indeed, be interesting. But this fine is just the EU enforcing some normal, boring requirements of its law. Many of these requirements resemble existing US laws or proposals that have garnered bipartisan support.
Zero of the charges were about what content X allowed or didn’t allow on its platform.
Meanwhile, the real-world consequences of this evidence-free campaign are landing on actual people. We discussed how absolutely backwards it is for the US to be banning critics under the banner of free speech, and The Post reports on how that’s playing out with HateAid, a German group that supports victims of online abuse and whose CEO had her US entry banned:
Josephine Ballon, the group’s chief executive, learned just before Christmas that she had been banned from entering the United States. The State Department issued the ban on the grounds that Ballon and others “led organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose,” which she denies.
She compared Trump’s tactics to those used by the online bullies that her organization teaches victims about.
“This is intended to intimidate us and silence us,” she said in an interview. “We are not silenced by the German far right and we will not be by the U.S.”
The US is banning people from entering the country due to their speech — to “protect free speech” — based on claims its own investigators couldn’t substantiate.
I think we found the censorship. And it’s coming from inside the US.
And the hypocrisy runs even deeper than the empty evidentiary cupboard, as we’ve documented before. While the Trump administration screams about EU censorship, FCC Chair Brendan Carr — the same person who traveled to Barcelona to give a speech declaring that “free speech” was “in retreat” because of the DSA — has been actively using his government position to threaten American media companies into silence. When he pressured Disney into temporarily pulling Jimmy Kimmel off the air, he faced zero consequences. He’s still in the job, still making threats.
Meanwhile, the EU actually pushed out Thierry Breton when he overstepped and tried to abuse the DSA to pressure platforms on content. The system the Trump administration claims is an engine of censorship responded to actual overreach by removing the overreacher. The system the Trump administration runs rewarded its overreacher with continued power and more threats.
I keep coming back to that quote: “It was not politically convenient that we could not find anything.” That may be the most honest sentence anyone in this administration has uttered about this entire campaign. The conclusion was written before the investigation started. The policy was set before the evidence was gathered. When reality failed to cooperate with the narrative, reality was simply discarded.
Policy by vibes. Governance by meme. With real consequences for real people and real institutions — imposed by the very people who cannot stop telling you how much they care about free expression. The same people whose own investigators found nothing — and whose response to finding nothing was to start banning foreigners from entering the country for their speech.
Daily Deal: Build A Weather App With Ruby On Rails [Techdirt] (01:39 , Tuesday, 24 March 2026)
It’s time you get up to speed with Ruby on Rails! This full-stack web framework is all about letting you build applications quickly. Its elegance, flexibility, and speed make Ruby on Rails a popular choice for businesses, so taking the time to master it can pay huge dividends down the road. In this course, you’ll follow along with the instructor as he uses Ruby on Rails to create an ozone air quality monitoring weather app. You’ll understand Ruby on Rails in just two hours and know how to use it to build awesome web apps. It’s on sale for $17 when you use the code MARCH15 at checkout (through 3/29/26).
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
5th Circuit Flips Cop V. Protester Case To Jury After Spending 7 Years Pretending The 1st Amendment Doesn’t Exist [Techdirt] (12:25 , Tuesday, 24 March 2026)
“Exhaustion” is a legal term. It means plaintiffs need to explore the rest of their options before asking a court to handle their case, or before asking a higher court to handle a case the lower court has declared not quite exhausted enough.
“Exhaustion” is also a human term. And that’s where we are with this case, nearly nine years since a federal court first told the (then-anonymous) cop to GTFO with his weird-ass complaints against [checks original filing] Twitter, the entire Black Lives Matter social movement, and lifelong anti-police violence activist DeRay Mckesson.
The origin of this case is Mckesson’s appearance at a Black Lives Matter demonstration in Baton Rouge, Louisiana all the way back in July of 2016. So, we’re a decade in and yet, this cop (now known as John Ford) gets to keep trying to make things worse for DeRay and the First Amendment. And the Fifth Circuit Appeals Court seems hellbent on letting him do this.
The 2019 ruling made it abundantly clear Officer John Ford could not sue Twitter, a Twitter hashtag, or Mckesson for injuries he sustained when someone who was not DeRay Mckesson lobbed a projectile and hit him in the head.
This should have been obvious to everyone, even someone recently recovering from a head wound. But on appeal, the Fifth Circuit simply feigned ignorance of the law. I am not even kidding. It said Mckesson had a duty of care during his peaceful protest that it would never apply to cops who hurl flashbangs into toddlers’ cribs:
Given the intentional lawlessness of this aspect of the demonstration, Mckesson should have known that leading the demonstrators onto a busy highway was most nearly certain to provoke a confrontation between police and the mass of demonstrators, yet he ignored the foreseeable danger to officers, bystanders, and demonstrators, and notwithstanding, did so anyway. By ignoring the foreseeable risk of violence that his actions created, Mckesson failed to exercise reasonable care in conducting his demonstration.
Yep, just because the protest closed off a roadway, Mckesson MIGHT be responsible for lawless activities other than his own. Mckesson was never criminally charged for blocking off a highway. Nevertheless, the court thought it might be possible that he was somehow responsible for someone else deciding to lob a chunk of concrete at nearby police officers.
The Fifth is a Circus, not a Circuit. Even the Supreme Court — as chock full of MAGA loyalists as it is — found this to be a bit too much, something it tends to find quite often when dealing with appeals bubbling up from the Fifth’s primordial ooze. It sent the case back down to the Fifth, which then decided it should make this a state law case, in obvious hopes of finding some way to keep this cop’s bullshit lawsuit alive.
The dissent in this ruling, which turfed it to the state’s top court, made it explicitly clear that the majority was twisting itself into legal pretzels just to give this aggrieved cop several more bites of this rotting apple:
Indeed, the lone “inciteful” speech quoted in Doe’s complaint is something Mckesson said not to a fired-up protestor but to a mic’ed-up reporter—the day following the protest: “The police want protestors to be too afraid to protest.” Tellingly, not a single word even obliquely references violence, much less advocates it. Temporally, words spoken after the protest cannot possibly have incited violence during the protest. And tacitly, the majority opinion seems to discard the suggestion that Mckesson uttered anything to incite violence against Officer Doe.
The case has now been returned to the Fifth Circuit. The Louisiana Supreme Court ruled that Mckesson’s actions could amount to the sort of negligence that might satisfy statutory requirements, but it never said whether it actually believed his presence at this protest met those standards.
So, this case has been remanded (once) by the US Supreme Court due to the Fifth’s faulty logic. It has been sent back to the district level twice, with the court finding in both cases that Mckesson cannot be held liable for the actions of the person who hit the cop with a rock. A huge stack of adverse rulings has been generated by the Fifth’s refusal to respect the First Amendment and/or force the cop to sue the person who actually injured him.
And yet, the Fifth persists. Because it’s the Fifth. It draws heavily from the state Supreme Court ruling — one in which the court was only asked (1) whether such a charge might be plausible and (2) whether damages could be recovered if said accusation proved to be true. No certified question was asked about the constitutional issues raised by suing a protester for being at a protest where someone else injured a cop. No question was asked as to whether it is constitutional to treat every person at a protest as equally liable for any crime that might be committed during a protest.
Those questions weren’t asked because the Fifth Circuit didn’t want those answers. All it wanted was a reason to allow this cop to sue a Black protester because this was the only name the cop had managed to gather during his nine years of litigation.
And here’s a court that would move heaven and earth to prevent a lawsuit against a cop from being handled by a jury, now moving heaven and earth [PDF] to ensure it will happen when a cop sues a regular person. (h/t Gabriel Malor on Bluesky)
And what’s said by the court is disturbing — not just because it attempts to hold recognizable people who are easy targets for lawsuits responsible for other people’s actions, but also because it attempts to smear an entire movement (especially as personified by the defendant in this case) as inherently dangerous and unlawful. There’s a lot of loaded language here, which is especially suspect when the court is claiming the right thing to do is hand this off to an impartial jury:
[T]he district court erred because the evidence in the record corroborates Officer Ford’s testimony. As recounted above, the evidence demonstrates that Mckesson helped plan the protest, was a leader in many protests that have turned violent, amplified messages about the protest on social media, and gave orders to the crowd during the protest. Additionally, a video of Mckesson’s position near the police as they cut off the protestors from accessing the interstate substantiates the other evidence. This evidence all tends to support that Mckesson was a leader of the protest, if the jury so determines.
[…]
Mckesson supported these violent protests, and he refused to condemn the use of violence in a televised interview on CNN. Consequently, whether Mckesson breached his duty to Officer Ford and others raises a triable jury question.
The only supporting documents the court offers are those submitted by the officer. There are lots of things citing the officer’s complaint, but that’s not the stuff the court is supposed to be citing as supportive in this appeal. Remember, Doe/Ford was the losing party in the district court case. He’s the moving party, as the legal parlance goes. The appellate court is supposed to grant more deference to the non-moving party during appeals. But the Fifth has gone the other way… multiple times in the same case! The cop got his deference at the lower level as the plaintiff. He’s not supposed to get it again when he loses.
Having done the wrong thing at least twice, the court tosses it to what the majority must feel might be a sympathetic (to the cop) jury in Louisiana. While it’s always happy to terminate litigation when cops are the defendants, it seems more than willing to extend litigation when it’s the cops who are suing citizens.
There’s a dissent that runs nearly as long as the majority ruling. It’s great that it’s there and that it recognizes the Fifth’s willingness to pretend the First Amendment doesn’t matter when it’s a cop that’s doing the complaining (in the legal sense of the word, and also the regular sense of the word).
But the majority makes the rules. The Fifth has decided that — at least in this case — it will side with the moving party and pretend that holding protesters or protest organizers legally responsible for any criminal or civil violations committed by other protesters doesn’t have any effect on the First Amendment whatsoever. It’s a convenient abdication of its role as a check/balance — one delivered by a court that has, for years, demonstrated it would rather see 100 innocent people punished than allow one guilty cop to suffer the consequences of his actions.
Bedrock Rockhound Sandal: Adventure Flip-Flops? [BIKEPACKING.com] (12:06 , Tuesday, 24 March 2026)
The new Bedrock Rockhound Sandal is a flip-flop with a supportive footbed and a tough Vibram outsole. And just like Bedrock Sandals' other models, they are repairable and resoleable. Take a closer look here...
The post Bedrock Rockhound Sandal: Adventure Flip-Flops? appeared first on BIKEPACKING.com.
A Lucky Shot with a Kiev 60 – One Shot Story [35mmc] (12:00 , Tuesday, 24 March 2026)
The story behind this photo is, actually, the story of the others that I failed to take on this roll of Ferrania P30 film. Having remembered that I had a number of medium-format rolls, I decided to spend some time with my Soviet-era Kiev 60. My sample was, like many of this model, affected...
The post A Lucky Shot with a Kiev 60 – One Shot Story appeared first on 35mmc.
Computing on real iron [Open source software and nice hardware] (10:44 , Tuesday, 24 March 2026)
+++ Tuesday 24 March 2026 +++

Computing on real iron
======================

When someone mentions computing on `real iron', the term nowadays usually refers to a system installed on real hardware, as opposed to a virtual machine or a container.

This hasn't always been the case. In the past, the term distinguished a system running on a mainframe from a system running on a PC. Although PCs were (and are) used as servers, the people who called a mainframe `real iron' often considered a PC a toy system, not fit for serious tasks. And although the first PCs were very expensive, they were indeed not very capable, and gaming soon became one of their uses. The first `servers', e.g. in a Novell network, were based on the 80286 processor, later followed by the 80386, the 80486, eventually the Pentium, and after that by even more capable processors.

When people started to compute on GPUs and the first GPU clusters appeared, the PC entered the realm of supercomputing. For many tasks, GPU clusters became a valid alternative to supercomputing systems, at a much lower price tag (and a much lower electricity bill).

General device
--------------

The PC has evolved into a general device used for many different tasks. The laptop in your backpack, the PC on your desktop and the server where you self-host your website are all about the same. When we speak of a server, it is often more to describe the role of the system than its capabilities. Maybe servers contain less consumer-grade hardware, like cheap SSDs, but that has more to do with expected lifetime. Of course, data centers use systems that are not very comparable to an ordinary PC. But in design, they are not that different.

SoC
---

Interesting in this evolution is the development of the SoC, the System on Chip. SoCs, made for mobile communication, found their way to single-board computers (SBCs). Again, they were expensive at first, until SBCs like the Raspberry Pi and the BeagleBone Black arrived and disrupted the market at a price point of about 25% of comparable SBCs. Again we saw the development of more capable systems, with more powerful processors, more RAM, faster NICs and so on. Now the situation has blurred even more: we see laptops and desktop computers based on Raspberry Pis, and recently Apple launched a laptop based on an iPhone processor.

When you want to learn how to administer a system (maintaining a DNS server or a webserver, for example), or when you want to learn programming, this is a great advantage. The BSD running on your BeagleBone or Raspberry Pi, or on a repurposed router, or on a cheap second-hand PC, is just the same BSD that runs on servers in data centers. When you have an internet connection, you can start self-hosting your webserver, Gopher server, XMPP server and so on, on hardware that costs less than Euro 100 (or $100).

All thanks to that "toy system", not for serious tasks...

Last edited: $Date: 2026/03/24 15:44:12 $
The Seattle-Made RatKing Orodruin Bars are Heat-Treated [BIKEPACKING.com] (10:28 , Tuesday, 24 March 2026)
The newly released RatKing Orodruin Bars have just the right amount of rise and sweep for any style of riding, and they’re heat-treated for increased yield strength and fatigue life. Check them out here...
The post The Seattle-Made RatKing Orodruin Bars are Heat-Treated appeared first on BIKEPACKING.com.
Teravail Circos Wheels Review: Invisible [BIKEPACKING.com] (10:11 , Tuesday, 24 March 2026)
Nic didn’t think much about his first brand-new pair of carbon wheels. After a few thousand miles of use on various bikes, in different configurations, and through all kinds of weather, the Teravail Circos carbon wheels simply disappeared beneath him. Having put them thoroughly through their paces, his review takes a deeper look at this nearly invisible set of circles…
The post Teravail Circos Wheels Review: Invisible appeared first on BIKEPACKING.com.
Winners of the 2026 Iditarod Trail Invitational 1000: The Fab Four [BIKEPACKING.com] (09:46 , Tuesday, 24 March 2026)
On Monday, four riders now known as the "Fab Four" became the first and only two-wheeled finishers of the 2026 Iditarod Trail Invitational 1000. The group spent over 27 frozen days in the Alaskan backcountry, pushing forward even as everyone else turned back. Find a recap from Kari Gibbons of the Wild Winter Women and photos from their incredible ride here...
The post Winners of the 2026 Iditarod Trail Invitational 1000: The Fab Four appeared first on BIKEPACKING.com.
Salmon Skin Silver Surly Krampus and Lingering Cranberry Bridge Club [BIKEPACKING.com] (09:23 , Tuesday, 24 March 2026)
Surly just announced two fresh paint colors for its core rigid dirt-touring bikes. Check out the new Salmon Skin Silver Krampus and Lingering Cranberry Bridge Club here...
The post Salmon Skin Silver Surly Krampus and Lingering Cranberry Bridge Club appeared first on BIKEPACKING.com.
Collective Reward #239: Restrap Switch Rack [BIKEPACKING.com] (09:05 , Tuesday, 24 March 2026)
For our 239th Collective Reward, we’re giving one randomly selected Bikepacking Collective member a Restrap Switch Rack and Cage—a clever, modular cargo system designed to bring rack functionality to nearly any modern thru-axle bike. Find details here...
The post Collective Reward #239: Restrap Switch Rack appeared first on BIKEPACKING.com.
Harman Switch Azure [35mmc] (09:00 , Tuesday, 24 March 2026)
I’m a little late to the party with the release of Harman Azure – it was launched a few weeks back, but I was out of action in the run-up, so I only had a chance to shoot one roll and was late sending it to be processed. So here I am a few weeks later...
The post Harman Switch Azure appeared first on 35mmc.
Self-propagating malware poisons open source software and wipes Iran-based machines [Biz & IT - Ars Technica] (08:38 , Tuesday, 24 March 2026)
A new hacking group has been rampaging across the Internet in a persistent campaign that spreads a self-propagating, never-before-seen backdoor — and, curiously, a data wiper that targets Iranian machines.
The group, tracked under the name TeamPCP, first gained visibility in December, when researchers from security firm Flare observed it unleashing a worm that targeted cloud-hosted platforms that weren’t properly secured. The objective was to build a distributed proxy and scanning infrastructure and then use it to compromise servers for exfiltrating data, deploying ransomware, conducting extortion, and mining cryptocurrency. The group is notable for its skill in large-scale automation and integration of well-known attack techniques.
More recently, TeamPCP has waged a relentless campaign that uses continuously evolving malware to bring ever more systems under its control. Late last week, it compromised virtually all versions of the widely used Trivy vulnerability scanner in a supply-chain attack after gaining privileged access to the GitHub account of Aqua Security, the Trivy creator.
‘Merger Synergies’: CBS News Fires Workers, Shutters 100 Year Old CBS Radio [Techdirt] (08:22 , Tuesday, 24 March 2026)
All modern major U.S. media mergers follow the same trajectory. Executives pump out a bunch of pre-merger lies about job creation and innovation that are parroted by a lazy access press, followed by the rubber stamping by corrupt regulators, followed by oodles of price hikes, layoffs, and quality erosion caused by panicked efforts to pay down preposterous merger debt.
Rinse, wash, and repeat.
After promising this for a while, CBS last week announced it was laying off around six percent of its workforce, or around 60 employees, after the company was acquired by right wing billionaire Larry Ellison last year. The company also announced it would be destroying the 100-year-old CBS News Radio (there was no indication of what, if anything, it planned to do with the archival history).
CBS News boss Bari Weiss offered this statement in the wake of the layoffs:
“Today we are reducing the size of our workforce, and employees who are affected will be notified by the end of the day. We recognize that this is a difficult time for those who will be leaving CBS News. Because these aren’t just names on a list. They are talented, committed colleagues who have been critical to our success. We’ll treat them all with care and respect.
It’s no secret that the news business is changing radically, and that we need to change along with it. New audiences are burgeoning in new places, and we are pressing forward with ambitious plans to grow and invest so that we can be there for them. That means some parts of our newsroom must get smaller to make room for the things we must build to remain competitive.
But these are very hard choices and today is a difficult day. This is a tough message to receive at any time, and especially in the middle of an exceptionally intense news cycle. This organization is working its heart out to deliver for our audience. We’re so grateful to all of you, and we thank you for handling this difficult news with compassion.”
You’re to ignore, of course, that Bari Weiss appears to have absolutely no idea what she’s doing, outside of a generalized and obvious sense that she’d like to make the network even more friendly to right wing autocrats like Donald Trump and Benjamin Netanyahu.
Weiss’ inaugural “town hall” with opportunistic right wing grifter Erika Kirk was a ratings dud, Weiss’ new nightly news broadcast has been an error-prone hot mess, and her delay of a 60 Minutes story about Trump concentration camps continues to plague the network and cause a continued revolt among remaining journalists, who are tripping over themselves in a rush to the exits.
There are likely to be even greater layoffs as the Ellisons pursue their even more problematic acquisition of Warner Brothers (and CBS and NBC), adding significantly more debt to the company at a very precarious time for traditional television and Hollywood. It’s something the network’s unionized employees are well aware of.
Again, the solution to this is to have a genuine antitrust renaissance in the U.S., and block each and every instance of pointless “growth for growth’s sake” consolidation.
These deals do nothing but generate short-term stock bumps (sometimes), tax breaks, and delusion among the brunchlord extraction class that they’re “savvy dealmakers” as they engage in financial acrobatics to create the illusion of perpetual growth.
These fictions are all aided by a lazy press damaged by the very same pointless consolidation. This particular merger is complicated by the fact that the Trump-loyal Ellisons very clearly see Viktor Orbán’s autocrat-friendly media in Hungary as a model worth emulating. The only bright spot is that nobody, just like Warner Bros.’ last few suitors, appears to have any idea what they’re actually doing.
The problem is, even if the Ellisons and autocrats fail completely and CBS collapses, they’ve “succeeded” in destroying another journalistic outlet on their way to what they hope will be total U.S. ideological domination.
What I’d Do Differently (If I Didn’t Work in the Bike Industry) [BIKEPACKING.com] (07:33 , Tuesday, 24 March 2026)
In this opinion[ated] piece, BIKEPACKING.com’s founding editor offers a candid peek at what he’d do differently if he didn’t work in the bike industry. Find his thoughts on what bikes and gear he’d be using (and which he wouldn’t) if he didn’t have access to or the need to try the latest and greatest here…
The post What I’d Do Differently (If I Didn’t Work in the Bike Industry) appeared first on BIKEPACKING.com.
New Adventures in Lo-Fi with 110 [35mmc] (06:00 , Tuesday, 24 March 2026)
In 2025, with the inspiration of 35mmc, I began to explore the world of 110 gear and film. 110 as a format was developed by Kodak in the early 1970s and was immensely popular during that decade and the 1980s. It seems that interest began to wane during the 1990s and most camera manufacturers stopped...
The post New Adventures in Lo-Fi with 110 appeared first on 35mmc.
‘Martinsville Missile’ sets a new land speed record for a stock car [Cardinal News] (04:45 , Tuesday, 24 March 2026)
So what kind of trip did Joey Arrington and Tommy Hurley have Monday at Kennedy Space Center?
It was a blast.
It was a gas.
It set a new record.
Hurley drove a machine prepared and powered in Martinsville by Arrington to a new land speed record for a stock car as the Ridgeway resident hit 253 mph on the 3-mile concrete strip at the Shuttle Landing Facility at Cape Canaveral in Florida.
The speed set by the “Martinsville Missile” broke the old record of 244.9 mph set in 2007 at Bonneville Salt Flats in Utah by Russ Wicks, who also piloted a car with an engine built by Arrington.
Hurley, a 50-year-old Ridgeway resident with a drag racing background, drove a rebuilt 1969 Dodge Daytona Charger with a 1,000-horsepower, 358-cubic-inch motor to the new record. The vehicle and the record-setting attempt were backed by the VA250 Car Project, part of a national motorsports and engineering initiative commemorating the 250th birthday of the United States.
Arrington, a Franklin County native and the son of former NASCAR driver Buddy Arrington, stood beside the 300-foot-wide tarmac where 78 space shuttle landings took place and watched Hurley roar past on the record run.
“When you’re standing there, the engine’s screaming and it’s talking junk to you,” Arrington said. “The car is moving so fast. You can see it, but when it’s right in front of you, you’ve got to sort of wait for it to get past you and then you pick it up.
“Truly, it is so satisfying. When you accomplish something like that and you can visually see it, it makes noise, it makes your hair stand up.”
Hurley made two passes on the concrete surface. The first was basically a test run, hitting 226 mph.
“We went out and shook the car down, made sure everything worked for Tommy, and it did,” Arrington said. “Everybody just buckled down, got it turned around and went at it again.”
Hurley, who said his top speed on a one-eighth-mile drag strip has been 165 mph, went through the gearbox cleanly in a ride that took just under 1 minute. Based on in-car video, the Henry County native shifted out of first gear at 55 mph, reached third gear at 80 mph and hit fourth gear at 190 mph.
It took Hurley approximately 34 seconds to achieve the target speed of 250, topping out at 253 before he fired the twin parachutes from the rear of the car that slowed the rolling rocket. “It felt good,” Hurley said. “The car was real stable. I could have stopped on the last pass probably on brakes.”
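For the arithmetic-minded, those figures hang together. Here is a back-of-the-envelope sketch in Python; the constant-acceleration assumption is ours, purely for illustration, and not anything claimed in the reporting.

    # Rough check that the run fits on the 3-mile strip, assuming (purely
    # for illustration) constant acceleration up to top speed.
    MPH_TO_MS = 0.44704
    MILE_M = 1609.34

    top_speed = 253 * MPH_TO_MS   # ~113 m/s
    t_accel = 34.0                # seconds to reach the target speed

    accel = top_speed / t_accel           # ~3.3 m/s^2, about a third of a g
    dist = 0.5 * accel * t_accel ** 2     # ~1,920 m
    print(f"{accel:.1f} m/s^2, {dist / MILE_M:.2f} miles to top speed")
    # ~1.2 miles to reach top speed, leaving roughly 1.8 of the strip's
    # 3 miles for the parachutes and brakes.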
Arrington believes few attempts have been made in the 19 years since he helped Wicks, a West Coast driver known as “SpeedKing,” set the land speed mark in Utah.
“There’s so many different categories,” Arrington said. “You can change one thing and all of a sudden it moves you into a different class. It’s not anything people do as a sport week in, week out, but it is something that people do.”
Arrington expects the record to be certified by the International Hot Rod Association after officials receive data from the GPS tracking device that measured the speed.
The idea to make a run at the record originated from a conversation between Arrington and former Martinsville Mayor Danny Turner.
The pair pitched a proposal to Republican state Del. Terry Austin of Botetourt County, who is the chair of the VA250 Commission. The commission eventually contributed $50,000 to the project, which according to Turner had a $280,000 price tag. (Disclosure: The commission is one of our donors for our Cardinal 250 project, but donors have no say in news decisions. See our policy.)
Austin was on hand in Florida for Monday’s event along with former Chatham Mayor Will Pace, a member of Virginia’s Tobacco Region Revitalization Commission.
Danville’s Peyton Sellers was the original choice to drive the car, with the attempt scheduled for early January. The run was postponed, and it was later announced that former NASCAR Cup star Kyle Petty would be behind the wheel.
However, that deal never materialized so Arrington chose Hurley, a 1994 Magna Vista High School graduate who works as a mechanic at his father’s Ridgeway business, Hurley’s Auto Sales.
Putting the “Martinsville Missile” into the record was truly a Southside Virginia endeavor.
Arrington, 69, is a 1974 graduate of the old Laurel Park High School. Turner graduated from Martinsville High in 1974, twice organizing school events that earned inclusion in the “Guinness Book of World Records.”
Turner was diagnosed with Stage 4 cancer one year ago when the project began to move from an idea to reality in Arrington’s Martinsville shop, which is located in a former Sears store behind Liberty Fair Mall.
Fifty-two years is a long time between records.
“Back in the time when we were kids you’d always say, ‘What’s the next thing we’re going to break?’ And you had the world in front of you,” Turner said. “With the cancer diagnosis right when we started working on this, it was really a goal that I wanted to achieve, too. When you’re talking about how long you’ve got to live, let’s go ahead and do this while I’m still around.”
Arrington, Hurley and Turner were driving back to Virginia together Monday night.
Hurley planned to be at work Tuesday in his father’s shop after a long trip home.
Arrington is already contemplating a return to Florida for another record attempt.
Meanwhile, the VA250 car that Arrington transported to Florida in a Virginia-themed hauler was enjoying its own celebrity. It was scheduled to appear at a boat race on the other side of the Sunshine State in St. Petersburg.
“It’s going to go show itself off,” Arrington said.
The post ‘Martinsville Missile’ sets a new land speed record for a stock car appeared first on Cardinal News.
Legislators ponder a different way to tax data centers. Should that new way also be based on a locality’s economic status? [Cardinal News] (04:15 , Tuesday, 24 March 2026)
Once upon a time, as all good fairy tales begin, Virginia passed a law to persuade a goose that lays golden eggs to nest in the state. As an incentive, it offered that goose a generous tax break, which seemed easy enough at the time because the state had no geese and a goose that laid golden eggs seemed a pretty good deal. The state would lose nothing from what it had at the time, but it might wind up with a horde of golden eggs.
That’s exactly what happened. Virginia today has an entire flock of geese laying golden eggs, more than anywhere else, but not everybody is happy about that. Geese can be noisy, obnoxious neighbors who foul everything they touch, and how many golden eggs do you really need anyway? They’re piling up everywhere — on farms, next to battlefields, next to residential areas. Now, some in Virginia are thinking that maybe their tax break was too generous and should be revoked — except that some people who don’t have any geese are saying, “Hey, where’s our golden egg? You’ve got more than you want, but we don’t have any! Before you kill those geese, can’t we get at least a few of those golden eggs?”
Ideally, by now you’ve figured out that this column isn’t about surly, bad-tempered waterfowl but about data centers. In 2010, Virginia legislators took a tax break designed only to attract data centers to a very specific group of economically distressed counties and extended it statewide. The result has been spectacularly successful: Virginia is now the data center capital of the world, although we’ll likely soon lose that title to Texas. In the process, that tax abatement whose impact was originally listed as “unknown” is now known: The state forgoes $1.9 billion in taxes a year. Some see that as a giveaway, others a bargain, because it has to be balanced against the $9.1 billion in gross domestic product that data centers produce in Virginia each year.
All this has temporarily broken Virginia’s budget-making. Although both chambers of the General Assembly are controlled by the same party, the House and Senate budget writers disagreed so profoundly over data centers that the legislature adjourned without passing a budget.
The immediate issue is that tax abatement, which is scheduled to run through 2035. Senate Finance Committee Chair Louise Lucas, D-Portsmouth, wants to end it next year — eight years early. Others have warned that ending it early would set a dangerous precedent for other industries: that Virginia can’t be trusted to keep its word. A Danville-based economic development group, the Future of the Piedmont Foundation, issued a lengthy statement addressing that point.
Toward the end of the session, Gov. Abigail Spanberger floated the notion of a “consumption tax” — taxing data centers on the amount of electricity they use.
This is a fundamental shift in policy and is said to be one that the data center industry supports, although not a single data center representative I contacted was willing to talk about the issue — I suspect because this is all quite sensitive.
Across the country, 37 states have some sort of tax incentives to attract data centers, according to the National Conference of State Legislatures. While each of these varies, they are all generally about the same, according to a list compiled by Data Center Dynamics, in that they involve some kind of break on sales taxes or other taxes. The data centers are still paying taxes, just not the full rate — the lower rate being an incentive to locate in that state. No other state appears to have a consumption tax, and no details have been offered on what one in Virginia would look like, which means we don’t know exactly how this would work.
In the midst of this, a Republican legislator from rural Virginia has advanced a detailed version of how a consumption tax might work and how it could be used to spread the digital wealth in Virginia (my phrase, not his).

That legislator is Del. Wren Williams, R-Patrick County, who has put together an extensive fact sheet on how a consumption tax on data centers might work to benefit rural Virginia. To start with, he prefers not to call it a consumption tax, but a severance tax, similar to severance taxes on coal. The theory with the latter is that coal is a finite resource, so the severance tax serves as a license by which coal companies pay back a locality for removing that resource. Conceptually, Williams says Virginia should treat electricity as a resource, even though electricity can be generated anew, unlike coal. He’s not even sure this should be regarded as a tax. “It’s more of a license,” he says, “a business paying a fee to use our grid” — and then getting charged accordingly. If you use more, you pay a higher rate for the license.
All that’s more a matter of semantics and theory; the most interesting part, from a regional perspective, is that Williams says this charge — whether you call it a consumption tax or a severance tax or a license or a fee — should be based on a locality’s economic status.
Simply put, a data center in a prosperous county would pay more than a data center in an economically distressed county. Williams envisions three tiers, using the fiscal stress index developed by the Virginia Commission on Local Government, which is periodically updated as conditions change, and has sketched out examples of the localities that might qualify for each tier.
His rationale: This system might prompt data centers to move to localities where they’re more wanted — and needed, from a fiscal point of view. “I do think they’re not paying enough,” he says, but if Virginia wants data centers to pay their “fair share,” as the phrase goes, those rates should be used to help spur economic development in rural Virginia. “I think it’s a creative solution,” he says.
In the closing days of the session, Williams presented his idea to Senate Majority Leader Scott Surovell, D-Fairfax County (a sign of how legislators from different parties and different regions do talk to one another — even different chambers!).
Surovell told me it’s an “interesting idea.”
Sen. Creigh Deeds, D-Charlottesville, and one of the Senate’s budget negotiators, told me via text message: “It’s on the table as are other ideas.” He also cautioned that “costs would get passed on to customers,” as all costs eventually do.
Del. David Reid, D-Loudoun County, the legislator whose district has more data centers than any other, says it’s a concept that needs more research but points out that a few years ago there was a suggestion that, if Northern Virginia localities wanted to slow down data center growth, they should seek a change in the law that would have the full tax on data centers apply in that part of the state. Those localities opposed it vehemently, he says.
Reid wonders whether it’s legally and technically possible to structure a consumption tax so that it only hits data centers (whose popularity right now is low in some quarters) and doesn’t inadvertently bring in other high-energy users, such as, say, Newport News Shipbuilding.
He also wonders about legal issues with different tax rates, although other taxes already vary based on geography. That’s the whole concept behind “enterprise zones”: having a lower tax rate in some economically distressed areas as a way to attract jobs.
The idea of a tax break for data centers based on geography is not new at all. Virginia’s very first tax break for data centers, passed in 2008, was intended to apply only to Mecklenburg County but was expanded to include localities with an unemployment rate of 4.9% or higher. It was only later that the tax breaks were applied statewide, which helped set off the data center boom in Northern Virginia.
A consumption tax for data centers would be new, but a tiered system intended to encourage data centers to locate in economically distressed localities would take Virginia’s data center policies back to first principles.
The post Legislators ponder a different way to tax data centers. Should that new way also be based on a locality’s economic status? appeared first on Cardinal News.
AI tool built by UVA Wise student aims to help connect patients with health care [Cardinal News] (04:10 , Tuesday, 24 March 2026)

When Gurkan Akalin set out to find a teaching assistant for his graduate-level course in artificial intelligence and machine learning, he didn’t expect to find a qualified candidate among the incoming freshmen.
Peter Gaublomme had just arrived at the University of Virginia’s College at Wise from Arlington for his first semester when he applied for the teaching assistant position. Akalin found he had the necessary coding skills and an interest in working with AI. He also noted Gaublomme’s interest in the needs of Southwest Virginia.
“That’s just the type of person he is. He cares about people,” Akalin said.
Within his first few months of college, Gaublomme built Wise Care, an AI navigational tool designed to help residents find health care services, understand insurance and locate nearby providers.
The Lenowisco, Cumberland Plateau and Mount Rogers health districts report some of the highest mortality rates in Virginia. A 2022 report from the Virginia Department of Health found that residents often struggle to access care due to difficulty finding providers, limited transportation and limited options to improve health literacy.
Wise County is also federally designated as a Health Profession Shortage Area for dental, mental health and primary care. Even when services are available, some providers do not accept certain insurance plans.
Gaublomme said he created Wise Care to help residents navigate those barriers.
“I noticed that one of the most prevalent issues in the region was access to health care,” he said. “Many people in Southwest Virginia face these different challenges regarding navigating health care systems, especially just navigating how to identify specialists, insurance coverage and local services. So my intent behind building Wise Care was, how can I organize reliable regional information regarding health care?”
He built the tool using technology from OpenAI, the company behind ChatGPT. Users can enter questions, and the chatbot generates responses to guide them toward services.
The program offers step-by-step suggestions. For example, when asked how someone without transportation can reach the Health Wagon, a free clinic in Wise that offers some mobile services, it explains how to access Medicaid transportation benefits and suggests calling the facility directly for help.
To ensure accuracy, Gaublomme designed the system to cite its sources with each response. He also programmed it to rely only on existing information rather than generate new content.
“I think creating information is a big cause for concern where AI is concerned, especially as AI improves. I’ve done my best, I’ve done all that I can to make sure this information is accurate,” Gaublomme said.
Through Akalin’s classes, Gaublomme has also come to understand the ethical problems of data collection in an AI-powered tool, especially in health care.
“You can have good intent, but is it typical to have access [to information] like this, especially in health care? That’s a big item that we are discussing in my class,” Akalin said. “There is no definite answer.”
University leaders also raised questions about data privacy. Gaublomme said Wise Care does not store personal health information, user inputs or responses. The system only retains the sources it uses to generate answers.
This year, Wise Care was selected as a semifinalist in the UVa Entrepreneurship Cup, placing in the top 40 of more than 300 student projects across multiple UVa departments in both Charlottesville and Wise. It was one of two ventures selected from UVA Wise.
Gaublomme said he hopes to continue building tools that benefit the region.
“I’m very proud to be a part of that,” he said.
The tool is available to the public on the Wise Care website, but Gaublomme said usage has been limited so far.
Akalin said integrating Wise Care into existing health systems could expand its impact, but doing so would require agreements with individual providers. Ongoing concerns about data privacy could make that process challenging.
Like many early-stage tools, Wise Care will likely evolve through user feedback.
“With this type of product, it’s never the best product at once. So if you can keep improving the product based on feedback, then it’s going to be a really good product,” Akalin said.
Gaublomme is set to transfer to the main UVa campus in Charlottesville next year, but he said his commitment to Southwest Virginia will continue.
“Moving from an urban area to a more rural area, this was definitely a shift for me. Now that I’ve become more accustomed to it, I’ve found a deep appreciation for the area. While I’m here I want to do as much as I can and be a positive influence,” Gaublomme said.
The post AI tool built by UVA Wise student aims to help connect patients with health care appeared first on Cardinal News.
Trump Administration Tries To Rein In RFK Jr. As A Midterms Liability [Techdirt] (11:25 , Monday, 23 March 2026)
I’ve obviously talked a great deal about how RFK Jr. and his activity as the Secretary of HHS have been a massive health liability for the American public. The implementation of his batshit anti-vaxxer stances has, of course, grabbed most of the headlines here, especially given the recent pushback he received from the courts, but it’s also worth noting the other craziness he’s spouted at the same time. He co-signed Trump’s nonsense about Tylenol giving all the kids autism. He’s overseen the worst measles outbreak in America in several decades. It seems likely he lied to Congress about his “work” in Samoa. He has vindictively revoked grant funding from groups that disagree with him on public health matters. He’s very interested in teenager sperm counts. He once took his grandkids swimming in a river known to be filthy with human waste.
It’s bad for the health of America. The Trump administration hasn’t really seemed to care all that much about that fact, of course, but it certainly does care about retaining power through the midterms. To that end, it seems the White House has finally woken up to the idea that most Americans hate what Kennedy and HHS are doing and has decided to pare back his activity because it’s a political liability.
The White House has taken steps to assert tighter control over HHS amid leadership and messaging changes tied to concerns that department Secretary Robert Kennedy Jr.’s focus on vaccine policy could pose political risks heading into the 2026 midterm elections, The Wall Street Journal reported March 13.
While Mr. Kennedy remains in good standing with President Donald Trump, administration aides have grown frustrated with what they described as disorganization and missteps inside HHS, according to the report. Among them: a delayed response to a measles outbreak in Texas, backlash over mental health grant cuts and internal tension surrounding the FDA’s approval of a generic abortion pill.
We somehow are not at a place yet where the Trump administration realizes that they put a loon in charge of public health and are looking at making a leadership change. But they can read the polling as well as I can and they damned well know that the majority of America is not happy with Kennedy’s performance generally, and especially unhappy with his anti-vaxxer bullshit. To that end, the White House is making several moves to try to steady the waters and keep Kennedy and HHS out of the headlines.
Basically, it looks like they’re trying to provide a bit more adult supervision, moving Chris Klomp up from managing Medicare to managing Kennedy… er… being Kennedy’s deputy, while moving Peter Thiel’s former right-hand man, Jim O’Neill, out of his HHS Deputy Secretary role and over to the FDA, where there’s hope he can “reduce internal friction.”
The problem is that Captain Brain Worm remains at the top of all of this. Trump and his advisers know the country doesn’t like what HHS has done. They see the chaos, the resignations, and the bullshit that gets spewed out in press conferences and courtrooms alike. It would be nice if the government did this for reasons having to do with the American people rather than for its own political ramifications, but I suppose I’ll take what I can get under the circumstances.
Congress Is Dropping The Ball With A Clean Extension Of FISA [Techdirt] (06:35 , Monday, 23 March 2026)
Two years ago, Congress passed the “Reforming Intelligence and Securing America” Act (RISAA) that included nominal reforms to Section 702 of the Foreign Intelligence Surveillance Act (FISA). The bill unfortunately included some problematic expansions of the law—but it also included a relatively big victory for civil liberties advocates: Section 702 authorities were only extended for two years, allowing Congress to continue the important work of negotiating a warrant requirement for Americans as well as some other critical reforms.
However, Congress clearly did not continue this work. In fact, it now appears that Congress is poised to consider another extension of this program without even attempting to include necessary and common sense reforms. Most notably, Congress is not considering a requirement to obtain a warrant before looking at data on U.S. persons that was indiscriminately and warrantlessly collected. House Speaker Mike Johnson confirmed that “the plan is to move a clean extension of FISA … for at least 18 months.”
Even more disappointing, House Judiciary Chair Jim Jordan, who has previously been a champion of both the warrant requirement and closing the data broker loophole, told the press he would vote for a clean extension of FISA, claiming that RISAA included enough reforms for the moment.
It’s important to note RISAA was just a reauthorization of this mass surveillance program with a long history of abuse. Prior to the 2024 reauthorization, Section 702 was already misused to run improper queries on peaceful protesters, federal and state lawmakers, Congressional staff, thousands of campaign donors, journalists, and a judge reporting civil rights violations by local police. RISAA further expanded the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. As we said when it passed, overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.
Section 702 should not be reauthorized without any additional safeguards or oversight. Fortunately, there are currently three reform bills for Congress to consider: SAFE, PLEWSA, and GSRA. While none of these bills are perfect, they are all significantly better than the status quo, and should be considered instead of a bill that attempts no reform at all.
Mass spying—accessing a massive amount of communications by and with Americans first and sorting out targets second and secretly—has always been a problem for our rights. It was a problem at first when President George W. Bush authorized it in secret without Congressional or court oversight. And it remained a problem even after the passage of Section 702 in 2008 created the possibility of some oversight. Congress was right that this surveillance is dangerous, and that’s why it set Section 702 up for regular reconsideration. That reconsideration has not occurred, even as the circumstances of the NSA, Justice Department, and FBI leadership have radically changed. Reform is long overdue, and now it’s urgent.
Republished from the EFF’s Deeplinks blog.
Pomera DM250 Tinkering [joshua stein] (06:01 , Monday, 23 March 2026)
The KING JIM Pomera DM250 "digital typewriter" is a small Linux-powered ARM computer that boots up into a custom word processor application. I've been tinkering with it to try to get OpenBSD booted on it. I'd normally wait until the end and write up a proper article explaining everything, but this process is taking a lot longer than I expected so I figured I'd document it all as I go.

KING JIM has made a number of portable word processors, starting with the DM5, the DM10 and DM20 with fold-out keyboards, then the DM100 and DM200, which share their form factor with the latest DM250.
I only know of KING JIM because stsp@ has their Portabook x86 machine that has required a handful of tweaks to get OpenBSD working on it.
The DM250 was only sold in Japan, but the manufacturer recently ran an Indiegogo campaign for a US version ("DM250US") with an ANSI keyboard layout and software that defaults to English (the Japanese model has English support in its software and can use the keyboard in English, though with a slightly different layout). I learned about this on the writerDeck subreddit which I subscribe to for some reason.
The unit measures 10.35" wide by 4.72" deep, and is 0.7" thick when closed. It has a 7" 1024x600 full-color TFT LCD screen, though the DM250's custom word processing software only uses black and white. It weighs 1.4 lbs and has a soft-touch rubber coating on its case. The DM250US now has a US key layout, though the arrow keys were unfortunately moved from an inverted "T" arrangement on the Japanese DM250 to a horizontal row.
The DM250 is powered by a Rockchip RK3128 quad-core ARM Cortex-A7 processor with 1GB of RAM and about 8GB of eMMC storage. It has a full-size SD card slot and USB-C for charging. It has an AMPAK AP6236 Wi-Fi and Bluetooth SDIO chip which is based on the Broadcom BCM43436.
I backed the Indiegogo campaign on February 19th and used Buyee to buy a Japanese model DM250 while I waited for the US campaign to end and for mine to ship out. The Japanese DM250 arrived on the 13th and with the aid of this website, I was able to boot into a Debian build and start inspecting how the device worked. I also took backups of the eMMC flash to be able to recover to it if I screw things up.
I haven't really been interested in the random armv7 boards that run OpenBSD because they all seemed to be similar while also each having quirks that make them unusable for daily use due to lacking driver support or cheap hardware. The DM250 appealed to me because it was a complete computer with keyboard and screen, not just a lone board with an ethernet port. (Although I'm sure I will eventually come up short on complete driver support on this machine too.)
It can turn on "instantly" due to some proprietary software called "LINEOWarp" which integrates into u-boot and the Linux kernel and basically hibernates the machine after booting and writes out its RAM to the eMMC. Upon opening the lid, u-boot directly reads the WARP image and loads it into RAM, bypassing the Linux kernel boot process. I first heard about this type of software from dosdude1's Honda infotainment video which has a similar need for "instant on".
Getting OpenBSD loaded will require updating u-boot on the DM250 to a newer release with EFI support: EFI support landed in u-boot in 2015, but the DM250 ships a build from 2014.
But I can't really mess with u-boot until I get access to the UART on the device and I haven't been able to find the UART pins. I tried booting to Linux and printing random garbage to the serial port while I probed every pin on the board with my Saleae looking for serial data. For some reason nothing came out anywhere.
Eventually I found this page which shows where the UART pins are, which I definitely probed and found nothing while Debian was running. But once I kept leads on those pins while powering on, I could see u-boot output. Now I can actually see what's going on.
U-Boot 2014.10-RK3128-06 (Mar 17 2022 - 14:28:55)
CPU: rk3128
CPU's clock information:
arm pll = 816000000HZ
periph pll = 594000000HZ
ddr pll = 600000000HZ
codec pll = 400000000HZ
Board: Rockchip platform Board
Uboot as second level loader
DRAM: Found dram banks:1
Adding bank:0000000060000000(0000000040000000)
512 MiB
[...]
I'm not sure why u-boot shows 512 MB of RAM there when the DM250 has 1 GB, especially when that bank output shows a size of 0x40000000 (1,073,741,824 bytes).
While trying to solder wires to the UART pins, I damaged one of the pads :/ The device still works otherwise so I'll just sell this one and wait for my US model to arrive.
I learned that Rockchip SoCs have a neat feature where if the firmware fails to load a bootloader from eMMC or SDMMC, it will automatically launch into a "MaskROM" mode where it becomes a ugen device over its USB-C cable and allows the attached computer to directly read and write data to the eMMC. This way the device can never really be bricked, which makes me more confident testing u-boot changes.
This MaskROM mode works even before SDRAM is initialized, so the first thing that has to be done is sending it a RAM training blob, then a more complete usbplug blob which allows more complicated commands over USB. This can be done with rkflashtool or xrock, which both work on OpenBSD.
$ doas xrock maskrom rk3128_ddr_300MHz_v2.12.bin rk3128_usbplug_v2.63.bin
After uploading the blobs, the device detaches and reattaches into its USB loader mode:
ugen0 at uhub3 port 3 "vendor 0x2207 product 0x310c" rev 2.00/1.00 addr 9
ugen0 detached
ugen0 at uhub3 port 3 "RockChip USB-MSC" rev 2.00/1.00 addr 9
If the flashed u-boot does boot but it's broken, one can short the eMMC to ground while the board is being powered on and force it into MaskROM mode. On the DM250, this can be done by shorting TP501 to ground.
My DM250US arrived. A quick teardown shows it's basically the same hardware but with a different version silkscreened.


The keyboard keys feel slightly smaller and rougher in texture. u-boot appears to be the same version but the build date is newer:
U-Boot 2014.10-RK3128-06 (Oct 07 2024 - 17:22:56)
The kernel is still Linux 3.10.0 with WARP patches. The DTB stored on the eMMC is mostly the same but with these additions:
bq27z558-battery@55 {
        compatible = "ti,bq27z561";
        reg = <0x55>;
        gpios = <0x76 0x12 0x01 0x75 0x0d 0x01>;
        status = "okay";
};
bq256xx-charger@6b {
        compatible = "ti,bq25620";
        reg = <0x6b>;
        gpios = <0x76 0x11 0x01>;
        ti,watchdog-timeout-ms = <0x00>;
        charge-current-limit-microamp = <0x2bf200>;
        charge-voltage-limit-microvolt = <0x408b70>;
        input-current-limit-microamp = <0x2dc6c0>;
        minimal-system-voltage-microvolt = "\0.c";
        pre-charge-control-microamp = <0x8d9a0>;
        termination-control-microamp = <0x249f0>;
        ti,no-thermistor = <0x01>;
        status = "okay";
};
I got this pogo-pin clip from Adafruit to access the UART pins without having to solder to them and potentially damage them again. It's definitely made it much easier to reliably access the UART across multiple reboots.
I've been trying to get different u-boot trees compiling and booting but none were working except the one from KING JIM. I tried rockchip-linux/u-boot and linux-rockchip/u-boot-rockchip but neither boots (or at least neither outputs anything to uart1).
Geniatech makes the XPI-3128, which is basically the rk3128-evb evaluation board that exists in u-boot. While digging around their documentation, I found this huge tarball that includes a snapshot of their u-boot tree, which is based on the newer 2017.09 and has most of the necessary Rockchip drivers. I'm not sure why Rockchip is so special that they can't do everything in the official upstream u-boot tree…
With a few changes to build with a newer gcc, setting CONFIG_DEBUG_UART_BASE to 0x20064000 (uart1 instead of uart2), adding some custom uart initialization code to arch/arm/mach-rockchip/rk3128/rk3128.c, and adding an rk3128-specific timer driver, I now have a working build of u-boot that has EFI support!
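For a sense of what that custom initialization involves: the RK3128's UARTs are ns16550-compatible with a 32-bit register stride, so an early uart1 init looks roughly like the sketch below. This is just an illustration of the shape of it, not the actual code added to rk3128.c, and the 24 MHz UART clock and 115200 baud figures are assumptions I haven't verified against the DM250's clock tree.
#include <stdint.h>

#define UART1_BASE 0x20064000UL /* matches CONFIG_DEBUG_UART_BASE, i.e. uart1 */

static void uart1_early_init(void)
{
        /* 32-bit register stride: index 0 = THR/DLL, 1 = IER/DLM, 2 = FCR, 3 = LCR */
        volatile uint32_t *u = (volatile uint32_t *)UART1_BASE;

        u[3] = 0x83; /* LCR: 8n1, DLAB=1 to expose the divisor latch */
        u[0] = 13;   /* DLL: 24000000 / (16 * 115200) is roughly 13 */
        u[1] = 0;    /* DLM: divisor high byte */
        u[3] = 0x03; /* LCR: 8n1, DLAB=0 again */
        u[2] = 0x01; /* FCR: enable the FIFOs */
}
The actual build, with all of those changes in place, comes up like this: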
U-Boot 2017.09-g5d36672-dirty (Apr 12 2025 - 12:31:04 -0500)
Model: KING JIM Pomera DM250US
DRAM: 512 MiB
Sysmem: init
Relocation Offset: 00000000, fdt: 00000000
Using default environment
dwmmc@10214000: 1, dwmmc@1021c000: 0
mmc_init: err -110, timer:41969
switch to partitions #0, OK
mmc0(part 0) is current device
Bootdev: mmc 0
MMC0: High Speed, 52Mhz
PartType: RKPARM
rockchip_get_boot_mode: Could not found misc partition
boot mode: normal
Found DTB in resource part
DTB: rk-kernel.dtb
CLK: (uboot. arm: enter 600000 KHz, init 600000 KHz, kernel 0N/A)
apll 600000 KHz
dpll 600000 KHz
cpll 400000 KHz
gpll 594000 KHz
armclk 600000 KHz
aclk_cpu 148500 KHz
hclk_cpu 74250 KHz
pclk_cpu 74250 KHz
aclk_peri 148500 KHz
hclk_peri 74250 KHz
pclk_peri 74250 KHz
=> mmcinfo
Device: dwmmc@1021c000
Manufacturer ID: 11
OEM: 100
Name: 008GB
Timing Interface: High Speed
Tran Speed: 52000000
Rd Block Len: 512
MMC version 5.1
High Capacity: Yes
Capacity: 7.3 GiB
Bus Width: 8-bit
Erase Group Size: 512 KiB
HC WP Group Size: 4 MiB
User Capacity: 7.3 GiB WRREL
Boot Capacity: 4 MiB ENH
RPMB Capacity: 4 MiB ENH
=> bootefi
bootefi - Boots an EFI payload from memory
Usage:
bootefi <image address> [fdt address]
- boot EFI payload stored at address <image address>.
If specified, the device tree located at <fdt address> gets
exposed as EFI configuration table.
Unfortunately only the eMMC (dwmmc@1021c000) is working but the probe of the SDMMC device at dwmmc@10214000 times out. This means I can't see an inserted SD card and begin to boot OpenBSD's EFI loader.
I think this has to do with the device not being powered up at boot. I'm still trying to figure out what is required for this to work since it works in other older Rockchip-specific u-boot trees.
Success!
U-Boot 2017.09-gcc6b241-dirty (Apr 14 2025 - 17:20:37 -0500)
Model: KING JIM Pomera DM250US
DRAM: 512 MiB
Sysmem: init
Relocation Offset: 00000000, fdt: 00000000
Using default environment
dwmmc@10214000: 1, dwmmc@1021c000: 0
RKPARM: Invalid parameter part table
switch to partitions #0, OK
mmc1 is current device
switch to partitions #0, OK
mmc0(part 0) is current device
Bootdev: mmc 0
MMC0: High Speed, 52Mhz
PartType: RKPARM
rockchip_get_boot_mode: Could not found misc partition
boot mode: normal
Found DTB in resource part
DTB: rk-kernel.dtb
In: serial
Out: serial
Err: serial
CLK: (uboot. arm: enter 600000 KHz, init 600000 KHz, kernel 0N/A)
apll 600000 KHz
dpll 600000 KHz
cpll 400000 KHz
gpll 594000 KHz
armclk 600000 KHz
aclk_cpu 148500 KHz
hclk_cpu 74250 KHz
pclk_cpu 74250 KHz
aclk_peri 148500 KHz
hclk_peri 74250 KHz
pclk_peri 74250 KHz
=> setenv fdtfile rk3128-pomera-dm250us.dtb
=> load mmc 1 ${kernel_addr_r} efi/boot/bootarm.efi
reading efi/boot/bootarm.efi
119296 bytes read in 51 ms (2.2 MiB/s)
=> bootefi ${kernel_addr_r} ${fdt_addr_r}
## Starting EFI application at 62008000 ...
FtlInit fffffffe
Scanning disk nandc@10500000.blk...
rkparm_init_param_from_storage param read fail
RKPARM: Invalid parameter part table
Scanning disk dwmmc@10214000.blk...
Scanning disk dwmmc@1021c000.blk...
Scanning disk rksdmmc@1021c000.blk...
rkparm_init_param_from_storage param read fail
RKPARM: Invalid parameter part table
Scanning disk rksdmmc@10214000.blk...
rkparm_init_param_from_storage param read fail
RKPARM: Invalid parameter part table
Scanning disk rksdmmc@10218000.blk...
rkparm_init_param_from_storage param read fail
RKPARM: Invalid parameter part table
Found 6 disks
Adding bank: 0x60000000 - 0x80000000 (size: 0x20000000)
disks: sd0* sd1 sd2 sd3 sd4 sd5 sd6 sd7 sd8 sd9 sd10 sd11 sd12 sd13 sd14 sd15 sd16 sd17 sd18 sd19 sd20 sd21 sd22 sd23 sd24 sd25 sd26 sd27 sd28 sd29 sd30
>> OpenBSD/armv7 BOOTARM 1.23
boot>
cannot open sd0a:/etc/random.seed: No such file or directory
booting sd0a:/bsd: 2411324+767888+11506208+484492 [188357+107+388448+214048]=0x0
I added some debug printfs to the working u-boot tree and saw that it was calling rk_iomux_config(RK_UART2_IOMUX) when initializing the storage. That ends up calling rk_uart_iomux_config(), which does some magic writes to the IOMUX.
Reading the GRF documentation and other pieces of code, I learned that GPIO1B needs pins 12 and 14 enabled to activate mmc0_pwren and mmc0_cmd, and GPIO1C needs pins 0, 2, 4, 6, 8, and 10 enabled to change them from JTAG and UART2 pins to those needed for eMMC. With that, the SD card is recognized and u-boot can read files from it with its built-in FAT filesystem support.
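In code, those pokes boil down to something like the following minimal sketch. It assumes the usual Rockchip GRF convention where the upper 16 bits of each iomux register act as a write-enable mask for the corresponding lower bits; the GRF base address and register offsets here are placeholders rather than values checked against the RK3128 TRM.
#include <stdint.h>

#define GRF_BASE         0x20008000UL /* assumed RK3128 GRF base */
#define GRF_GPIO1B_IOMUX 0xb8UL       /* placeholder offset */
#define GRF_GPIO1C_IOMUX 0xbcUL       /* placeholder offset */

static void grf_set_bits(uint32_t off, uint32_t bits)
{
        volatile uint32_t *reg = (volatile uint32_t *)(GRF_BASE + off);

        /* the high half write-enables the same bits being set in the low half */
        *reg = (bits << 16) | bits;
}

static void dm250_mmc_iomux(void)
{
        /* GPIO1B: bits 12 and 14 turn on mmc0_pwren and mmc0_cmd */
        grf_set_bits(GRF_GPIO1B_IOMUX, (1U << 12) | (1U << 14));

        /* GPIO1C: bits 0, 2, 4, 6, 8, and 10 move the JTAG/UART2 pads to MMC duty */
        grf_set_bits(GRF_GPIO1C_IOMUX, 0x0555);
}
The write-mask convention means no read-modify-write is needed, which is handy this early in boot.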
The existing config on the eMMC splits up the single drive into many different partitions like kernel, warp, ro_data, etc., which each show up as separate disks to the EFI loader.
The EFI loader is read from the SD card and loaded into memory with load mmc 1 ${kernel_addr_r} efi/boot/bootarm.efi, and then executed with bootefi ${kernel_addr_r} ${fdt_addr_r}.
OpenBSD's BOOTARM.EFI loads successfully and can list files on the SD card and start reading and booting bsd.rd. Unfortunately it goes off into lala land there so I'm not sure what it's doing, but at least now I can move on to the OpenBSD part of this bringup.
I've pushed my U-Boot tree to GitHub as it seems to be in a good state now. I split up my changes specific to the rk3128 and then added a specific board config for the DM250. Eventually this will need some work to enable the LVDS LCD at boot time like it was with the factory U-Boot.
I added a uart_putc helper to OpenBSD's armv7 locore0.S to print characters to the serial port, and then added calls to it along the boot path to see how far it got.
.globl uart_putc /* send r1 to uart */
uart_putc:
        ldr r0, =0x20064000
        str r1, [r0]
        ldr r2, =0x20064000 + 0x7c /* UART_USR */
check_usr:
        ldr r3, [r2]
        tst r3, #(1<<1) /* UART_TRANSMIT_FIFO_NOT_FULL */
        beq check_usr
        bx lr
[...]
start_mmu:
        mov r1, #'1'
        bl uart_putc
[...]
        mov r1, #'2'
        bl uart_putc
        /* Enable MMU */
        mrc CP15_SCTLR(r0)
        orr r0, r0, #CPU_CONTROL_MMU_ENABLE
        mcr CP15_SCTLR(r0)
        isb
        mov r1, #'3'
        bl uart_putc
This showed it was getting to start_mmu, but as soon as it wrote the SCTLR register to enable the MMU, it stopped outputting.
Mark pointed out that this was because there was no mapping in the MMU page table to continue accessing the UART at 0x20064000.
I added an entry for it:
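/* one 1MB L1 section mapped at 0x20000000 covers the UART at 0x20064000 */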
MMU_INIT(0x20000000, 0x20000000, 1,
L1_TYPE_S|L1_S_V7_AP(AP_KRW)|L1_S_V7_AF)
But it still wasn't printing '3'.
After a few hours of debugging and reading more docs, I finally realized that my dumb uart_putc function was clobbering r0 and r1, which were being used inside of start_mmu, so the page table wasn't getting set up right. By changing it to just a few inline instructions with no FIFO status checking and using registers that weren't in use, it could enable the MMU properly and get to '3' and beyond:
        ldr r4, =0x20064000
        mov r5, #'3'
        str r5, [r4]
Eventually, with some more tweaks to the DTB passed from U-Boot to the EFI loader and to the kernel, the kernel could properly print to the chosen stdout-path and get to copyright.
Since it is able to do this through the normal com_fdt_init_cons routine in dev/fdt/com_fdt.c, which does a bus_space_map, I could remove all of my debugging from locore0 and then remove my custom UART page table entry.
It can now get to copyright with no kernel changes:
disks: sd0* sd1 sd2
>> OpenBSD/armv7 BOOTARM 1.23
boot> b bsd.arm
cannot open sd0a:/etc/random.seed: No such file or directory
booting sd0a:bsd.arm: 4910236+1012484+138796+608784
[2789902+360416+184+330342]=0x0
OpenBSD/armv7 booting ...
arg0 0xc0caf850 arg1 0x0 arg2 0x9ac83000
Allocating page tables
IRQ stack: p0x60cde000 v0xc0cde000
ABT stack: p0x60cdf000 v0xc0cdf000
UND stack: p0x60ce0000 v0xc0ce0000
SVC stack: p0x60ce1000 v0xc0ce1000
Creating L1 page table at 0x60cb0000
Mapping kernel
Constructing L2 page tables
undefined page type 0x2 pa 0x60000000 va 0x60000000 pages 0x2000 attr 0x8
type 0x7 pa 0x62000000 va 0x60000000 pages 0x6000 attr 0x8
type 0x4 pa 0x68000000 va 0x68000000 pages 0x7 attr 0x8
type 0x7 pa 0x68008000 va 0x60000000 pages 0x32c7b attr 0x8
type 0x2 pa 0x9ac83000 va 0x9ac83000 pages 0x7 attr 0x8
type 0x7 pa 0x9ac8a000 va 0x9ac8a000 pages 0x4 attr 0x8
type 0x7 pa 0x9ac8e000 va 0x9ac8e000 pages 0x2 attr 0x8
type 0x7 pa 0x9ac90000 va 0x9ac90000 pages 0x1 attr 0x8
type 0x2 pa 0x9ac91000 va 0x9ac91000 pages 0x100 attr 0x8
type 0x2 pa 0x9ad91000 va 0x9ad91000 pages 0x1e attr 0x8
type 0x6 pa 0x9adaf000 va 0x9adaf000 pages 0x1 attr 0x8000000000000008
type 0x0 pa 0x9adb0000 va 0x9adb0000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb1000 va 0x9adb1000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb2000 va 0x9adb2000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb3000 va 0x9adb3000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb4000 va 0x9adb4000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb5000 va 0x9adb5000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb6000 va 0x9adb6000 pages 0x1 attr 0x8
type 0x2 pa 0x9adb7000 va 0x9adb7000 pages 0x308c attr 0x8
type 0x5 pa 0x9de43000 va 0x9de43000 pages 0x1 attr 0x8000000000000008
type 0x2 pa 0x9de44000 va 0x9adb7000 pages 0x21bc attr 0x8
pmap [ using 3481620 bytes of bsd ELF symbol table ]
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California. All rights reserved.
Copyright (c) 1995-2025 OpenBSD. All rights reserved. https://www.OpenBSD.org
Now I just need to figure out how far into init_main.c it's getting and why it
hangs after printing the copyright line.
With some instrumentation I figured out that the kernel was getting as far as
setting up the page tables for the MMU and would then lock up when doing a
memset on the newly mapped memory.
By reducing the amount of memory used, I could get it to fully boot the kernel
to !cold, but it crashes in userland:
U-Boot 2017.09-g9333465-dirty (Apr 21 2025 - 13:38:25 -0500)
Model: KING JIM Pomera DM250
DRAM: 1 GiB
Sysmem: init
Relocation Offset: 3ddc2000, fdt: 00000000
Using default environment
Failed to load DTB
Failed to get kernel dtb, ret=-1
In: serial
Out: serial
Err: serial
Model: KING JIM Pomera DM250
dwmmc@10214000: 1, dwmmc@1021c000: 0
switch to partitions #0, OK
mmc1 is current device
switch to partitions #0, OK
mmc0(part 0) is current device
Bootdev: mmc 0
MMC0: High Speed, 52Mhz
## Unknown partition table type 0
PartType: <NULL>
rockchip_get_boot_mode: Could not found misc partition
boot mode: normal
CLK: (uboot. arm: enter 600000 KHz, init 600000 KHz, kernel 0N/A)
apll 600000 KHz
dpll 600000 KHz
cpll 400000 KHz
gpll 594000 KHz
armclk 600000 KHz
aclk_cpu 148500 KHz
hclk_cpu 74250 KHz
pclk_cpu 74250 KHz
aclk_peri 148500 KHz
hclk_peri 74250 KHz
pclk_peri 74250 KHz
Hit key to stop autoboot('CTRL+C'): 0
switch to partitions #0, OK
mmc1 is current device
Scanning mmc 1:1...
reading /kingjim-dm250.dtb
23239 bytes read in 6 ms (3.7 MiB/s)
Found EFI removable media binary efi/boot/bootarm.efi
reading efi/boot/bootarm.efi
119564 bytes read in 16 ms (7.1 MiB/s)
## Starting EFI application at 62008000 ...
FtlInit fffffffe
Scanning disk nandc@10500000.blk...
Scanning disk dwmmc@10214000.blk...
Scanning disk dwmmc@1021c000.blk...
Found 3 disks
Adding bank: 0x60000000 - 0xa0000000 (size: 0x40000000)
disks: sd0* sd1 sd2 sd3
>> OpenBSD/armv7 BOOTARM 1.23
boot> b sd0a:/bsd.rd
cannot open sd0a:/etc/random.seed: No such file or directory
booting sd0a:/bsd.rd: 4916868+1014156+16731272+608976
[2791939+360736+184+330515]=0x0
OpenBSD/armv7 booting ...
arg0 0xc1c8514c arg1 0x0 arg2 0x9ac82000
Allocating page tables
IRQ stack: p0x61cb4000 v0xc1cb4000
ABT stack: p0x61cb5000 v0xc1cb5000
UND stack: p0x61cb6000 v0xc1cb6000
SVC stack: p0x61cb7000 v0xc1cb7000
Creating L1 page table at 0x61c88000
Mapping kernel
Constructing L2 page tables
undefined page type 0x2 pa 0x60000000 va 0x60000000 pages 0x2000 attr 0x8
type 0x7 pa 0x62000000 va 0x60000000 pages 0x6000 attr 0x8
initarm: added 24576 pages at 0x62000000, physmem now 32768
type 0x4 pa 0x68000000 va 0x68000000 pages 0x7 attr 0x8
type 0x7 pa 0x68008000 va 0x60000000 pages 0x32c7a attr 0x8
initarm: added 103997 pages at 0x68008000, physmem now 136765
type 0x2 pa 0x9ac82000 va 0x9ac82000 pages 0x7 attr 0x8
type 0x7 pa 0x9ac89000 va 0x9ac89000 pages 0x4 attr 0x8
type 0x7 pa 0x9ac8d000 va 0x9ac8d000 pages 0x2 attr 0x8
type 0x7 pa 0x9ac8f000 va 0x9ac8f000 pages 0x1 attr 0x8
type 0x2 pa 0x9ac90000 va 0x9ac90000 pages 0x100 attr 0x8
type 0x2 pa 0x9ad90000 va 0x9ad90000 pages 0x1e attr 0x8
type 0x6 pa 0x9adae000 va 0x9adae000 pages 0x1 attr 0x8000000000000008
type 0x0 pa 0x9adaf000 va 0x9adaf000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb0000 va 0x9adb0000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb1000 va 0x9adb1000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb2000 va 0x9adb2000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb3000 va 0x9adb3000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb4000 va 0x9adb4000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb5000 va 0x9adb5000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb6000 va 0x9adb6000 pages 0x1 attr 0x8
type 0x2 pa 0x9adb7000 va 0x9adb7000 pages 0x308c attr 0x8
type 0x5 pa 0x9de43000 va 0x9de43000 pages 0x1 attr 0x8000000000000008
type 0x2 pa 0x9de44000 va 0x9adb7000 pages 0x21bc attr 0x8
pmap [ using 3484148 bytes of bsd ELF symbol table ]
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California. All rights reserved.
Copyright (c) 1995-2025 OpenBSD. All rights reserved. https://www.OpenBSD.org
OpenBSD 7.7 (obj.amd64.armv7) #113: Fri Apr 18 11:15:57 CDT 2025
jcs@nano.jcs.org:/usr/src/sys/arch/armv7/compile/GENERIC/obj.amd64.armv7
real mem = 560189440 (534MB)
avail mem = 520486912 (496MB)
random: boothowto does not indicate good seed
mainbus0 at root: KING JIM Pomera DM250
cortex0 at mainbus0
psci0 at mainbus0: PSCI 0.0
syscon0 at mainbus0: can't map registers
syscon1 at mainbus0: "syscon"
ampintc0 at mainbus0 nirq 160, ncpu 4: "interrupt-controller"
syscon2 at mainbus0: "syscon"
agtimer0 at mainbus0: 24000 kHz
agtimer1 at mainbus0: 24000 kHz
com0 at mainbus0: dw16550, 64 byte fifo
com0: probed fifo depth: 0 bytes
com1 at mainbus0: dw16550
com1: console
com2 at mainbus0: dw16550
ehci0 at mainbus0
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "Generic EHCI root hub" rev 2.00/1.00
addr 1
ohci0 at mainbus0: version 1.0
dwmmc0 at mainbus0: 18 MHz base clock
sdmmc0 at dwmmc0: 4-bit, dma
dwmmc1 at mainbus0: 25 MHz base clock
sdmmc1 at dwmmc1: 8-bit, dma
rkiic0 at mainbus0
iic0 at rkiic0
"rockchip,rk818" at iic0 addr 0x1c not configured
rkiic1 at mainbus0
iic1 at rkiic1
pcxrtc0 at iic1 addr 0x51pcxrtc0: pcxrtc_reg_read: failed to read reg0
pcxrtc0: pcxrtc_reg_write: failed to write reg0
pcxrtc0: pcxrtc_reg_read: failed to read reg2
: battery ok
rkiic2 at mainbus0
iic2 at rkiic2
rkiic3 at mainbus0
iic3 at rkiic3
usb1 at ohci0: USB revision 1.0
uhub1 at usb1 configuration 1 interface 0 "Generic OHCI root hub" rev 1.00/1.00
addr 1
scsibus0 at sdmmc0: 2 targets, initiator 0
sd0 at scsibus0 targ 1 lun 0: <Sandisk, SL32G, 0080> removable
sd0: 30436MB, 512 bytes/sector, 62333952 sectors
scsibus1 at sdmmc1: 2 targets, initiator 0
sd1 at scsibus1 targ 1 lun 0: <Toshiba, 008GB1, 0000> removable
sd1: 7456MB, 512 bytes/sector, 15269888 sectors
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
bootfile: sd0a:/bsd
boot device: sd0
root on rd0a swap on rd0b dump on rd0b
pcxrtc0: pcxrtc_clock_read: failed to read rtc
WARNING: bad clock chip time
WARNING: CHECK AND RESET THE DATE!
Fatal kernel mode prefetch abort at 0x00000000
trapframe: 0xcd06ba70
IFSR=00000005, IFAR=00000000, spsr=80000113
r0 =00000000, r1 =00000007, r2 =c18a0868, r3 =60000113
r4 =00000007, r5 =c93ad000, r6 =c93ad000, r7 =cd06bb10
r8 =cd06a000, r9 =00000013, r10=c08a8988, r11=cd06bb08
r12=c18e5378, ssp=cd06bac0, slr=c0780344, pc =00000000
Stopped at 0
ddb> trace
0
rlv=0xc032fd30 rfp=0xcd06bb90
exception_exit
rlv=0xc0343800 rfp=0xcd06bee0
sys_execve+0x2c8 [/usr/src/sys/kern/kern_exec.c:361]
rlv=0xc04c4450 rfp=0xcd06bfa8
start_init+0x254 [/usr/src/sys/kern/init_main.c:716]
rlv=0xc07976ac rfp=0xc1cb8f90
Bad frame pointer: 0xc1cb8f90
I'm still not sure why the memory limiting is needed; apparently U-Boot is not passing proper memory segment information to the EFI bootloader, so the kernel doesn't know to avoid that address space.
Since I was able to reduce the custom things needed in U-Boot, I tried adapting my UART, GPIO, and timer changes to mainline U-Boot to see if maybe the EFI code was better there. It now boots with UART output, but the MMC driver fails to set up either the SD slot or the eMMC:
U-Boot 2025.01-00001-g4e6a9d7df66d-dirty (Apr 19 2025 - 22:17:13 -0500)
Model: KING JIM Pomera DM250
DRAM: 1 GiB
Core: 30 devices, 14 uclasses, devicetree: embed
MMC: mmc@10214000: 1, mmc@1021c000: 0
Loading Environment from nowhere... OK
In: serial@20064000
Out: serial@20064000
Err: serial@20064000
Hit any key to stop autoboot: 0
Card did not respond to voltage select! : -110
Cannot persist EFI variables without system partition
Card did not respond to voltage select! : -110
No USB controllers found
I can see that the udelay calls work properly (they didn't in the
Rockchip-specific U-Boot tree until I made my RK3128-specific timer changes),
and the voltage-select failure happens after the initial setup, which already
requires responses from the controllers, so it seems like they are being
powered up.
I guess I should have read the kernel panic better.
Fatal kernel mode prefetch abort at 0x00000000 and pc =00000000 indicate
that the kernel set the program counter to 0, which meant it was probably
executing a function callback that was pointing to NULL.
After dozens of printfs added, kernels recompiled, SD cards swapped, and reset
pins grounded, I figured out that the kernel was panicking in
data_abort_handler because curcpu()->ci_flush_bp was NULL and there was no
check for that (because it shouldn't really happen).
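The mechanism behind the panic is worth spelling out: the armv7 trap handlers call a per-CPU callback to flush the branch predictor on the way in, and an indirect call through a NULL function pointer branches to address 0. This is just the shape of the problem, not the actual OpenBSD code:

/* sketch: why an unset per-CPU callback turns into a trap at pc=0 */
struct cpu_info {
	void	(*ci_flush_bp)(void);	/* branch-predictor flush hook */
};

/* zero-initialized, and never filled in if cpu0 doesn't attach */
static struct cpu_info cpu_info_primary;

static struct cpu_info *
curcpu(void)
{
	return &cpu_info_primary;
}

void
data_abort_handler(void)
{
	/*
	 * With ci_flush_bp still NULL, this indirect call branches to
	 * address 0: a prefetch abort at pc=00000000, exactly what the
	 * ddb trace above showed.
	 */
	curcpu()->ci_flush_bp();
}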
Why it was NULL was much more complicated. ci_flush_bp was never initialized
because arm/arm/cpu.c was not attaching to cpu0: the reg values for cpu0-cpu3
in the FDT were 0x000-0x003, but mainbus.c expects them to be 0xf00-0xf03.
They are 0x000-0x003 even in the latest U-Boot tree, but 0xf00-0xf03 in Linux,
which I guess is now the authoritative source for device trees?
This is why I dislike the ARM ecosystem…
cpu0 at mainbus0 mpidr f00: ARM Cortex-A7 r0p5
cpu0: 32KB 32b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu0: 256KB 64b/line 8-way L2 cache
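That mpidr f00 in the attach line is the value being matched against: mainbus compares each cpu node's reg with the affinity bits the running core reports in its MPIDR register, essentially this comparison (a sketch, not the actual mainbus.c code):

#include <stdint.h>

/* read the Multiprocessor Affinity Register on armv7 */
static uint32_t
read_mpidr(void)
{
	uint32_t mpidr;

	__asm volatile("mrc p15, 0, %0, c0, c0, 5" : "=r" (mpidr));
	return mpidr;
}

static int
cpu_node_matches(uint32_t fdt_reg)
{
	/*
	 * This Cortex-A7 reports affinity 0xf00-0xf03, so a device tree
	 * that says reg = <0x000> never matches and the cpu driver
	 * never attaches.
	 */
	return (read_mpidr() & 0xffffff) == fdt_reg;
}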
Anyway, now that cpu0 actually attaches and runs cpu_identify, it sets the
CPU device's ci_flush_bp callback to cpu_flush_bp_noop, which does…
nothing.
So the kernel isn't panicking now, but instead it just locks up (actually powers off) when it should be starting userland. I'm getting there…
Oh, right, we have no clock again, so the dwmmc driver's attempt to set the
frequency does nothing, but this isn't handled as an error.
I'll have to add rockchip,rk3128-cru support to the rkclock driver, which
does not look like fun to do from scratch.
This menial task of translating register definitions from PDFs and cross-referencing Linux driver code is usually where my willpower fades in these types of projects.
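Most of that work is mechanical: for each clock ID, read the CRU's PLL and divider registers and do the arithmetic. The PLL math itself is small; a sketch assuming the RK3036-style PLL this family uses (field positions from memory, check the TRM):

#include <stdint.h>

#define OSC_HZ	24000000ULL	/* 24 MHz crystal feeding the PLLs */

/*
 * RK3036-style PLL in integer mode:
 *   rate = 24 MHz * fbdiv / (refdiv * postdiv1 * postdiv2)
 * con0/con1 are the two PLL control registers read from the CRU.
 */
static uint64_t
rk3036_pll_rate(uint32_t con0, uint32_t con1)
{
	uint32_t fbdiv = con0 & 0xfff;
	uint32_t postdiv1 = (con0 >> 12) & 0x7;
	uint32_t refdiv = con1 & 0x3f;
	uint32_t postdiv2 = (con1 >> 6) & 0x7;

	return OSC_HZ * fbdiv / (refdiv * postdiv1 * postdiv2);
}

The tedious part is the pile of per-clock mux and divider fields around that core calculation, each of which has to be transcribed from the PDFs.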
A few weeks ago I bought a Geniatech XPI-3128, which is another board based on the Rockchip RK3128, but with 4 USB ports, ethernet, and HDMI. I tried flashing a new U-Boot build to it and it promptly stopped booting. When I'd try powering it on with its recovery button pressed to boot into MaskROM mode, my laptop would just log messages like this:
uhub3: device problem, disabling port 3
So it was as if it was trying to attach but kept failing. The device was basically bricked, so I e-mailed Geniatech's support address for help. A couple weeks later they finally gave me the information I needed: I had to desolder the Wi-Fi board and remove the CPU heatsink, after which the eMMC clock line was reachable and could be shorted to ground to keep U-Boot from loading and force MaskROM mode.
That allowed me to flash and test different U-Boot builds again and finally get it booting on the XPI-3128. However, the more I worked on it, the more I realized trying to do anything with my older U-Boot tree was futile.
The device tree (DTB) that shipped on the DM250 (and the XPI-3128) is very old,
and is configured for old U-Boot and Linux drivers.
Things like the names of compatible strings and the way peripherals are
described target Rockchip-specific drivers in their old Linux tree, rather
than what's in the current Linux kernel.
Trying to write OpenBSD drivers for the way this old DTB is set up would be a
bad idea, so I really needed to get RK3128 support working on the latest
U-Boot, targeting the official XPI-3128 device tree with all of its compatible
strings.
While reading various RK3128 code, I came across Linux and U-Boot patches from Alex Bee, which led me to find their U-Boot tree with RK3128 support done properly, with the aim of eventually upstreaming it. With this tree I was finally able to boot a modern U-Boot (2025.04) on the XPI-3128 (though it still needs my timer init code), which let me boot OpenBSD all the way to userland from a USB stick:
U-Boot 2025.04-rc1-00167-g04767ba5b99f-dirty (Apr 29 2025 - 21:55:41 -0500)
Model: Geniatech XPI-3128
DRAM: 1 GiB
Cannot find regulator pwm init_voltage
Cannot find regulator pwm init_voltage
Core: 164 devices, 21 uclasses, devicetree: embed
MMC: mmc@10214000: 1, mmc@1021c000: 0
Loading Environment from MMC... Reading from MMC(0)... *** Warning - bad CRC,
using default environment
In: serial@20064000
Out: serial@20064000
Err: serial@20064000
Model: Geniatech XPI-3128
Net: No ethernet found.
Hit any key to stop autoboot: 0
Scanning for bootflows in all bootdevs
Seq Method State Uclass Part Name Filename
--- ----------- ------ -------- ---- ------------------------
----------------
Scanning global bootmeth 'efi_mgr':
Card did not respond to voltage select! : -110
Cannot persist EFI variables without system partition
0 efi_mgr ready (none) 0 <NULL>
** Booting bootflow '<NULL>' with efi_mgr
Loading Boot0000 'mmc 0' failed
EFI boot manager: Cannot load any image
Boot failed (err=-14)
Scanning bootdev 'mmc@10214000.bootdev':
Card did not respond to voltage select! : -110
Scanning bootdev 'mmc@1021c000.bootdev':
Unknown uclass 'nvme' in label
Unknown uclass 'scsi' in label
Bus usb@10180000: USB DWC2
Bus usb@101c0000: USB EHCI 1.00
scanning bus usb@10180000 for devices... 1 USB Device(s) found
scanning bus usb@101c0000 for devices... 3 USB Device(s) found
Scanning bootdev 'usb_mass_storage.lun0.bootdev':
1 efi ready usb_mass_ 1 usb_mass_storage.lun0.boo
/EFI/BOOT/BOOTARM.EFI
** Booting bootflow 'usb_mass_storage.lun0.bootdev.part_1' with efi
Booting /\EFI\BOOT\BOOTARM.EFI
disks: sd0* sd1
>> OpenBSD/armv7 BOOTARM 1.23
boot>
booting sd0a:/bsd: 4915064+1013912+140528+607852 [289299+107+346480+308631]=0x0
OpenBSD/armv7 booting ...
arg0 0xc0a456f8 arg1 0x0 arg2 0x9cdff000
Allocating page tables
IRQ stack: p0x60a74000 v0xc0a74000
ABT stack: p0x60a75000 v0xc0a75000
UND stack: p0x60a76000 v0xc0a76000
SVC stack: p0x60a77000 v0xc0a77000
Creating L1 page table at 0x60a48000
Mapping kernel
Constructing L2 page tables
undefined page type 0x2 pa 0x60000000 va 0x60000000 pages 0x2000 attr 0x8
type 0x7 pa 0x62000000 va 0x62000000 pages 0x3adff attr 0x8
type 0x2 pa 0x9cdff000 va 0x9cdff000 pages 0x9 attr 0x8
type 0x7 pa 0x9ce08000 va 0x9ce08000 pages 0x1 attr 0x8
type 0x2 pa 0x9ce09000 va 0x9ce09000 pages 0x100 attr 0x8
type 0x1 pa 0x9cf09000 va 0x9cf09000 pages 0x1e attr 0x8
type 0x4 pa 0x9cf27000 va 0x9cf27000 pages 0x3 attr 0x8
type 0x9 pa 0x9cf2a000 va 0x9cf2a000 pages 0xb attr 0x8
type 0x4 pa 0x9cf35000 va 0x9cf35000 pages 0xb attr 0x8
type 0x6 pa 0x9cf40000 va 0x9cf40000 pages 0x1 attr 0x8000000000000008
type 0x4 pa 0x9cf41000 va 0x9cf41000 pages 0x1 attr 0x8
type 0x6 pa 0x9cf42000 va 0x9cf42000 pages 0x22 attr 0x8000000000000008
type 0x4 pa 0x9cf64000 va 0x9cf64000 pages 0x5 attr 0x8
type 0x3 pa 0x9cf69000 va 0x9cf69000 pages 0x1009 attr 0x8
type 0x6 pa 0x9df72000 va 0x9df72000 pages 0x1 attr 0x8000000000000008
type 0x3 pa 0x9df73000 va 0x9df73000 pages 0x1fff attr 0x8
type 0x5 pa 0x9ff72000 va 0x9ff72000 pages 0x2 attr 0x8000000000000008
type 0x3 pa 0x9ff74000 va 0x9ff74000 pages 0x8c attr 0x8
pmap [ using 945052 bytes of bsd ELF symbol table ]
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California. All rights reserved.
Copyright (c) 1995-2025 OpenBSD. All rights reserved. https://www.OpenBSD.org
OpenBSD 7.7-current (GENERIC) #1: Tue Apr 29 20:43:21 MDT 2025
jcs@rk3128:/usr/src/sys/arch/armv7/compile/GENERIC
real mem = 1021308928 (973MB)
avail mem = 992374784 (946MB)
random: good seed from bootblocks
mainbus0 at root: Geniatech XPI-3128
cpu0 at mainbus0 mpidr f00: ARM Cortex-A7 r0p5
cpu0: 32KB 32b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu0: 256KB 64b/line 8-way L2 cache
cortex0 at mainbus0
syscon0 at mainbus0: "syscon"
"power-controller" at syscon0 not configured
syscon1 at mainbus0: "qos"
syscon2 at mainbus0: "qos"
syscon3 at mainbus0: "qos"
syscon4 at mainbus0: "qos"
syscon5 at mainbus0: "qos"
syscon6 at mainbus0: "qos"
syscon7 at mainbus0: "qos"
ampintc0 at mainbus0 nirq 160, ncpu 4: "interrupt-controller"
rkclock0 at mainbus0
syscon8 at mainbus0: "syscon"
"usb2phy" at syscon8 not configured
syscon9 at mainbus0: can't map registers
agtimer0 at mainbus0: 24000 kHz
ehci0 at mainbus0rk3128_enable: 0x000001d9
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "Generic EHCI root hub" rev 2.00/1.00
addr 1
dwmmc0 at mainbus0rk3128_set_frequency: 68 100000000
rkclock_set_frequency(rkclock0, 68, 100000000) parent
: 12 MHz base clock
sdmmc0 at dwmmc0: 4-bit, sd high-speed, dma
dwmmc1 at mainbus0rk3128_set_frequency: 71 100000000
rkclock_set_frequency: clk div mask 16128
rk3128_get_frequency: RK3128_XIN24M
rk3128_get_frequency: RK3128_XIN24M
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
rk3128_get_frequency: RK3128_XIN24M
rk3128_get_frequency: RK3128_XIN24M
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
rk3128_get_frequency: unhandled 71
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
: 43 MHz base clock
sdmmc1 at dwmmc1: 8-bit, mmc high-speed, dma
com0 at mainbus0: dw16550
com0: console
rkiic0 at mainbus0
rk3128_get_frequency: RK3128_CLK_I2C
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
iic0 at rkiic0
dwge0 at mainbus0rk3128_set_frequency: 124 50000000
rkclock_set_frequency(rkclock0, 124, 50000000)
rk3128_enable: 0x0000016f
: rev 0x35rk3128_get_frequency: unhandled 126
rkclock_get_frequency(rkclock0, 126)
, address 76:e3:5a:fa:14:d9
rk3128_set_frequency: 126 50000000
rkclock_set_frequency(rkclock0, 126, 50000000)
dwge0: no PHY found!
scsibus0 at sdmmc1: 2 targets, initiator 0
sd0 at scsibus0 targ 1 lun 0: <Samsung, 8GTF4R, 0000> removable
sd0: 7456MB, 512 bytes/sector, 15269888 sectors
uhub1 at uhub0 port 1 configuration 1 interface 0 "Genesys Logic USB2.0 Hub" rev
2.00/60.90 addr 2
umass0 at uhub1 port 1 configuration 1 interface 0 "USB SanDisk 3.2Gen1" rev
2.10/1.00 addr 3
umass0: using SCSI over Bulk-Only
scsibus1 at umass0: 2 targets, initiator 0
sd1 at scsibus1 targ 1 lun 0: <USB, SanDisk 3.2Gen1, 1.00> removable
serial.078155ab8107712cf658
sd1: 942480MB, 512 bytes/sector, 1930199040 sectors
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
bootfile: sd0a:/bsd
boot device: sd0
root on sd1a (f2059a1fe6a57770.a) swap on sd1b dump on sd1b
WARNING: CHECK AND RESET THE DATE!
rk3128_get_frequency: RK3128_ARMCLK
rk3128_get_frequency: RK3128_PLL_APLL
rk3128_get_pll: 0x0 = 211673469
rk3128_set_frequency: RK3128_ARMCLK 52918367
rk3128_set_frequency: RK3128_PLL_APLL 52918367
rk3128_set_pll: freq 52918367
rk3128_set_pll: 52918367 Hz
cpu0: clock not implemented
Automatic boot in progress: starting file system checks.
/dev/sd1a (f2059a1fe6a57770.a): file system is clean; not checking
pf enabled
starting network
starting early daemons: syslogd pflogd ntpd.
starting RPC daemons:.
savecore: no core dump
checking quotas: done.
clearing /tmp
kern.securelevel: 0 -> 1
creating runtime link editor directory cache.
preserving editor files.
starting network daemons: sshd.
starting local daemons: cron.
Tue Apr 29 20:52:11 MDT 2025
OpenBSD/armv7 (rk3128) (console)
login:
That kernel was actually compiled on the XPI-3128 and then booted on it (ignore all the clock debugging output).
I need to fix the clock setting for MMC and ethernet, and then the sdmmc and
dwge devices will work.
USB is working fine out of the box since it's actually booting and running off
of a USB stick, but once MMC works, I can install and boot from the onboard
eMMC.
I'd like to write a driver for the Artasie AM1805 I2C RTC that is present on
the XPI-3128, which will give it a working realtime clock.
Once all of those things are working I'll hopefully commit all of this RK3128 support to OpenBSD, which will then allow me to go back to working on the DM250 and write drivers for the keyboard and LCD.
You may have assumed that I gave up on this project but the sad part is that I've been working on it almost every day and getting pretty much nowhere.
I have the basics working, like GPIO pin control (rkpinctrl), clocks
(rkclock), and regulators (rkpmic), but anything more advanced like the
screen, SDIO Wi-Fi, or keyboard interrupts isn't working.
The main problem so far is that the DTB embedded on the eMMC is ancient and uses a lot of proprietary Rockchip properties that are specific to Rockchip's Linux 3.10 tree, which itself has hard-coded hacks and RK312x-specific tweaks everywhere. The U-Boot on the device likewise has hard-coded hacks and things specific to the DM250.
To make this work on OpenBSD, the DTB has to be modernized, which is largely helped by this RK3128 file, but a lot of DM250-specific nodes still need to be added describing the keyboard, SDIO, battery, LCD panel, LVDS controller, and so on.
I currently have two DM250s taken apart on my desk with cables hooked up to their UART pins, one running OpenBSD with current U-Boot, and one running Debian Linux 11 with the DM250 Linux 3.10.0 tree booting from the DM250-specific U-Boot.

This allows me to add in some debugging printks on the Linux kernel, compile
it, dd it to /dev/mmcblk0p14, reboot, and see the output.
Then I can add things on the OpenBSD DM250 and reboot.
But often this requires changing a pin configuration or adding something new to
the DTB which then has to be written to the eMMC on the OpenBSD DM250 over a USB
cable.
This whole process has been going very slowly, and just when I think I've
figured something out, I break something else.
I can turn the LCD backlight on with pwmbl and adjust its brightness, but I
still can't get anything to show up on the screen.
I wrote rklvds and rklcdc drivers for OpenBSD based on the Rockchip-specific
code in the DM250 U-Boot tree, only to discover that the LCDC is the same
block as what is now called the VOP, and I should have used a different
compatible string in the DTB.
The Rockchip VOP already has an OpenBSD driver that hooks it up to wscons and
rkdrm, but it needed RK3128 (RK3126 actually) support, which I added.
But still nothing will show on the screen.
Current U-Boot even has Rockchip VOP and LVDS drivers, so it should work out of the box, right? But it does the same thing as OpenBSD: it enables the backlight but can't draw anything on the screen.
The keyboard kind of works with the I2C TC3589x driver I wrote, but I can't get
interrupts working.
The SD card slot works, but I don't get interrupts for card-detect events even
though I'm specifying the same cd-gpios information as the DTB file that
shipped with the DM250.
Anyway, this is all rambling and probably not very interesting, but I'm getting tired of this project after a few months. If I could just get the screen and the keyboard interrupts working, I could work directly on the DM250 in OpenBSD instead of having it cracked open in pieces on my desk with wires hanging out of it, worked on over a serial connection.
tl;dr: OpenBSD with my kernel tree and U-boot with updated device-tree bindings is now working reliably on the DM250 including graphical boot early in U-boot with keyboard support, X11, interrupt-driven keyboard, battery charging and sensors, Wi-Fi, SD card eject/insertion, CPU speed adjustments, red and green power/charging LEDs, and probably other things I'm forgetting.

I just noticed this article is now more than a year old.
After many months of working on other projects, I had enough desk space to get back to the DM250. I booted my US model that had OpenBSD installed on it and through its serial console I could see it booting to the kernel copyright line and then locking up or totally powering off. I had no other usable kernels on the device so it took a while to get it back to a working state which involved cross-compiling an armv7 kernel on my ThinkPad.
Once I had a new kernel booting, I ran into the same problems I remembered from half a year ago, such as locking up while all 1GB of RAM was being initialized in OpenBSD, or the SD card failing to read properly in U-Boot. It took me a while to figure out (or remember) that many of the issues were power-related, caused by the battery not being charged enough (or being completely disconnected, as it sometimes was while moving everything around). I think that when the system is running with all of its power regulators enabled, just having its USB port connected to a 5V power source doesn't supply enough current to fully power everything, and it relies on a working battery for help or it crashes.
That led me to figure out why the battery wasn't getting charged while idling in U-Boot or OpenBSD. After more digging through the vendor U-boot tree and using a USB-C power meter, I found that the RK818 PMIC needs to be told to enable USB charging at a higher rate or else it will just trickle charge the battery at a rate that is too low to keep up with the idle power consumption of the device. This would cause the battery to eventually drain too low to be able to boot.
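In driver terms the fix is just a couple of I2C register writes to the RK818. Schematically, with OpenBSD's iic API (the register address and field value here are placeholders, not the real RK818 layout):

#include <sys/types.h>
#include <dev/i2c/i2cvar.h>

#define RK818_USB_CTRL		0xa1	/* hypothetical charge-control register */
#define RK818_USB_ILIM_HIGH	0x07	/* hypothetical input-current-limit field */

/* write one RK818 register over I2C */
static int
rk818_write(i2c_tag_t tag, i2c_addr_t addr, uint8_t reg, uint8_t val)
{
	int error;

	iic_acquire_bus(tag, 0);
	error = iic_exec(tag, I2C_OP_WRITE_WITH_STOP, addr,
	    &reg, sizeof(reg), &val, sizeof(val), 0);
	iic_release_bus(tag, 0);

	return error;
}

/*
 * Something like this, with the real register offsets and limit values
 * from the vendor tree, is what tells the PMIC to charge at a usable rate:
 *	rk818_write(sc->sc_tag, sc->sc_addr, RK818_USB_CTRL,
 *	    RK818_USB_ILIM_HIGH);
 */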
Luckily I had a few other DM250s so I swapped the dead battery into a device with the original U-boot firmware where it would immediately charge it at a high rate before continuing with boot.
Once I figured out how to enable higher-rate charging on the RK818, I wrote an
rkcharger
driver that hangs off the rkpmic device, and also a driver for
simple-battery
devices which asks the parent device (rkcharger) to read charging and battery
info and exposes it as hw.sensors values.
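The hw.sensors side is the standard kernel sensor framework: attach ksensors, install a sensordev, and register a periodic refresh. A sketch of the simple-battery half, with the rkcharger accessor invented for illustration:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/device.h>
#include <sys/sensors.h>

struct simplebat_softc {
	struct device		sc_dev;
	struct ksensordev	sc_sensordev;
	struct ksensor		sc_volt;
};

/* hypothetical accessor provided by the parent rkcharger driver */
int	rkcharger_get_mv(struct device *, int *);

void
simplebat_refresh(void *arg)
{
	struct simplebat_softc *sc = arg;
	int mv;

	if (rkcharger_get_mv(sc->sc_dev.dv_parent, &mv) == 0) {
		sc->sc_volt.value = mv * 1000;	/* sensors want microvolts */
		sc->sc_volt.flags &= ~SENSOR_FINVALID;
	} else
		sc->sc_volt.flags |= SENSOR_FINVALID;
}

void
simplebat_attach_sensors(struct simplebat_softc *sc)
{
	strlcpy(sc->sc_sensordev.xname, sc->sc_dev.dv_xname,
	    sizeof(sc->sc_sensordev.xname));
	sc->sc_volt.type = SENSOR_VOLTS_DC;
	sensor_attach(&sc->sc_sensordev, &sc->sc_volt);
	sensordev_install(&sc->sc_sensordev);
	sensor_task_register(sc, simplebat_refresh, 5);	/* every 5 seconds */
}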
Once there was reliable power, the random crashes and power-offs stopped and I
could use the full 1GB of RAM.
I also updated the
simple-battery
node in the DM250 device tree.
I also discovered that the DM250 and DM250US aren't as identical as I thought, at least in terms of charging. The DM250 uses the RK818 to do it directly while the DM250US introduces a TI BQ25620 charging chip. This caused me a lot of frustration trying to figure out why my driver wasn't working on the DM250 (because the device wasn't even there).


I fixed a bunch of other little issues, many of which stemmed from incorrect
things in the unofficial
DM250 device-tree
that is still being worked on.
Once the reset and power settings were corrected for the Wi-Fi, bwfm0 at
sdmmc1 just magically worked without any kernel changes.
It uses brcmfmac43430-sdio.bin for firmware and it can use the
brcmfmac43430-sdio.rockchip,pomera-dm250.txt NVRAM settings file from the
original Linux installation on the device.
I brought over the U-boot LVDS and VOP drivers from Rockchip's U-boot tree, which enabled a graphical framebuffer very early in the power-on process. I also ported my Toshiba TC3589X keyboard driver from OpenBSD so I could type on the keyboard and over the serial device at the same time. I enabled U-boot's boot logo support to get a neat OpenBSD logo (read from a .bmp file in eMMC's EFI partition) during boot before clearing the screen to show OpenBSD's EFI bootloader. Since the keyboard works in U-boot now, this also enabled the keyboard to work in OpenBSD's bootloader (at least as far as telling it to boot a different device or kernel).
Once that worked, OpenBSD technically didn't need any video driver since it
could use the simplefb framebuffer set up by U-Boot.
This shows continuous boot output from the EFI bootloader all the way to the
console login, which is nice.
If I enable the video drivers (rklvds, rkvop, rkdrm) to (re-)initialize
the video in OpenBSD, it boots about halfway through the kernel sequence and
then blacks out for a second or two as it has to wait for hardware to settle
before drawing through the new output path.
With video and the keyboard working, I finally reached that point where I could
do development directly on the device which feels a lot different than remotely
poking at something through a serial console.
I've done a lot of little quality-of-life changes, like implementing a US
keyboard layout for the non-US model (available with wsconsctl
keyboard.encoding=us) and adding gpio(4) support to rkgpio so I can poke
individual GPIO pins from userland with gpioctl.
This allows me to turn on and off the red (gpioctl gpio1 8) and green
(gpioctl gpio1 12) LEDs on the side of the device depending on whether the
battery is about to die (red) or is charging (green).
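The same thing from C is a single ioctl on the gpio(4) device node; a minimal example, assuming the pin was already configured while the system was at securelevel 0 (e.g., by gpioctl from rc.securelevel):

#include <sys/types.h>
#include <sys/gpio.h>
#include <sys/ioctl.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	/* gpio1 pin 8 drives the red LED */
	struct gpio_pin_op op = { .gp_pin = 8, .gp_value = GPIO_PIN_HIGH };
	int fd;

	if ((fd = open("/dev/gpio1", O_RDWR)) == -1)
		err(1, "open");
	if (ioctl(fd, GPIOPINWRITE, &op) == -1)
		err(1, "GPIOPINWRITE");
	close(fd);

	return 0;
}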
One thing that is odd about the DM250 is that the left Alt key and the right Shift key are directly wired up to their own GPIO pins, not going through the TC35894 like every other key. Presumably this is why the recovery sequence that the vendor's U-boot tree looks for is those two keys plus power, so their U-boot didn't have to implement a TC3589X driver.
Anyway, since those two keys don't go through the keyboard controller, I
thought about how to make them work in OpenBSD without a DM250-specific hack
or something in my tcmfd driver that had to reach into GPIO land.
Since the existing OpenBSD gpiokeys driver works on armv7 and sees the entries
in the device tree:
gpiokeys0 at mainbus0: "Power Button", "Lid Switch", "Right Shift", "Left Alt"
I added an (only-slightly-hackish) hack to it to inject unknown GPIO keys into
the console wskbd device's input stream, so anything listening for keyboard
input will see left Alt and right Shift as though they came from the same
tc35894 device.
This means Control+Alt+F# keys work as expected to change virtual terminals, for
example.
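The injection itself is just a call into wskbd's input path for key codes gpiokeys doesn't otherwise handle; roughly this (a sketch: the console-keyboard handle and the keycode translation are simplified):

#include <sys/param.h>
#include <sys/device.h>
#include <dev/wscons/wsconsio.h>
#include <dev/wscons/wskbdvar.h>

/* assumption: a handle on the console keyboard device */
extern struct device *console_wskbd;

/*
 * Called from the gpiokeys event handler for keys with no standard
 * gpiokeys meaning: feed them into the console keyboard's input
 * stream as ordinary key up/down events.
 */
void
gpiokeys_inject_key(int keycode, int pressed)
{
	if (console_wskbd == NULL)
		return;

	wskbd_input(console_wskbd, pressed ?
	    WSCONS_EVENT_KEY_DOWN : WSCONS_EVENT_KEY_UP, keycode);
}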
I still have a laundry list of things I'd like to keep working on, like improving the keyboard driver, implementing some degree of suspend/resume, and supporting the external DMA engine for the MMC controller to speed up eMMC access. Our arm port also doesn't enable multiple processors, but some degree of support seems to be there from when the code was imported from NetBSD.
My list of commits is getting quite long so I need to try to upstream as much of this as possible. My last attempts at committing just basic RK3128 support in various drivers were thwarted, so I'm still just hammering out stuff in my own trees for now. If you have a DM250 (non-US for now) and want to try OpenBSD on it, let me know and I can send you installation images and instructions.
Note: I am frequently rebasing and squashing commits in my trees as I improve things, so the commit IDs in the trees linked here may vanish or become obsolete.
OpenBSD 7.9-beta (GENERIC) #134: Mon Mar 23 16:10:06 CDT 2026
jcs@dm250x:/usr/src/sys/arch/armv7/compile/GENERIC
real mem = 1018015744 (970MB)
avail mem = 988418048 (942MB)
random: good seed from bootblocks
mainbus0 at root: Rockchip RK3128 Pomera DM250
cpu0 at mainbus0 mpidr f00: ARM Cortex-A7 r0p5
cpu0: 32KB 32b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu0: 256KB 64b/line 8-way L2 cache
cortex0 at mainbus0
syscon0 at mainbus0: "syscon"
"power-controller" at syscon0 not configured
syscon1 at mainbus0: "qos"
syscon2 at mainbus0: "qos"
syscon3 at mainbus0: "qos"
syscon4 at mainbus0: "qos"
syscon5 at mainbus0: "qos"
syscon6 at mainbus0: "qos"
syscon7 at mainbus0: "qos"
ampintc0 at mainbus0 nirq 160, ncpu 4: "interrupt-controller"
rkclock0 at mainbus0
syscon8 at mainbus0: "syscon"
rkusbphy0 at syscon8: phy 0
rklvds0 at syscon8: LVDS 24-bit JEIDA
rkpinctrl0 at mainbus0: "pinctrl"
rkgpio0 at rkpinctrl0
gpio0 at rkgpio0: 32 pins
rkgpio1 at rkpinctrl0
gpio1 at rkgpio1: 32 pins
rkgpio2 at rkpinctrl0
gpio2 at rkgpio2: 32 pins
rkgpio3 at rkpinctrl0
gpio3 at rkgpio3: 32 pins
rkdrm0 at mainbus0
drm0 at rkdrm0
agtimer0 at mainbus0: 24000 kHz
rkvop0 at mainbus0: RK3126 VOP
dwctwo0 at mainbus0
dwmmc0 at mainbus0: 49 MHz base clock
sdmmc0 at dwmmc0: 4-bit, sd high-speed, mmc high-speed
dwmmc1 at mainbus0: 49 MHz base clock
sdmmc1 at dwmmc1: 4-bit, sd high-speed
dwmmc2 at mainbus0: 49 MHz base clock
sdmmc2 at dwmmc2: 8-bit, mmc high-speed
rklvdsphy0 at mainbus0
dwdog0 at mainbus0
rkpwm0 at mainbus0
com0 at mainbus0: dw16550, 64 byte fifo
bcmbt0 at com0
com1 at mainbus0: dw16550
rkiic0 at mainbus0
iic0 at rkiic0
tcmfd0 at iic0 addr 0x45
wskbd0 at tcmfd0: console keyboard
rkpmic0 at iic0 addr 0x1c: RK818
rkcharger0 at rkpmic0: 4.2V 5800mAh battery
simplebat0 at rkcharger0
gpioleds0 at mainbus0: "pomera:green:power"
gpiokeys0 at mainbus0: "Power Button", "Lid Switch", "Right Shift", "Left Alt"
pwmbl0 at mainbus0
simplepanel0 at mainbus0: 1024x600
rkdrm0: 1024x600, 32bpp
wsdisplay0 at rkdrm0 mux 1: console (std, vt100 emulation), using wskbd0
wsdisplay0: screen 1-5 added (std, vt100 emulation)
usb0 at dwctwo0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "DWC2 DWC2 root hub" rev 2.00/1.00
addr 1
scsibus0 at sdmmc0: 2 targets, initiator 0
sd0 at scsibus0 targ 1 lun 0: <Sandisk, SD32G, 0085> removable
sd0: 30436MB, 512 bytes/sector, 62333952 sectors
scsibus1 at sdmmc2: 2 targets, initiator 0
sd1 at scsibus1 targ 1 lun 0: <Toshiba, 008GB0, 0000> removable
sd1: 7456MB, 512 bytes/sector, 15269888 sectors
bwfm0 at sdmmc1 function 1
manufacturer 0x02d0, product 0xa9a6 at sdmmc1 function 2 not configured
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
bootfile: sd0a:/bsd
boot device: sd0
root on sd1a (717a8af462695010.a) swap on sd1b dump on sd1b
bcmbt0: address 70:4a:0e:df:xx:xx
bwfm0: address 70:4a:0e:df:xx:xx
booting sd0a:/bsd: 2411324+767888+11506208+484492 [188357+107+388448+214048]=0x0
I added some debug printfs to the working u-boot tree and saw that it was
calling
rk_iomux_config(RK_UART2_IOMUX)
when initializing the storage.
That ends up calling
rk_uart_iomux_config()
which does some magic writes to the IOMUX.
Reading the
GRF documentation
and other pieces of code, I learned that GPIO1B needs pins 12 and 14 enabled to
activate mmc0_pwren and mmc0_cmd, and GPIO1C needs pins 0, 2, 4, 6, 8, and
10 enabled to change them from JTAG and UART2 pins to those needed for eMMC.
With that, the SD card is recognized and u-boot can read files from it with its
built-in FAT filesystem support.
The existing config on the eMMC splits up the single drive into many different
partitions like kernel, warp, ro_data, etc., which each show up as
separate disks to the EFI loader.
The EFI loader is read from the SD card and loaded into memory with load mmc 1
${kernel_addr_r} efi/boot/bootarm.efi, and then executed with bootefi
${kernel_addr_r} ${fdt_addr_r}.
OpenBSD's BOOTARM.EFI loads successfully and can list files on the SD card and
start reading and booting bsd.rd.
Unfortunately it goes off into lala land there so I'm not sure what it's doing,
but at least now I can move on to the OpenBSD part of this bringup.
I've pushed my U-Boot tree to GitHub as it seems to be in a good state now. I split up my changes specific to the rk3128 and then added a specific board config for the DM250. Eventually this will need some work to enable the LVDS LCD at boot time like it was with the factory U-Boot.
I added a uart_putc helper to OpenBSD's armv7 locore0.S to print numbers to
the serial port, and then added them along the boot path to see how far it got.
.globl uart_putc /* send r1 to uart */
uart_putc:
ldr r0, =0x20064000
str r1, [r0]
ldr r2, =0x20064000 + 0x7c /* UART_USR */
check_usr:
ldr r3, [r2]
tst r3, #(1<<1) /* UART_TRANSMIT_FIFO_NOT_FULL */
beq check_usr
bx lr
[...]
start_mmu:
mov r1, #'1'
bl uart_putc
[...]
mov r1, #'2'
bl uart_putc
/* Enable MMU */
mrc CP15_SCTLR(r0)
orr r0, r0, #CPU_CONTROL_MMU_ENABLE
mcr CP15_SCTLR(r0)
isb
mov r1, #'3'
bl uart_putc
This showed it was getting to start_mmu but as soon as it wrote the SCTLR
register to enable the MMU, it stopped outputting.
Mark
pointed out
that this was because there was no mapping in the MMU page table to continue
accessing the UART at 0x20064000.
I added an entry for it:
MMU_INIT(0x20000000, 0x20000000, 1,
L1_TYPE_S|L1_S_V7_AP(AP_KRW)|L1_S_V7_AF)
But it still wasn't printing '3'.
After a few hours of debugging and reading more docs, I finally realized that my
dumb uart_putc function was clobbering r0 and r1 which were being used
inside of start_mmu so the page table wasn't getting set up right.
By changing it to just a few inline instructions with no FIFO status checking
and using registers that weren't in use, it could enable the MMU properly and
get to '3' and beyond:
ldr r4, =0x20064000
mov r5, #'3'
str r4, [r5]
Eventually with some more tweaks to the DTB passed from U-Boot to the EFI loader
and to the kernel, the kernel could properly print to the chosen stdout-path
and get to copyright.
Since it is able to do this through the normal com_fdt_init_cons routine in
dev/fdt/com_fdt.c which does a bus_space_map, I could remove all of my
debugging from locore0 and then remove my custom UART page table entry.
It can now get to copyright with no kernel changes:
disks: sd0* sd1 sd2
>> OpenBSD/armv7 BOOTARM 1.23
boot> b bsd.arm
cannot open sd0a:/etc/random.seed: No such file or directory
booting sd0a:bsd.arm: 4910236+1012484+138796+608784
[2789902+360416+184+330342]=0x0
OpenBSD/armv7 booting ...
arg0 0xc0caf850 arg1 0x0 arg2 0x9ac83000
Allocating page tables
IRQ stack: p0x60cde000 v0xc0cde000
ABT stack: p0x60cdf000 v0xc0cdf000
UND stack: p0x60ce0000 v0xc0ce0000
SVC stack: p0x60ce1000 v0xc0ce1000
Creating L1 page table at 0x60cb0000
Mapping kernel
Constructing L2 page tables
undefined page type 0x2 pa 0x60000000 va 0x60000000 pages 0x2000 attr 0x8
type 0x7 pa 0x62000000 va 0x60000000 pages 0x6000 attr 0x8
type 0x4 pa 0x68000000 va 0x68000000 pages 0x7 attr 0x8
type 0x7 pa 0x68008000 va 0x60000000 pages 0x32c7b attr 0x8
type 0x2 pa 0x9ac83000 va 0x9ac83000 pages 0x7 attr 0x8
type 0x7 pa 0x9ac8a000 va 0x9ac8a000 pages 0x4 attr 0x8
type 0x7 pa 0x9ac8e000 va 0x9ac8e000 pages 0x2 attr 0x8
type 0x7 pa 0x9ac90000 va 0x9ac90000 pages 0x1 attr 0x8
type 0x2 pa 0x9ac91000 va 0x9ac91000 pages 0x100 attr 0x8
type 0x2 pa 0x9ad91000 va 0x9ad91000 pages 0x1e attr 0x8
type 0x6 pa 0x9adaf000 va 0x9adaf000 pages 0x1 attr 0x8000000000000008
type 0x0 pa 0x9adb0000 va 0x9adb0000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb1000 va 0x9adb1000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb2000 va 0x9adb2000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb3000 va 0x9adb3000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb4000 va 0x9adb4000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb5000 va 0x9adb5000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb6000 va 0x9adb6000 pages 0x1 attr 0x8
type 0x2 pa 0x9adb7000 va 0x9adb7000 pages 0x308c attr 0x8
type 0x5 pa 0x9de43000 va 0x9de43000 pages 0x1 attr 0x8000000000000008
type 0x2 pa 0x9de44000 va 0x9adb7000 pages 0x21bc attr 0x8
pmap [ using 3481620 bytes of bsd ELF symbol table ]
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California. All rights reserved.
Copyright (c) 1995-2025 OpenBSD. All rights reserved. https://www.OpenBSD.org
Now I just need to figure out how far into init_main.c it's getting and why it
hangs after printing the copyright line.
With some instrumenting I figured out the kernel was getting as far as setting
up the page tables for the MMU and would then lock up when doing a memset on
the newly setup memory.
By reducing the amount of memory used, I could get it to fully boot the kernel
to !cold, but it crashes in userland:
U-Boot 2017.09-g9333465-dirty (Apr 21 2025 - 13:38:25 -0500)
Model: KING JIM Pomera DM250
DRAM: 1 GiB
Sysmem: init
Relocation Offset: 3ddc2000, fdt: 00000000
Using default environment
Failed to load DTB
Failed to get kernel dtb, ret=-1
In: serial
Out: serial
Err: serial
Model: KING JIM Pomera DM250
dwmmc@10214000: 1, dwmmc@1021c000: 0
switch to partitions #0, OK
mmc1 is current device
switch to partitions #0, OK
mmc0(part 0) is current device
Bootdev: mmc 0
MMC0: High Speed, 52Mhz
## Unknown partition table type 0
PartType: <NULL>
rockchip_get_boot_mode: Could not found misc partition
boot mode: normal
CLK: (uboot. arm: enter 600000 KHz, init 600000 KHz, kernel 0N/A)
apll 600000 KHz
dpll 600000 KHz
cpll 400000 KHz
gpll 594000 KHz
armclk 600000 KHz
aclk_cpu 148500 KHz
hclk_cpu 74250 KHz
pclk_cpu 74250 KHz
aclk_peri 148500 KHz
hclk_peri 74250 KHz
pclk_peri 74250 KHz
Hit key to stop autoboot('CTRL+C'): 0
switch to partitions #0, OK
mmc1 is current device
Scanning mmc 1:1...
reading /kingjim-dm250.dtb
23239 bytes read in 6 ms (3.7 MiB/s)
Found EFI removable media binary efi/boot/bootarm.efi
reading efi/boot/bootarm.efi
119564 bytes read in 16 ms (7.1 MiB/s)
## Starting EFI application at 62008000 ...
FtlInit fffffffe
Scanning disk nandc@10500000.blk...
Scanning disk dwmmc@10214000.blk...
Scanning disk dwmmc@1021c000.blk...
Found 3 disks
Adding bank: 0x60000000 - 0xa0000000 (size: 0x40000000)
disks: sd0* sd1 sd2 sd3
>> OpenBSD/armv7 BOOTARM 1.23
boot> b sd0a:/bsd.rd
cannot open sd0a:/etc/random.seed: No such file or directory
booting sd0a:/bsd.rd: 4916868+1014156+16731272+608976
[2791939+360736+184+330515]=0x0
OpenBSD/armv7 booting ...
arg0 0xc1c8514c arg1 0x0 arg2 0x9ac82000
Allocating page tables
IRQ stack: p0x61cb4000 v0xc1cb4000
ABT stack: p0x61cb5000 v0xc1cb5000
UND stack: p0x61cb6000 v0xc1cb6000
SVC stack: p0x61cb7000 v0xc1cb7000
Creating L1 page table at 0x61c88000
Mapping kernel
Constructing L2 page tables
undefined page type 0x2 pa 0x60000000 va 0x60000000 pages 0x2000 attr 0x8
type 0x7 pa 0x62000000 va 0x60000000 pages 0x6000 attr 0x8
initarm: added 24576 pages at 0x62000000, physmem now 32768
type 0x4 pa 0x68000000 va 0x68000000 pages 0x7 attr 0x8
type 0x7 pa 0x68008000 va 0x60000000 pages 0x32c7a attr 0x8
initarm: added 103997 pages at 0x68008000, physmem now 136765
type 0x2 pa 0x9ac82000 va 0x9ac82000 pages 0x7 attr 0x8
type 0x7 pa 0x9ac89000 va 0x9ac89000 pages 0x4 attr 0x8
type 0x7 pa 0x9ac8d000 va 0x9ac8d000 pages 0x2 attr 0x8
type 0x7 pa 0x9ac8f000 va 0x9ac8f000 pages 0x1 attr 0x8
type 0x2 pa 0x9ac90000 va 0x9ac90000 pages 0x100 attr 0x8
type 0x2 pa 0x9ad90000 va 0x9ad90000 pages 0x1e attr 0x8
type 0x6 pa 0x9adae000 va 0x9adae000 pages 0x1 attr 0x8000000000000008
type 0x0 pa 0x9adaf000 va 0x9adaf000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb0000 va 0x9adb0000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb1000 va 0x9adb1000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb2000 va 0x9adb2000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb3000 va 0x9adb3000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb4000 va 0x9adb4000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb5000 va 0x9adb5000 pages 0x1 attr 0x8
type 0x0 pa 0x9adb6000 va 0x9adb6000 pages 0x1 attr 0x8
type 0x2 pa 0x9adb7000 va 0x9adb7000 pages 0x308c attr 0x8
type 0x5 pa 0x9de43000 va 0x9de43000 pages 0x1 attr 0x8000000000000008
type 0x2 pa 0x9de44000 va 0x9adb7000 pages 0x21bc attr 0x8
pmap [ using 3484148 bytes of bsd ELF symbol table ]
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California. All rights reserved.
Copyright (c) 1995-2025 OpenBSD. All rights reserved. https://www.OpenBSD.org
OpenBSD 7.7 (obj.amd64.armv7) #113: Fri Apr 18 11:15:57 CDT 2025
jcs@nano.jcs.org:/usr/src/sys/arch/armv7/compile/GENERIC/obj.amd64.armv7
real mem = 560189440 (534MB)
avail mem = 520486912 (496MB)
random: boothowto does not indicate good seed
mainbus0 at root: KING JIM Pomera DM250
cortex0 at mainbus0
psci0 at mainbus0: PSCI 0.0
syscon0 at mainbus0: can't map registers
syscon1 at mainbus0: "syscon"
ampintc0 at mainbus0 nirq 160, ncpu 4: "interrupt-controller"
syscon2 at mainbus0: "syscon"
agtimer0 at mainbus0: 24000 kHz
agtimer1 at mainbus0: 24000 kHz
com0 at mainbus0: dw16550, 64 byte fifo
com0: probed fifo depth: 0 bytes
com1 at mainbus0: dw16550
com1: console
com2 at mainbus0: dw16550
ehci0 at mainbus0
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "Generic EHCI root hub" rev 2.00/1.00
addr 1
ohci0 at mainbus0: version 1.0
dwmmc0 at mainbus0: 18 MHz base clock
sdmmc0 at dwmmc0: 4-bit, dma
dwmmc1 at mainbus0: 25 MHz base clock
sdmmc1 at dwmmc1: 8-bit, dma
rkiic0 at mainbus0
iic0 at rkiic0
"rockchip,rk818" at iic0 addr 0x1c not configured
rkiic1 at mainbus0
iic1 at rkiic1
pcxrtc0 at iic1 addr 0x51pcxrtc0: pcxrtc_reg_read: failed to read reg0
pcxrtc0: pcxrtc_reg_write: failed to write reg0
pcxrtc0: pcxrtc_reg_read: failed to read reg2
: battery ok
rkiic2 at mainbus0
iic2 at rkiic2
rkiic3 at mainbus0
iic3 at rkiic3
usb1 at ohci0: USB revision 1.0
uhub1 at usb1 configuration 1 interface 0 "Generic OHCI root hub" rev 1.00/1.00
addr 1
scsibus0 at sdmmc0: 2 targets, initiator 0
sd0 at scsibus0 targ 1 lun 0: <Sandisk, SL32G, 0080> removable
sd0: 30436MB, 512 bytes/sector, 62333952 sectors
scsibus1 at sdmmc1: 2 targets, initiator 0
sd1 at scsibus1 targ 1 lun 0: <Toshiba, 008GB1, 0000> removable
sd1: 7456MB, 512 bytes/sector, 15269888 sectors
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
bootfile: sd0a:/bsd
boot device: sd0
root on rd0a swap on rd0b dump on rd0b
pcxrtc0: pcxrtc_clock_read: failed to read rtc
WARNING: bad clock chip time
WARNING: CHECK AND RESET THE DATE!
Fatal kernel mode prefetch abort at 0x00000000
trapframe: 0xcd06ba70
IFSR=00000005, IFAR=00000000, spsr=80000113
r0 =00000000, r1 =00000007, r2 =c18a0868, r3 =60000113
r4 =00000007, r5 =c93ad000, r6 =c93ad000, r7 =cd06bb10
r8 =cd06a000, r9 =00000013, r10=c08a8988, r11=cd06bb08
r12=c18e5378, ssp=cd06bac0, slr=c0780344, pc =00000000
Stopped at 0
ddb> trace
0
rlv=0xc032fd30 rfp=0xcd06bb90
exception_exit
rlv=0xc0343800 rfp=0xcd06bee0
sys_execve+0x2c8 [/usr/src/sys/kern/kern_exec.c:361]
rlv=0xc04c4450 rfp=0xcd06bfa8
start_init+0x254 [/usr/src/sys/kern/init_main.c:716]
rlv=0xc07976ac rfp=0xc1cb8f90
Bad frame pointer: 0xc1cb8f90
I'm still not sure why the memory limiting is needed, but apparently U-boot is not passing the proper memory segment information to the EFI bootloader, so the kernel doesn't know to avoid that address space.
Since I was able to reduce the custom things needed in U-boot, I tried adapting my UART, GPIO, and timer changes to mainline U-boot to see if maybe the EFI code was better there. It boots now with UART output, but the SDMMC and eMMC drivers fail to set up either one of them:
U-Boot 2025.01-00001-g4e6a9d7df66d-dirty (Apr 19 2025 - 22:17:13 -0500)
Model: KING JIM Pomera DM250
DRAM: 1 GiB
Core: 30 devices, 14 uclasses, devicetree: embed
MMC: mmc@10214000: 1, mmc@1021c000: 0
Loading Environment from nowhere... OK
In: serial@20064000
Out: serial@20064000
Err: serial@20064000
Hit any key to stop autoboot: 0
Card did not respond to voltage select! : -110
Cannot persist EFI variables without system partition
Card did not respond to voltage select! : -110
No USB controllers found
I can see the udelay calls work properly (they weren't in the Rockchip-specific U-boot tree until I made the RK3128-specific timer changes), and the voltage-select failure happens past the initial setup, which requires responses from the controllers, so it seems like they are being powered up.
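For reference, a busy-wait udelay on this core only needs the ARMv7 generic timer, which runs at 24 MHz on the RK3128 (the agtimer lines in the dmesg above agree). Something along these lines, as a sketch of the idea rather than my actual U-boot patch:
#include <stdint.h>
/* Sketch: busy-wait udelay using the ARMv7 generic timer.
 * CNTPCT/CNTFRQ are the architectural CP15 registers. */
static inline uint64_t
read_cntpct(void)
{
	uint32_t lo, hi;

	__asm volatile("mrrc p15, 0, %0, %1, c14" : "=r"(lo), "=r"(hi));
	return ((uint64_t)hi << 32) | lo;
}
static inline uint32_t
read_cntfrq(void)
{
	uint32_t frq;

	__asm volatile("mrc p15, 0, %0, c14, c0, 0" : "=r"(frq));
	return frq;	/* 24000000 on this SoC */
}
void
udelay(unsigned long usec)
{
	uint64_t end = read_cntpct() + (uint64_t)usec * read_cntfrq() / 1000000;

	while (read_cntpct() < end)
		;	/* spin */
}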
I guess I should have read the kernel panic better. Fatal kernel mode prefetch abort at 0x00000000 and pc =00000000 indicate that the kernel set the program counter to 0, which means it was probably calling a function pointer that was NULL.
After dozens of printfs added, kernels recompiled, SD cards swapped, and reset pins grounded, I figured out that the kernel was panicking in data_abort_handler because curcpu()->ci_flush_bp was NULL and there was no check for that (because it shouldn't really happen).
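A simple guard in the abort handler would have turned this into a much more obvious failure. Roughly (a sketch; the real data_abort_handler does a lot more than this, and the surrounding code here is paraphrased):
/* ci_flush_bp is only set once cpu0 attaches and runs
 * cpu_identify(), so calling it unconditionally before that
 * jumps through a NULL pointer, i.e. pc = 0. */
void
data_abort_handler(trapframe_t *tf)
{
	struct cpu_info *ci = curcpu();

	if (ci->ci_flush_bp != NULL)	/* the check that wasn't there */
		ci->ci_flush_bp();

	/* ... the actual abort handling ... */
}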
Why it was NULL was much more complicated.
ci_flush_bp was never initialized because arm/arm/cpu.c was not attaching to cpu0, because the reg values for cpu0-cpu3 in the FDT were 0x000-0x003, but mainbus.c expects them to be 0xf00-0xf03. They are 0x000-0x003 even in the latest U-boot tree, but 0xf00-0xf03 in Linux, which I guess is now the authoritative source for device trees?
This is why I dislike the ARM ecosystem…
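A match that tolerated both numbering schemes would only need to compare the affinity bits that are actually populated. Something like this hypothetical helper (not OpenBSD's actual mainbus.c code):
#include <stdint.h>
#define MPIDR_AFF0	0x000000ffU
#define MPIDR_AFF1	0x0000ff00U
/* Accept a cpu node whether its "reg" is the full MPIDR affinity
 * value (0xf00-0xf03, as in current Linux DTs) or just the core
 * index (0x000-0x003, as in the old vendor trees). */
int
cpu_node_matches(uint32_t reg, uint32_t mpidr)
{
	if (reg == (mpidr & (MPIDR_AFF1 | MPIDR_AFF0)))
		return 1;
	return (reg & MPIDR_AFF0) == (mpidr & MPIDR_AFF0);
}
With the reg matching sorted out, cpu0 finally shows up: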
cpu0 at mainbus0 mpidr f00: ARM Cortex-A7 r0p5
cpu0: 32KB 32b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu0: 256KB 64b/line 8-way L2 cache
Anyway, now that cpu0 actually attaches and runs cpu_identify, it sets the CPU device's ci_flush_bp callback to cpu_flush_bp_noop, which does… nothing.
So the kernel isn't panicking now, but instead it just locks up (actually powers off) when it should be starting userland. I'm getting there…
Oh, right, we have no clock again, so the dwmmc driver's attempt to set the frequency does nothing, but this isn't handled as an error. I'll have to add rockchip,rk3128-cru support to the rkclock driver, which does not look fun to do from scratch.
This menial task of translating register definitions from PDFs and cross-referencing Linux driver code is usually where my willpower fades in these types of projects.
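For a sense of what that work looks like, here is the rough shape of a per-clock get-frequency routine in the rkclock style. Everything RK3128-specific below (the CRU_CLKSEL11 offset, the field positions, even which PLLs feed the SD clock) is illustrative and would have to be checked against the TRM; HREAD4 and rk3128_get_pll() mirror the existing rkclock drivers:
#define RK3128_CRU_CLKSEL11_CON		0x006c	/* illustrative offset */
uint32_t
rk3128_get_sdmmc_frequency(struct rkclock_softc *sc)
{
	uint32_t reg, mux, div;

	reg = HREAD4(sc, RK3128_CRU_CLKSEL11_CON);
	mux = (reg >> 6) & 0x3;		/* parent clock select */
	div = reg & 0x3f;		/* divider minus one */

	switch (mux) {
	case 0:
		return rk3128_get_pll(sc, RK3128_PLL_CPLL) / (div + 1);
	case 1:
		return rk3128_get_pll(sc, RK3128_PLL_GPLL) / (div + 1);
	default:
		return 24000000 / (div + 1);	/* XIN24M */
	}
}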
A few weeks ago I bought a Geniatech XPI-3128 which is another board based on the Rockchip 3128, but with 4 USB ports, ethernet, and HDMI. I tried flashing a new U-Boot build to it and it promptly stopped booting. When I'd try powering it on with its recovery button pressed to boot into Maskrom mode, my laptop would just log messages like this:
uhub3: device problem, disabling port 3
So it was as if it was trying to attach but kept failing. The device was basically bricked, so I e-mailed Geniatech's support address for help. A couple of weeks later they finally gave me the information I needed: I had to desolder the Wi-Fi board and remove the CPU heatsink, at which point the eMMC clock line was reachable and could be shorted to ground to prevent U-Boot from loading and force Maskrom mode.
That allowed me to flash and test different U-Boot builds again and finally get it booting on the XPI-3128. However, the more I worked on it, the more I realized trying to do anything with my older U-Boot tree was futile.
The device tree (DTB) that shipped on the DM250 (and the XPI-3128) is very old and is configured for old U-Boot and Linux drivers. Things like the names of compatible strings and the way peripherals are described target Rockchip-specific drivers in their old Linux tree, rather than what's in the current Linux kernel. Trying to write OpenBSD drivers for the way this old DTB is set up would be a bad idea, so I really needed to get RK3128 support working on the latest U-Boot, targeting the official XPI-3128 device tree with all of its compatible strings.
While reading various RK3128 code, I came across Linux and U-Boot patches from Alex Bee, which led me to find their U-Boot tree with RK3128 support, but done right to eventually be upstreamed. With this tree I was finally able to boot a modern U-boot (2025.04) on the XPI-3128 (though still needing my timer init code), which allowed me to boot OpenBSD all the way to userland on a USB stick:
U-Boot 2025.04-rc1-00167-g04767ba5b99f-dirty (Apr 29 2025 - 21:55:41 -0500)
Model: Geniatech XPI-3128
DRAM: 1 GiB
Cannot find regulator pwm init_voltage
Cannot find regulator pwm init_voltage
Core: 164 devices, 21 uclasses, devicetree: embed
MMC: mmc@10214000: 1, mmc@1021c000: 0
Loading Environment from MMC... Reading from MMC(0)... *** Warning - bad CRC,
using default environment
In: serial@20064000
Out: serial@20064000
Err: serial@20064000
Model: Geniatech XPI-3128
Net: No ethernet found.
Hit any key to stop autoboot: 0
Scanning for bootflows in all bootdevs
Seq Method State Uclass Part Name Filename
--- ----------- ------ -------- ---- ------------------------
----------------
Scanning global bootmeth 'efi_mgr':
Card did not respond to voltage select! : -110
Cannot persist EFI variables without system partition
0 efi_mgr ready (none) 0 <NULL>
** Booting bootflow '<NULL>' with efi_mgr
Loading Boot0000 'mmc 0' failed
EFI boot manager: Cannot load any image
Boot failed (err=-14)
Scanning bootdev 'mmc@10214000.bootdev':
Card did not respond to voltage select! : -110
Scanning bootdev 'mmc@1021c000.bootdev':
Unknown uclass 'nvme' in label
Unknown uclass 'scsi' in label
Bus usb@10180000: USB DWC2
Bus usb@101c0000: USB EHCI 1.00
scanning bus usb@10180000 for devices... 1 USB Device(s) found
scanning bus usb@101c0000 for devices... 3 USB Device(s) found
Scanning bootdev 'usb_mass_storage.lun0.bootdev':
1 efi ready usb_mass_ 1 usb_mass_storage.lun0.boo
/EFI/BOOT/BOOTARM.EFI
** Booting bootflow 'usb_mass_storage.lun0.bootdev.part_1' with efi
Booting /\EFI\BOOT\BOOTARM.EFI
disks: sd0* sd1
>> OpenBSD/armv7 BOOTARM 1.23
boot>
booting sd0a:/bsd: 4915064+1013912+140528+607852 [289299+107+346480+308631]=0x0
OpenBSD/armv7 booting ...
arg0 0xc0a456f8 arg1 0x0 arg2 0x9cdff000
Allocating page tables
IRQ stack: p0x60a74000 v0xc0a74000
ABT stack: p0x60a75000 v0xc0a75000
UND stack: p0x60a76000 v0xc0a76000
SVC stack: p0x60a77000 v0xc0a77000
Creating L1 page table at 0x60a48000
Mapping kernel
Constructing L2 page tables
undefined page type 0x2 pa 0x60000000 va 0x60000000 pages 0x2000 attr 0x8
type 0x7 pa 0x62000000 va 0x62000000 pages 0x3adff attr 0x8
type 0x2 pa 0x9cdff000 va 0x9cdff000 pages 0x9 attr 0x8
type 0x7 pa 0x9ce08000 va 0x9ce08000 pages 0x1 attr 0x8
type 0x2 pa 0x9ce09000 va 0x9ce09000 pages 0x100 attr 0x8
type 0x1 pa 0x9cf09000 va 0x9cf09000 pages 0x1e attr 0x8
type 0x4 pa 0x9cf27000 va 0x9cf27000 pages 0x3 attr 0x8
type 0x9 pa 0x9cf2a000 va 0x9cf2a000 pages 0xb attr 0x8
type 0x4 pa 0x9cf35000 va 0x9cf35000 pages 0xb attr 0x8
type 0x6 pa 0x9cf40000 va 0x9cf40000 pages 0x1 attr 0x8000000000000008
type 0x4 pa 0x9cf41000 va 0x9cf41000 pages 0x1 attr 0x8
type 0x6 pa 0x9cf42000 va 0x9cf42000 pages 0x22 attr 0x8000000000000008
type 0x4 pa 0x9cf64000 va 0x9cf64000 pages 0x5 attr 0x8
type 0x3 pa 0x9cf69000 va 0x9cf69000 pages 0x1009 attr 0x8
type 0x6 pa 0x9df72000 va 0x9df72000 pages 0x1 attr 0x8000000000000008
type 0x3 pa 0x9df73000 va 0x9df73000 pages 0x1fff attr 0x8
type 0x5 pa 0x9ff72000 va 0x9ff72000 pages 0x2 attr 0x8000000000000008
type 0x3 pa 0x9ff74000 va 0x9ff74000 pages 0x8c attr 0x8
pmap [ using 945052 bytes of bsd ELF symbol table ]
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California. All rights reserved.
Copyright (c) 1995-2025 OpenBSD. All rights reserved. https://www.OpenBSD.org
OpenBSD 7.7-current (GENERIC) #1: Tue Apr 29 20:43:21 MDT 2025
jcs@rk3128:/usr/src/sys/arch/armv7/compile/GENERIC
real mem = 1021308928 (973MB)
avail mem = 992374784 (946MB)
random: good seed from bootblocks
mainbus0 at root: Geniatech XPI-3128
cpu0 at mainbus0 mpidr f00: ARM Cortex-A7 r0p5
cpu0: 32KB 32b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu0: 256KB 64b/line 8-way L2 cache
cortex0 at mainbus0
syscon0 at mainbus0: "syscon"
"power-controller" at syscon0 not configured
syscon1 at mainbus0: "qos"
syscon2 at mainbus0: "qos"
syscon3 at mainbus0: "qos"
syscon4 at mainbus0: "qos"
syscon5 at mainbus0: "qos"
syscon6 at mainbus0: "qos"
syscon7 at mainbus0: "qos"
ampintc0 at mainbus0 nirq 160, ncpu 4: "interrupt-controller"
rkclock0 at mainbus0
syscon8 at mainbus0: "syscon"
"usb2phy" at syscon8 not configured
syscon9 at mainbus0: can't map registers
agtimer0 at mainbus0: 24000 kHz
ehci0 at mainbus0rk3128_enable: 0x000001d9
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "Generic EHCI root hub" rev 2.00/1.00
addr 1
dwmmc0 at mainbus0rk3128_set_frequency: 68 100000000
rkclock_set_frequency(rkclock0, 68, 100000000) parent
: 12 MHz base clock
sdmmc0 at dwmmc0: 4-bit, sd high-speed, dma
dwmmc1 at mainbus0rk3128_set_frequency: 71 100000000
rkclock_set_frequency: clk div mask 16128
rk3128_get_frequency: RK3128_XIN24M
rk3128_get_frequency: RK3128_XIN24M
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
rk3128_get_frequency: RK3128_XIN24M
rk3128_get_frequency: RK3128_XIN24M
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
rk3128_get_frequency: unhandled 71
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
: 43 MHz base clock
sdmmc1 at dwmmc1: 8-bit, mmc high-speed, dma
com0 at mainbus0: dw16550
com0: console
rkiic0 at mainbus0
rk3128_get_frequency: RK3128_CLK_I2C
rk3128_get_frequency: RK3128_PLL_CPLL
rk3128_get_pll: 0x20 = 523462184
iic0 at rkiic0
dwge0 at mainbus0rk3128_set_frequency: 124 50000000
rkclock_set_frequency(rkclock0, 124, 50000000)
rk3128_enable: 0x0000016f
: rev 0x35rk3128_get_frequency: unhandled 126
rkclock_get_frequency(rkclock0, 126)
, address 76:e3:5a:fa:14:d9
rk3128_set_frequency: 126 50000000
rkclock_set_frequency(rkclock0, 126, 50000000)
dwge0: no PHY found!
scsibus0 at sdmmc1: 2 targets, initiator 0
sd0 at scsibus0 targ 1 lun 0: <Samsung, 8GTF4R, 0000> removable
sd0: 7456MB, 512 bytes/sector, 15269888 sectors
uhub1 at uhub0 port 1 configuration 1 interface 0 "Genesys Logic USB2.0 Hub" rev
2.00/60.90 addr 2
umass0 at uhub1 port 1 configuration 1 interface 0 "USB SanDisk 3.2Gen1" rev
2.10/1.00 addr 3
umass0: using SCSI over Bulk-Only
scsibus1 at umass0: 2 targets, initiator 0
sd1 at scsibus1 targ 1 lun 0: <USB, SanDisk 3.2Gen1, 1.00> removable
serial.078155ab8107712cf658
sd1: 942480MB, 512 bytes/sector, 1930199040 sectors
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
bootfile: sd0a:/bsd
boot device: sd0
root on sd1a (f2059a1fe6a57770.a) swap on sd1b dump on sd1b
WARNING: CHECK AND RESET THE DATE!
rk3128_get_frequency: RK3128_ARMCLK
rk3128_get_frequency: RK3128_PLL_APLL
rk3128_get_pll: 0x0 = 211673469
rk3128_set_frequency: RK3128_ARMCLK 52918367
rk3128_set_frequency: RK3128_PLL_APLL 52918367
rk3128_set_pll: freq 52918367
rk3128_set_pll: 52918367 Hz
cpu0: clock not implemented
Automatic boot in progress: starting file system checks.
/dev/sd1a (f2059a1fe6a57770.a): file system is clean; not checking
pf enabled
starting network
starting early daemons: syslogd pflogd ntpd.
starting RPC daemons:.
savecore: no core dump
checking quotas: done.
clearing /tmp
kern.securelevel: 0 -> 1
creating runtime link editor directory cache.
preserving editor files.
starting network daemons: sshd.
starting local daemons: cron.
Tue Apr 29 20:52:11 MDT 2025
OpenBSD/armv7 (rk3128) (console)
login:
That kernel was actually compiled on the XPI-3128 and then booted on it (ignore all the clock debugging output).
I need to fix the clock setting for MMC and ethernet, and then the sdmmc and dwge devices will work. USB is working fine out of the box since it's actually booting and running off of a USB stick, but once MMC works, I can install and boot from the onboard eMMC.
I'd like to write a driver for the Artasie AM1805 I2C RTC that is present on the XPI-3128, which will give it a working realtime clock.
Once all of those things are working I'll hopefully commit all of this RK3128 support to OpenBSD, which will then allow me to go back to working on the DM250 and write drivers for the keyboard and LCD.
You may have assumed that I gave up on this project but the sad part is that I've been working on it almost every day and getting pretty much nowhere.
I have the basics working, like GPIO pin control (rkpinctrl), clocks (rkclock), and regulators (rkpmic), but anything more advanced, like the screen, SDIO Wi-Fi, or keyboard interrupts, isn't working.
The main problem so far is that the DTB embedded on the eMMC is ancient and uses a lot of proprietary Rockchip properties that are specific to Rockchip's Linux 3.10 tree, on top of which sit hard-coded hacks and RK312x-specific tweaks everywhere. The U-Boot on the device also has hard-coded hacks and things specific to the DM250.
To make this work on OpenBSD, the DTB has to be modernized which is largely helped by this RK3128 file but there are a lot of DM250-specific components that need to be added describing the keyboard, SDIO, battery, LCD screen information, LVDS controller information, etc.
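The compatible strings are the crux of this for OpenBSD, because FDT drivers decide whether to attach purely by matching them. A hedged example of the standard match-function pattern (whether each driver lists these exact strings is a separate question):
/* The usual OpenBSD FDT match function: attach only if the node's
 * compatible list names a binding the driver knows about. An old
 * vendor DTB full of rockchip,rk312x-* strings matches none of the
 * drivers written for current bindings. */
int
dwmmc_match(struct device *parent, void *match, void *aux)
{
	struct fdt_attach_args *faa = aux;

	return OF_is_compatible(faa->fa_node, "rockchip,rk3128-dw-mshc") ||
	    OF_is_compatible(faa->fa_node, "rockchip,rk3288-dw-mshc");
}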
I currently have two DM250s taken apart on my desk with cables hooked up to their UART pins, one running OpenBSD with current U-Boot, and one running Debian Linux 11 with the DM250 Linux 3.10.0 tree booting from the DM250-specific U-Boot.

This allows me to add in some debugging printks on the Linux kernel, compile it, dd it to /dev/mmcblk0p14, reboot, and see the output. Then I can add things on the OpenBSD DM250 and reboot. But often this requires changing a pin configuration or adding something new to the DTB, which then has to be written to the eMMC on the OpenBSD DM250 over a USB cable.
This whole process has been going very slowly, and just when I think I've figured something out, I break something else. I can turn the LCD backlight on with pwmbl and adjust its brightness, but I still can't get anything to show up on the screen.
I wrote rklvds and rklcdc drivers for OpenBSD based on the Rockchip-specific code in the DM250 U-Boot tree, only to discover that the LCDC does the same job as what is now called the VOP, and I should have used a different compatible string in the DTB. The Rockchip VOP already has an OpenBSD driver that hooks it up to wscons and rkdrm, but it needed RK3128 (RK3126 actually) support, which I added.
But still nothing will show on the screen.
Current U-Boot even has Rockchip VOP and LVDS drivers, so it should work out of the box, right? But it does the same thing as OpenBSD: it enables the backlight but can't draw anything on the screen.
The keyboard kind of works with the I2C TC3589x driver I wrote, but I can't get interrupts working. The SD card slot works, but I don't get interrupts for card-detect events even though I'm specifying the same cd-gpios information as the DTB file that shipped with the DM250.
Anyway, this is all rambling and probably not very interesting but I'm getting tired of this project after a few months. If I could just get the screen and the keyboard interrupts working, I could work directly on the DM250 in OpenBSD instead of it being cracked open in pieces on my desk with wires hanging out of it working over a serial connection.
tl;dr: OpenBSD with my kernel tree and U-boot with updated device-tree bindings is now working reliably on the DM250 including graphical boot early in U-boot with keyboard support, X11, interrupt-driven keyboard, battery charging and sensors, Wi-Fi, SD card eject/insertion, CPU speed adjustments, red and green power/charging LEDs, and probably other things I'm forgetting.

I just noticed this article is now more than a year old.
After many months of working on other projects, I had enough desk space to get back to the DM250. I booted my US model that had OpenBSD installed on it and through its serial console I could see it booting to the kernel copyright line and then locking up or totally powering off. I had no other usable kernels on the device so it took a while to get it back to a working state which involved cross-compiling an armv7 kernel on my ThinkPad.
Once I had a new kernel booting, I was encountering the same problems I remembered from half a year ago, such as the system locking up when all 1GB of RAM was being initialized in OpenBSD, or the SD card not being readable in U-Boot. It took me a while to figure out (or remember) that many of the issues were power-related, caused by the battery not being charged enough (or being completely disconnected, as it sometimes was while moving everything around). I think that when the system is running with all of its power regulators enabled, just having its USB port connected to a 5V power source doesn't supply enough amperage to fully power everything, so it relies on a working battery to help, or it crashes.
That led me to figure out why the battery wasn't getting charged while idling in U-Boot or OpenBSD. After more digging through the vendor U-boot tree and using a USB-C power meter, I found that the RK818 PMIC needs to be told to enable USB charging at a higher rate, or else it will just trickle-charge the battery at a rate too low to keep up with the device's idle power consumption. This would cause the battery to eventually drain too low to be able to boot.
Luckily I had a few other DM250s so I swapped the dead battery into a device with the original U-boot firmware where it would immediately charge it at a high rate before continuing with boot.
Once I figured out how to enable higher-rate charging on the RK818, I wrote an rkcharger driver that hangs off the rkpmic device, and also a driver for simple-battery devices which asks the parent device (rkcharger) to read charging and battery info and exposes it as hw.sensors values.
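The hw.sensors plumbing is the stock OpenBSD sensor framework. A minimal sketch of the idea, with a hypothetical rkcharger_read_mv() standing in for the real RK818 reads done through the parent rkcharger device:
#include <sys/sensors.h>
struct simplebat_softc {
	struct device		sc_dev;
	struct ksensor		sc_volt;
	struct ksensordev	sc_sensordev;
};
/* Register one battery-voltage sensor under this device's name. */
void
simplebat_attach_sensors(struct simplebat_softc *sc)
{
	strlcpy(sc->sc_sensordev.xname, sc->sc_dev.dv_xname,
	    sizeof(sc->sc_sensordev.xname));
	sc->sc_volt.type = SENSOR_VOLTS_DC;
	sensor_attach(&sc->sc_sensordev, &sc->sc_volt);
	sensordev_install(&sc->sc_sensordev);
}
void
simplebat_refresh(struct simplebat_softc *sc)
{
	/* ksensor voltage values are in microvolts */
	sc->sc_volt.value = (int64_t)rkcharger_read_mv(sc) * 1000;
}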
Once there was reliable power, the random crashes and power-offs stopped and I could use the full 1GB of RAM. I also updated the simple-battery node in the DM250 device tree.
I also discovered that the DM250 and DM250US aren't as identical as I thought, at least in terms of charging. The DM250 uses the RK818 to do it directly while the DM250US introduces a TI BQ25620 charging chip. This caused me a lot of frustration trying to figure out why my driver wasn't working on the DM250 (because the device wasn't even there).
I fixed a bunch of other little issues, many of which stemmed from incorrect things in the unofficial DM250 device-tree that is still being worked on. Once the reset and power settings were corrected for the Wi-Fi, bwfm0 at sdmmc1 just magically worked without any kernel changes. It uses brcmfmac43430-sdio.bin for firmware, and it can use the brcmfmac43430-sdio.rockchip,pomera-dm250.txt NVRAM settings file from the original Linux installation on the device.
I brought over the U-boot LVDS and VOP drivers from Rockchip's U-boot tree, which enabled a graphical framebuffer very early in the power-on process. I also ported my Toshiba TC3589X keyboard driver from OpenBSD so I could type on the keyboard and over the serial device at the same time. I enabled U-boot's boot logo support to get a neat OpenBSD logo (read from a .bmp file in eMMC's EFI partition) during boot before clearing the screen to show OpenBSD's EFI bootloader. Since the keyboard works in U-boot now, this also enabled the keyboard to work in OpenBSD's bootloader (at least as far as telling it to boot a different device or kernel).
Once that worked, OpenBSD technically didn't need any video driver, since it could use the simplefb that was set up by U-boot. This shows continuous boot output from the EFI bootloader all the way to the console login, which is nice. If I enable the video drivers (rklvds, rkvop, rkdrm) to (re-)initialize the video in OpenBSD, it boots about halfway through the kernel sequence and then blacks out for a second or two as it has to wait for hardware to settle before drawing through the new output path. With video and the keyboard working, I finally reached the point where I could do development directly on the device, which feels a lot different than remotely poking at something through a serial console.
I've done a lot of little quality-of-life changes, like implementing a US keyboard layout for the non-US model (available with wsconsctl keyboard.encoding=us) and adding gpio(4) support to rkgpio so I can poke individual GPIO pins from userland with gpioctl. This lets me turn the red (gpioctl gpio1 8) and green (gpioctl gpio1 12) LEDs on the side of the device on and off depending on whether the battery is about to die (red) or is charging (green).
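The same pokes can be done from C through gpio(4)'s ioctl interface. This is roughly the programmatic version of gpioctl gpio1 8 on, assuming the pin was already configured (with something like gpioctl gpio1 8 set out at securelevel 0):
#include <sys/types.h>
#include <sys/gpio.h>
#include <sys/ioctl.h>
#include <err.h>
#include <fcntl.h>
/* Turn on the red LED on gpio1 pin 8. */
int
main(void)
{
	struct gpio_pin_op op = { .gp_pin = 8, .gp_value = GPIO_PIN_HIGH };
	int fd;

	if ((fd = open("/dev/gpio1", O_RDWR)) == -1)
		err(1, "open");
	if (ioctl(fd, GPIOPINWRITE, &op) == -1)
		err(1, "GPIOPINWRITE");
	return 0;
}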
One thing that is odd about the DM250 is that the left Alt key and the right Shift key are directly wired up to their own GPIO pins, not going through the TC35894 like every other key. Presumably this is why the recovery sequence that the vendor's U-boot tree looks for is those two keys plus power, so that their U-boot didn't have to implement a TC3589X driver.
Anyway, since those two keys don't go through the keyboard controller, I thought about how to make them work in OpenBSD without a DM250-specific hack or something in my tcmfd driver that had to reach into GPIO land. The existing OpenBSD gpiokeys driver works on armv7 and sees the entries in the device tree:
gpiokeys0 at mainbus0: "Power Button", "Lid Switch", "Right Shift", "Left Alt"
I added an (only-slightly-hackish) hack to it to inject unknown GPIO keys into the console wskbd device's input stream, so anything listening for keyboard input will see left Alt and right Shift as though they came from the same tc35894 device. This means Control+Alt+F# works as expected to change virtual terminals, for example.
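The injection itself is tiny; conceptually it just feeds GPIO-sourced events into the normal keyboard input path. A sketch (the real gpiokeys change is structured differently; kbd_dev here stands for whichever wskbd is the console, and keycode for the code translated from the device-tree entry):
#include <dev/wscons/wsconsio.h>
#include <dev/wscons/wskbdvar.h>
/* Hand a GPIO-sourced key event to the console keyboard's input
 * stream so it looks like any other tc35894 key. */
void
gpiokeys_inject(struct device *kbd_dev, int keycode, int pressed)
{
	wskbd_input(kbd_dev,
	    pressed ? WSCONS_EVENT_KEY_DOWN : WSCONS_EVENT_KEY_UP, keycode);
}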
I still have a laundry list of things I'd like to keep working on, like improving the keyboard driver, implementing some degree of suspend/resume, and supporting the external DMA engine for the MMC controller to speed up eMMC access. Our arm port also doesn't enable multiple processors, but some degree of support seems to be there from when it was imported from NetBSD.
My list of commits is getting quite long so I need to try to upstream as much of this as possible. My last attempts at committing just basic RK3128 support in various drivers were thwarted, so I'm still just hammering out stuff in my own trees for now. If you have a DM250 (non-US for now) and want to try OpenBSD on it, let me know and I can send you installation images and instructions.
Note: I am frequently rebasing and squashing commits in my trees as I improve things, so the commit IDs in the trees linked here may vanish or become obsolete.
OpenBSD 7.9-beta (GENERIC) #134: Mon Mar 23 16:10:06 CDT 2026
jcs@dm250x:/usr/src/sys/arch/armv7/compile/GENERIC
real mem = 1018015744 (970MB)
avail mem = 988418048 (942MB)
random: good seed from bootblocks
mainbus0 at root: Rockchip RK3128 Pomera DM250
cpu0 at mainbus0 mpidr f00: ARM Cortex-A7 r0p5
cpu0: 32KB 32b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu0: 256KB 64b/line 8-way L2 cache
cortex0 at mainbus0
syscon0 at mainbus0: "syscon"
"power-controller" at syscon0 not configured
syscon1 at mainbus0: "qos"
syscon2 at mainbus0: "qos"
syscon3 at mainbus0: "qos"
syscon4 at mainbus0: "qos"
syscon5 at mainbus0: "qos"
syscon6 at mainbus0: "qos"
syscon7 at mainbus0: "qos"
ampintc0 at mainbus0 nirq 160, ncpu 4: "interrupt-controller"
rkclock0 at mainbus0
syscon8 at mainbus0: "syscon"
rkusbphy0 at syscon8: phy 0
rklvds0 at syscon8: LVDS 24-bit JEIDA
rkpinctrl0 at mainbus0: "pinctrl"
rkgpio0 at rkpinctrl0
gpio0 at rkgpio0: 32 pins
rkgpio1 at rkpinctrl0
gpio1 at rkgpio1: 32 pins
rkgpio2 at rkpinctrl0
gpio2 at rkgpio2: 32 pins
rkgpio3 at rkpinctrl0
gpio3 at rkgpio3: 32 pins
rkdrm0 at mainbus0
drm0 at rkdrm0
agtimer0 at mainbus0: 24000 kHz
rkvop0 at mainbus0: RK3126 VOP
dwctwo0 at mainbus0
dwmmc0 at mainbus0: 49 MHz base clock
sdmmc0 at dwmmc0: 4-bit, sd high-speed, mmc high-speed
dwmmc1 at mainbus0: 49 MHz base clock
sdmmc1 at dwmmc1: 4-bit, sd high-speed
dwmmc2 at mainbus0: 49 MHz base clock
sdmmc2 at dwmmc2: 8-bit, mmc high-speed
rklvdsphy0 at mainbus0
dwdog0 at mainbus0
rkpwm0 at mainbus0
com0 at mainbus0: dw16550, 64 byte fifo
bcmbt0 at com0
com1 at mainbus0: dw16550
rkiic0 at mainbus0
iic0 at rkiic0
tcmfd0 at iic0 addr 0x45
wskbd0 at tcmfd0: console keyboard
rkpmic0 at iic0 addr 0x1c: RK818
rkcharger0 at rkpmic0: 4.2V 5800mAh battery
simplebat0 at rkcharger0
gpioleds0 at mainbus0: "pomera:green:power"
gpiokeys0 at mainbus0: "Power Button", "Lid Switch", "Right Shift", "Left Alt"
pwmbl0 at mainbus0
simplepanel0 at mainbus0: 1024x600
rkdrm0: 1024x600, 32bpp
wsdisplay0 at rkdrm0 mux 1: console (std, vt100 emulation), using wskbd0
wsdisplay0: screen 1-5 added (std, vt100 emulation)
usb0 at dwctwo0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "DWC2 DWC2 root hub" rev 2.00/1.00
addr 1
scsibus0 at sdmmc0: 2 targets, initiator 0
sd0 at scsibus0 targ 1 lun 0: <Sandisk, SD32G, 0085> removable
sd0: 30436MB, 512 bytes/sector, 62333952 sectors
scsibus1 at sdmmc2: 2 targets, initiator 0
sd1 at scsibus1 targ 1 lun 0: <Toshiba, 008GB0, 0000> removable
sd1: 7456MB, 512 bytes/sector, 15269888 sectors
bwfm0 at sdmmc1 function 1
manufacturer 0x02d0, product 0xa9a6 at sdmmc1 function 2 not configured
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
bootfile: sd0a:/bsd
boot device: sd0
root on sd1a (717a8af462695010.a) swap on sd1b dump on sd1b
bcmbt0: address 70:4a:0e:df:xx:xx
bwfm0: address 70:4a:0e:df:xx:xx
Court Says Pentagon Can’t Pick And Choose Which News Outlets Have Access [Techdirt] (04:34 , Monday, 23 March 2026)
This was extremely wild shit to be happening anywhere, much less in the land of the First Amendment. No sooner had Donald Trump decided it was time to rename the Department of Defense to the Department of War than the head of DoD operations decided it would be sorting news agencies by level of subservience.
Pretending this was all about national security, the Defense Department basically kicked everyone out of the Pentagon’s press office and stated that only those that chose to play by the new rules would be allowed back inside.
Booted: NBC News, the New York Times, NPR. Welcomed back into the fold: OAN, Newsmax, Breitbart. The Pentagon wanted a state-run press, but without having to do all the heavy lifting that comes with instituting a state-run press in the Land of the Free.
Somewhat surprisingly, some of those explicitly invited to partake of the new Defense Department media wing refused to participate. Fox and Newsmax decided to stay out, rather than promise they’d never publish leaked documents. Those choosing to bend the knee were those who never needed this sort of coercion in the first place: One America News (OAN), The Federalist, and far-right weirdos, the Epoch Times. In other words, MAGA-heavy breathers that have never been known for their independence, much less their journalism.
That didn’t stop Hegseth and the department he’s mismanaging from attempting to take a victory lap. And it certainly didn’t stop news agencies like the New York Times from suing over this blatant violation of the First Amendment.
It’s so obvious it only took the NYT four months to secure a win in a federal court (DC) that is positively swamped with litigation generated by Trump’s swamp. (h/t Adam Klasfield)
The decision [PDF] makes it clear in the opening paragraph how this is going to go for the administration and its extremely selective “respect” of enshrined rights and freedoms.
A primary purpose of the First Amendment is to enable the press to publish what it will and the public to read what it chooses, free of any official proscription. Those who drafted the First Amendment believed that the nation’s security requires a free press and an informed people and that such security is endangered by governmental suppression of political speech. That principle has preserved the nation’s security for almost 250 years. It must not be abandoned now.
Amen.
The court notes that in the past, there has been some friction between national security concerns and reporting by journalists. In some cases, the friction has been little more than the government chafing a bit when something has been published that it would rather have kept a secret. In other cases, leaks involving sensitive information have provoked reform efforts on both sides of the equation, seeking to balance these concerns with serving the public interest.
Up until now, any efforts to expel reporters have been limited to backroom bitching. What’s happening now, however, is unprecedented.
Historically, though, even when Department leaders disliked a journalist’s reporting, they did not consider suspending, revoking, or not renewing the journalist’s press credentials in response to that reporting. Julian Barnes, Pete Williams, and Robert Burns—reporters who have spent decades covering the Pentagon—as well as former Pentagon officials, are not aware of the Department ever suspending, revoking, or not renewing a journalist’s credentials due to concern over the safety or security of Department personnel or property or based on the content of their reporting.
This may be new, but the court isn’t willing to make it the “new normal.” It’s the decades of precedent that truly matter, not the vindictive whims of the overgrown toddlers currently holding office.
The Pentagon claims that demanding journalists agree not to “solicit,” much less print data or information not explicitly approved for release by the Defense Department doesn’t reach any further than existing laws governing the handling of classified documents. The court disagrees, noting that the new policy allows the government to conflate the illegal solicitation of classified material with the sort of soliciting — i.e., requests for information, etc. — journalists do every day in hopes of securing something newsworthy.
On top of allowing the government to punish people for things that weren’t previously considered unlawful, the demand for obeisance wasn’t created in a vacuum. Instead, it flowed directly from this entire administration’s constant attacks on the press by the president and pretty much everyone in his Cabinet.
The plaintiffs are correct: “The record is replete with undisputed evidence that the Policy is viewpoint discriminatory.” That evidence tells the story of a Department whose leadership has been and continues to be openly hostile to the “mainstream media” whose reporting it views as unfavorable, but receptive to outlets that have expressed “support for the Trump administration in the past.”
The story begins prior to the adoption of the Policy, when—following extensive reporting on Secretary Hegseth’s background and qualifications during his confirmation process—Secretary Hegseth and Department officials “openly complained about reporting they perceive[d] as unfavorable to them and the Department.” Then, in the weeks and months leading up to the issuance of the Policy, Department officials repeatedly condemned certain news organizations—including The Times—for their coverage of the Department. For example, in response to reporting by The Times on Secretary Hegseth’s alleged misuse of the messaging platform Signal, Mr. Parnell posted on X to call out The Times “and all other Fake News that repeat their garbage.” Mr. Parnell decried these news organizations as “Trump-hating media” who “continue[] to be obsessed with destroying anyone committed to President Trump’s agenda.” In other social media posts leading up to the issuance of the Policy, Department officials referred to journalists from The Washington Post as “scum” and called for their “severe punishment” in response to reporting on Secretary Hegseth’s security detail.
It was never about keeping loose lips from sinking ships. It was always about cutting off access to news agencies the administration didn’t like. And once you’ve gotten rid of the critics, you’re left with the functional equivalent of a state-run media, but without the nastiness of having to disappear people into concentration camps or usher them out of their cubicles at gunpoint.
The court won’t let this stand. The new policy violates both the First Amendment and Fifth Amendment (due to the vagueness of its ban on “soliciting” sensitive information). That’s never been acceptable before in this nation. Just because there’s an aspiring tyrant leaning heavily on the Resolute Desk these days doesn’t make it any more permissible.
The Court recognizes that national security must be protected, the security of our troops must be protected, and war plans must be protected. But especially in light of the country’s recent incursion into Venezuela and its ongoing war with Iran, it is more important than ever that the public have access to information from a variety of perspectives about what its government is doing—so that the public can support government policies, if it wants to support them; protest, if it wants to protest; and decide based on full, complete, and open information who they are going to vote for in the next election. As Justice Brandeis correctly observed, “sunlight is the most powerful of all disinfectants.”
The administration will definitely appeal this decision. And it almost definitely will try to bypass the DC Appeals Court and go straight to the Supreme Court by claiming not being able to expel reporters it doesn’t like is some sort of national emergency. It will probably even claim that the fight it picked in Iran justifies the actions it took months before it decided to involve us in the nation’s latest Afghanistan/Vietnam.
But it definitely shouldn’t win. This isn’t some obscure permutation of First Amendment law. This is the government crafting a policy that allows it to decide what gets to be printed and who gets to print it. That’s never been acceptable here. And it never should be.
An inside look at the life of a student photographer during the spring semester [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (03:00 , Monday, 23 March 2026)
For a graduating senior at Virginia Tech, the spring semester is a season filled with profound emotions. The dwindling of time becomes less of an afterthought and more of a constant presence. Conversations shift from summer getaways to postgraduate life.…
Upgrade go-to campus meals with hacks that make every bite better [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (01:00 , Monday, 23 March 2026)
As renowned as Virginia Tech’s dining halls are, eating the same meals time and time again can get mundane. With so many options available on campus, there is room for variety and even imagination.
Omri Piko Kahan Turns Old Bike Frames into Custom Furniture [BIKEPACKING.com] (12:35 , Monday, 23 March 2026)
Industrial designer and bicycle lover Omri Piko Kahan’s latest passion project transforms reclaimed bike frames into one-of-a-kind creations with a distinctive blend of form and function. Take a quick look here...
The post Omri Piko Kahan Turns Old Bike Frames into Custom Furniture appeared first on BIKEPACKING.com.
Farewell to a Friend – A One Shot Story [35mmc] (12:00 , Monday, 23 March 2026)
A recent spring vacation in Europe with our daughter ended in sadness with the death, back home, of our beloved dog Milo at age 14. He had been gradually failing over the past month, so much so that we considered postponing our trip. (It was the same trip we had already rescheduled a year back...
The post Farewell to a Friend – A One Shot Story appeared first on 35mmc.
The historical Mighty Midget set to return to downtown Leesburg in 2026 [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (12:00 , Monday, 23 March 2026)
For decades, the corner where Leesburg, Virginia’s beloved Mighty Midget once served the community sat empty — until now.
The 2026 Sklar PBJ Colors are Crisp and Cool [BIKEPACKING.com] (10:08 , Monday, 23 March 2026)
Adam Sklar of Sklar Bikes in California just announced two new colors for his dirt-focused rigid model, the PBJ. Take a peek at these fresh tones below...
The post The 2026 Sklar PBJ Colors are Crisp and Cool appeared first on BIKEPACKING.com.
Weekend Snapshot [BIKEPACKING.com] (09:54 , Monday, 23 March 2026)
From the mountains of California to the rolling hills of Mid Wales and the gravel roads of Florida, this installment of Weekend Snapshot finds readers out making the most of their free time on getaways close to home. Browse the latest community-sourced scenes and share a shot from one of your trips here...
The post Weekend Snapshot appeared first on BIKEPACKING.com.
Project XL Double Bacon Cheesebrother is a Tall Fat Bike Made for the Apocalypse [BIKEPACKING.com] (09:35 , Monday, 23 March 2026)
Sentient Works has made a new bike for this year’s Bespoked Apocalypse Build-Off. Following their practically minded showstopper from last year’s event comes a Brother Cycles collaboration tall fat bike. For more on their new build, watch the full video below…
The post Project XL Double Bacon Cheesebrother is a Tall Fat Bike Made for the Apocalypse appeared first on BIKEPACKING.com.
The Pancake Discussion [Tedium] (09:17 , Monday, 23 March 2026)

Pancakes are not my favorite thing to make. They require me to make a messy, gloppy mixture of wheat, milk, and eggs. They come out imperfectly every time. And when you’re done with them, you’ve created a bunch of heavy, saggy discs. (However, not floppy disks.)
But they can be made quickly, and by the thousands. There’s a reason why greasy spoons the world over specialize in pancakes: Anyone can make them, and they can do so quickly, without too much thought.
But they leave a hell of a mess behind. (Especially after the syrup gets involved. God, the syrup. There’s so much of it, and little of it actually gets sopped up by the bread discs you made yourself.)
Sure, you can automate the process—I hear there are frozen pancakes, in case you like your frisbees to melt into food—but nothing is quite like making pancakes yourself.
Just one problem. When everyone does it, all pancakes look the same, they’re greasy, eating them makes you tired and bloated, and it’s hard not to want to grab a yogurt or something instead.
My wife loves them though, so I make them frequently.
Ever wanted to read Tedium without having those annoying ads all over the site? We have just the plan for you. Sign up for a $3 monthly membership on our Ko-Fi, and we promise we can get rid of them. We have the technology. And it beats an ad blocker. (Web-only for now, email coming soon!)
This is a pretty good metaphor for why some describe the discussion on social networks as being flat at times. AI is the natural example of this: It’s either love it or hate it. (And don’t take that to mean I want a bunch of pro-AI content in my feed. It’s very possible to argue against AI really well, as Ed Zitron frequently does in some of the longest blog posts I’ve ever seen.)
Politics are another, and that often leads to the most polarized takes dominating the discussion. Nuanced takes are hard to come by, and if you do make one, it’s most certainly going to be drowned out by every other pancake in the stack.
Every take has a beginning and end, and then you throw another one on the griddle. Most end up a little burnt. Occasionally one slides off the pan, as a work of art.
But most of the time, pancakes fall over themselves, one flat discussion point after another. It’s easier to spit out a ten-word takedown of someone’s bad take than to offer real nuance as a discussion point.
Which is why I love blogs. Rather than offering up little discs of information that can be created quickly and digested slowly, you can spend as long or as short a time as you want on them. You can put a mere 30 minutes into them; you can put in 30 days. You can do as much or as little research as you want, and you can lay out an argument with a far different shape than your average pancake.
In many ways, that’s kind of why I’m keeping an eye on the AT Protocol, which is just starting to get good. That will help to make room for more colors and shapes than the 300ish characters you see on Bluesky.
Recently, I’ve found myself clicking through Kagi’s Small Web interface. It’s effectively the same concept as Stumbleupon, except more focused on helping you find interesting voices and actual blogs.
I was excited when I found it.
They lead to the kinds of posts that social media would never let go viral on their own but are nonetheless super-interesting. A few examples I found with a little searching:
There are gaps. I did not see a lot of women or much in the way of Black culture in these posts, for example. (I did spot a post titled “6 Ways to Make the Cheese World More Inclusive,” but that’s only after I narrowed in on food.) Many authors kind of looked like me—a middle-aged white dude who has been blogging half his life. That’s super-unfortunate, and I wonder if the push towards social platforms has meant that blogs have lost some of their diversity as folks have moved elsewhere. (But it could also be an effect of its sourcing: Kagi built its initial lists from a number of Hacker News threads, among other places. Fortunately, anyone can add to it via its GitHub page.)
And while it doesn’t put it front and center like Google does, Kagi is not afraid of AI, and it’s clear that the skepticism about it that permeates some social media platforms doesn’t necessarily extend to the blogs.
But Kagi essentially revived blog discovery by bringing back an old idea in a new way. I hope it becomes huge.
I can tell that the interest in a more primitive form of communication is coming back. My RSS subscriber count recently passed my newsletter subscriber count for the first time—in part because someone put me on a list somewhere and that list went viral.
That’s honestly the kind of thing I’ve been waiting to happen for a long time. I wrote a post about why I wanted blogging to come back seven freaking years ago.
But strangely, I find myself struggling to post as much as I used to.
I’ve been trying to figure out why blogging, a medium I absolutely adore and that I’ve dedicated much of my adult life to, has felt so tough to do over the last six months or so. I think the answer, as far as I see it, is that my loosey-goosey experiment in writing when I feel like it has failed.
It’s not that I don’t feel like it. It’s that it’s too easy to let every other pancake fall on top of the thing I actually care about. The result is a lopsided plate, and I often feel too overwhelmed to do the thing I originally set out to do.
With that in mind, it is my hope that I can re-commit to this thing I love by making a pledge I hope to stick to: Tuesday and Thursday evenings, twice a week, starting in April. That’s where Tedium started in 2015, and that’s where it should end back up. If I have something ready to go, I’m going to have to sit on it for a couple of days. If I don’t, I’m going to have to live with that pressure. I’m the kind of guy who works best with a deadline. My problem is that, by removing that deadline, I find it easier to let other things dominate.
And that might mean that I post less on other platforms, where I’m just filling up on pancakes anyway. I’ll still post on Bluesky and Mastodon, but it needs to not be the first place I look in the morning.
Maybe I can click through Kagi’s Small Web thing for inspiration instead. As far as I can tell, it’s serving up more than pancakes.
The new SNL UK could have been embarrassing, but considering it is a rare example of a U.S. comedy phenomenon hitting British shores (rather than the other way around), I found it solid. This review sums it up for me.
It’s not often that a new video of Steve Jobs surfaces, but this one, from 1999 around the launch of the original iBook, is nice because it captures a version of Jobs talking not to the public, but to his team.
Marc Andreessen, a man who could have stopped working in 1998, claims that he doesn’t get introspective. That explains a lot.
--
Find this one an interesting read? Share it with a pal! And back at it soon—thanks again!
And thanks to our pals at la machine, a device that is very much not a pancake.
The 2026 Brother Cycles Pinecone and Mehteh Colors are Stunning [BIKEPACKING.com] (09:17 , Monday, 23 March 2026)
Brother Cycles has announced three new colors for two of its frames, the Pinecone and the Mehteh. With bold yellows, blues, purples, and deep blacks, it's a refreshing and fun reset for both new and much-loved dirt-ready models. Learn more about the new colors below…
The post The 2026 Brother Cycles Pinecone and Mehteh Colors are Stunning appeared first on BIKEPACKING.com.
El Mundo Paramo: Amongst Frailejones and Old Renaults [BIKEPACKING.com] (07:36 , Monday, 23 March 2026)
As a follow-up to their last Colombian journal entry, Cass and Emma head north of Bogotá to ride two of the site’s popular routes, Páramos Conexión and Oh Boyacá! After fabulous encounters and 35,000 meters of climbing, they wonder to themselves, “Can anywhere be as good as Colombia for dirt road, mountain touring?” Find out here...
The post El Mundo Paramo: Amongst Frailejones and Old Renaults appeared first on BIKEPACKING.com.
Leica D-Lux 8 in India – Part One [35mmc] (06:00 , Monday, 23 March 2026)
The Leica D-Lux 8 has been reviewed to death, so this is not a review. It’s personal observations about how I came to buy this camera, how I didn’t bond with it at first, and how it performed in the real-life scenario of functioning as my travel camera on a trip to India. It’s a...
The post Leica D-Lux 8 in India – Part One appeared first on 35mmc.
Sunset state of mind: The best views in Blacksburg [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (07:00 , Sunday, 22 March 2026)
With spring fast approaching, days are getting longer and the sky is getting clearer each week. As the flowers present their colorful petals, so do opportunities to bask in nature’s beauty. In lieu of the busy middle weeks of the…
A love letter to textured hair [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (06:00 , Sunday, 22 March 2026)
Curly hair represents much more than most realize. It represents heritage, identity and history. However, all of that can quickly be erased by the social pressure to straighten your hair.
Paywalls and physical media [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (02:37 , Sunday, 22 March 2026)
The world we live in is so complicated with how news occurs and spreads in the blink of an eye. So much information is at our fingertips. Especially now, there is an emphasis on staying informed and up to date…
A life measured in relationships [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (02:34 , Sunday, 22 March 2026)
What does it mean to live a life well-lived? The answer relies on how we prioritize our time and values in life. These choices ultimately shape our sense of purpose and the direction our lives take.
Virginia Tech celebrates 26th annual Graduate Education Week [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (02:21 , Sunday, 22 March 2026)
From March 23 to 27, Virginia Tech will host Graduate Education Week to honor approximately 6,500 graduates. Events will be held on Virginia Tech’s Blacksburg campus and in Washington, D.C.
CLAHS Hokie Trek takes students to NBC 4 news station [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (01:43 , Sunday, 22 March 2026)
During spring break, a group of College of Liberal Arts and Human Sciences students visited the NBC4 News station in Washington, D.C. on one of the Hokie Treks.
Two Steps Forward, One Step Back (Film) [BIKEPACKING.com] (08:06 , Sunday, 22 March 2026)
“Two Steps Forward, One Step Back” follows Cynthia Carson’s tumultuous journey to the 2025 Transcontinental Race and attempts to unpack the meaning of the unanticipated obstacles we all face at one point or another. It also serves as a reminder that progress, in sport and in life, is rarely a straight line. Watch the film and read a reflection from Cynthia here...
The post Two Steps Forward, One Step Back (Film) appeared first on BIKEPACKING.com.
Contax iia – first roll and how I ended up with a couple classic cameras [35mmc] (06:00 , Sunday, 22 March 2026)
As the only photographer many of my friends know, they have sometimes offered me their families’ no-longer-wanted old cameras, usually another worn Brownie of odd lineage and quality. I have a couple already and don’t need more. Over the last couple years as I’ve been winding down my digital-based career, my interest...
The post Contax iia – first roll and how I ended up with a couple classic cameras appeared first on 35mmc.
Adrien’s Africa End-to-End Update [Rene Herse Cycles] (03:00 , Sunday, 22 March 2026)
We’ve been planning to do an update of Adrien Liechti’s Africa End-to-End record attempt. But just after he sent us photos and stories, his tracker stopped in Cameroon. For almost two weeks, we had no news from Adrien. Needless to say, we were worried, and so were his many friends and followers. Fortunately, he resurfaced, but he couldn’t give details about what happened until he’d left Cameroon—and then he didn’t have any Internet for days while cycling in the Congo. Here is Adrien’s story:

“Finally I can reply. Basically, what happened is this: I took a video of a bridge that was considered a sensitive structure. I was unaware that I was in a restricted area. I was arrested by the military. They erased the video clip—I was fine with that, of course—but then they still decided to incarcerate me in a converted performance hall in Yaoundé. I was not allowed access to my belongings, nor my phone. I was not able to contact the Swiss embassy, nor my family and friends. I was not questioned per se, but I was put under, let’s say, a lot of pressure. The make-shift prison had 29 inmates in a space of just 25 square meters (270 sq ft).
“Fortunately, my tracker had remained on. A touring cyclist found me and then provided my location to the Swiss embassy. After that, things went very quickly, and I was released.
“While imprisoned, I contracted malaria, but I decided to continue and not lose sight of my goal. Today, I’m cured, and now I’m in Angola. However, my tracker was stolen, so I use my cell phone for tracking whenever I can. It’s a little less accurate, but it does the job.

“You probably want to know about my tires, too. After 7,500 km, I put a new Poteau Mountain 700×48 on the rear. I’m carrying the old one with me, and I’ll put it on the front when that tire is wearing out. I’m now almost 13,000 km into my ride, and that’s the front tire after all those kilometers in the photo above. I think my tires should last the 5,000 km until the finish. No flats to report so far. During the first 35 days, I only added air once to my tubeless setup.”
Thanks for the update, Adrien! We admire your spirit and positive outlook. And those tires show the hard life they’ve lived! We’re glad they’ve been working well for you.
Before Adrien’s misadventure in Cameroon, we asked him to send some photos of the roads he’s been encountering in central Africa, and we want to share those with our readers.

African roads vary greatly. Modern cities like Abidjan, Ivory Coast’s largest city and roughly the half-way point of Adrien’s journey, feature well-maintained roads.

Away from the cities, there are new highways at times…

…but it often doesn’t take long for the pavement to deteriorate.

Gravel roads can be (relatively) smooth…

…or rough and rutted.

Crossing bridges can be an adventure.

Apart from the roads, there’s also the local wildlife to consider.

Road conditions are a challenge for all traffic. The motto on the truck’s side—’No condition is permanent’—is a good philosophy for traveling in Africa.

When asked why he’s continuing after his difficult journey, Adrien replied that it’s because of the people he’s meeting. He wrote: “The encounters I have along the way are what drive this journey. Even though this is a fully self-supported adventure, the truth is that without the people I meet on the road, none of this would be possible.”
More information:
Virginia Tech falls to Oregon in first round of NCAA Tournament [www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection] (02:18 , Saturday, 21 March 2026)
As the clock struck midnight on Virginia Tech’s 2025-26 season, Hokies guard Mackenzie Nelson offered a blunt assessment — “I just don’t think we came ready to play.”
Bikepacking Bellamarin with a 35mm Film Camera (Video) [BIKEPACKING.com] (09:45 , Saturday, 21 March 2026)
Cam Cope's new YouTube channel, Negative Gradient, is his outlet to combine cycling adventures with photography. His first video showcases a two-day bikepacking trip on Bellamarin (French Island), Australia, just two hours from Melbourne. Find the video, his route, some words, and a collection of 35mm photos he captured here...
The post Bikepacking Bellamarin with a 35mm Film Camera (Video) appeared first on BIKEPACKING.com.
Trying to capture traditional Albania on Silbersalz35 film – Part 2 [35mmc] (06:00 , Saturday, 21 March 2026)
This is the second part of my images from Albania from August/September 2025 with my Minolta SRT-101 on Silbersalz35 (250D) film. The first part can be found here. Because of bad travel planning, we only visited the World Heritage town of Gjirokastra for one day. Gjirokastra has a very worthwhile old town with numerous magnificent...
The post Trying to capture traditional Albania on Silbersalz35 film – Part 2 appeared first on 35mmc.
Widely used Trivy scanner compromised in ongoing supply-chain attack [Biz & IT - Ars Technica] (04:50 , Friday, 20 March 2026)
Hackers have compromised virtually all versions of Aqua Security’s widely used Trivy vulnerability scanner in an ongoing supply-chain attack that could have wide-ranging consequences for developers and the organizations that use the scanner.
Trivy maintainer Itay Shakury confirmed the compromise on Friday, following rumors and a discussion thread about the incident that the attackers have since deleted. The attack began in the early hours of Thursday. When it was done, the threat actor had used stolen credentials to force-push all but one of the trivy-action tags, along with seven setup-trivy tags, so that they pulled in malicious dependencies.
A force push is a git command that overrides a default safety mechanism protecting against overwriting existing commits. Trivy is a vulnerability scanner that developers use to detect vulnerabilities and inadvertently hardcoded authentication secrets in pipelines for building and deploying software. The scanner has 33,200 stars on GitHub, a high count that indicates it’s widely used.
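For readers who haven’t run into it, here is a minimal sketch of the difference between a normal push and a force push (the remote and tag names are hypothetical, not taken from the actual attack):

```sh
# Pushing a tag that already exists on the remote is normally rejected,
# because git refuses to overwrite an existing ref:
git push origin v1.0
#  ! [rejected]  v1.0 -> v1.0 (already exists)

# --force overrides that safety check and repoints the tag at whatever
# commit the pusher chooses -- here, one pulling in malicious dependencies:
git push --force origin v1.0
```

Anyone who pins a GitHub Action or dependency by tag rather than by commit hash silently picks up the new target on their next run, which is part of what makes this class of attack so effective.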
A Composition’s Dilemma in Black and White – One Shot Story [35mmc] (12:00 , Friday, 20 March 2026)
This photo, taken in the centre of Padua, with my Nikon 35TI and a Ferrania P30, shows a composition’s dilemma. The idea was to explore the usual technique of ‘framing’ a subject within an architectural structure to make it resemble a painting. Actually, though, the main subject, the biker and his vehicle, is more of...
The post A Composition’s Dilemma in Black and White – One Shot Story appeared first on 35mmc.
Bikes, Booths, and Builders in Philly (Part 3) [BIKEPACKING.com] (11:54 , Friday, 20 March 2026)
In our third and final installment of Bikes, Booths, and Builders in Philly, Nic dives into the winners of the People's Choice award, shares an array of interesting new products from Velo Orange, and highlights a handful of additional handmade bikes. Browse the last of his findings below...
The post Bikes, Booths, and Builders in Philly (Part 3) appeared first on BIKEPACKING.com.
The Coolest Stuff at Philly Bike Expo 2026 [Velo Orange - The Velo Orange Blog] (10:58 , Friday, 20 March 2026)
Another Philly Bike Expo is in the books, and it never disappoints. It's one of our favorite shows, packed with energy and good vibes. The crowd is always a mix of passionate cyclists, curious gearheads, and folks who just love bikes — exactly the kind of people we love talking to. This year was no exception. From eye-catching builds to clever components, there was no shortage of inspiration.
Be sure to check out the gallery at the end for more photos — but first, here’s a rundown of the bikes and gear that really caught my eye!

Ok, for all you lug-lickers, this is the crown jewel: the Royal H "Baines-style Time Trial Frame". A take on the quintessentially British TT bike, it features a super-tucked-in rear wheel and a standard front end.

It makes for a very responsive and planted rear wheel while maintaining the comfort and handling of a "regular" bike.

I rode one a long time ago and it felt good! The effort of pushing down on the pedals feels normal yet responsive, without the whippiness of early TT bikes. I dig it.

Best part? There are two of them! This blue one has our 50.4 Crankset and Retro Bottle Cages. The Candy Striping on the seat tube is phenomenal.
Friday Debrief: Tiny Mountain Bikes, Blue Monday Surly, Brands Making Moves, and More… [BIKEPACKING.com] (10:02 , Friday, 20 March 2026)
This week’s Debrief features the new Orbea MTB lineup for kids, Revelate Ultra Joeys, Curve's new HQ in Spain, wide alloy rims from Reynolds, the Surly Lowside in Blue Monday, an event to follow live, and much more. Find it all here…
The post Friday Debrief: Tiny Mountain Bikes, Blue Monday Surly, Brands Making Moves, and More… appeared first on BIKEPACKING.com.
Reader’s Rig: Kobkit’s Surly Straggler [BIKEPACKING.com] (09:17 , Friday, 20 March 2026)
This week's Reader's Rig comes from Kobkit in Bangkok, Thailand, who shares a surprising Surly Straggler build that visually summarizes his journey through the world of bikes. Learn a little about Kobkit and scope out his unconventional Straggler here...
The post Reader’s Rig: Kobkit’s Surly Straggler appeared first on BIKEPACKING.com.
The Old Man Mountain Manzanita Cradle is an All-in-One Handlebar Bag System [BIKEPACKING.com] (09:01 , Friday, 20 March 2026)
Designed in partnership with Salsa Cycles, the new Old Man Mountain Manzanita Cradle takes the Anything Cradle to a new level thanks to a number of new features, including a matching side-load or top-load dry bag. Take a closer look here...
The post The Old Man Mountain Manzanita Cradle is an All-in-One Handlebar Bag System appeared first on BIKEPACKING.com.
The Apidura Expedition Series Brings Big Updates and New Bags [BIKEPACKING.com] (08:45 , Friday, 20 March 2026)
A decade after developing their first welded, waterproof bikepacking packs, the team at Apidura just unveiled their new Expedition Series bags. For more information on this new line, explore all the details below...
The post The Apidura Expedition Series Brings Big Updates and New Bags appeared first on BIKEPACKING.com.
Stagecoach 400 Documentary Series, Episode 3: The Desert Changes Everything [BIKEPACKING.com] (07:02 , Friday, 20 March 2026)
In the third episode of the Stagecoach 400 Documentary Series, filmmaker Gregg Dunham follows riders as they spin away from the bright lights of San Diego and head deep into the harsh, unforgiving desert. Join riders as they push, hike, and pedal through fatigue, traversing a stretch of the route where it’s often simpler to keep going than to quit…
The post Stagecoach 400 Documentary Series, Episode 3: The Desert Changes Everything appeared first on BIKEPACKING.com.
Olympus IS-5000 – the Bridge to Digital [35mmc] (06:00 , Friday, 20 March 2026)
Olympus IS-5000 – the bridge to digital.
The post Olympus IS-5000 – the Bridge to Digital appeared first on 35mmc.
Pledge changes in 7.9-beta [OpenBSD Journal] (04:53 , Friday, 20 March 2026)
David Leadbeater (dgl@) posted a message to ports@, entitled Pledge changes in 7.9-beta, which explains the consequences for porters of the recent pledge(2)/unveil(2) changes in -current (and, to some extent, 7.8). Whilst targeted at porters, it provides a good overview for anyone interested in the changes.
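For readers who haven’t used the interface, pledge(2) restricts a process to a declared set of promises. Here is a minimal, generic usage sketch; it shows the long-standing API, not the specific 7.9-beta changes covered in the message:

```c
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	/* From here on, only stdio-style operations and read-only
	 * filesystem access are permitted; any other system call
	 * terminates the process. */
	if (pledge("stdio rpath", NULL) == -1)
		err(1, "pledge");

	FILE *f = fopen("/etc/passwd", "r");	/* allowed by "rpath" */
	if (f != NULL)
		fclose(f);

	puts("still running under pledge");	/* allowed by "stdio" */
	return 0;
}
```

Ports need attention whenever the promise sets or their semantics change, because a program pledged under the old rules may suddenly trip over the new ones.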
Cloud service providers ask EU regulator to reinstate VMware partner program [Biz & IT - Ars Technica] (05:29 , Thursday, 19 March 2026)
A trade association of cloud service providers (CSPs) filed an antitrust complaint today with the European Commission (EC) over Broadcom's shuttering of VMware’s CSP partner program this year.
Since Broadcom bought VMware, it has drastically cut the number of channel partners VMware works with, a shift that began with the elimination of VMware’s partner program. Broadcom replaced the program with an invite-only alternative that favors larger partners working with enterprise-size clients rather than small-to-medium-size businesses.
There are even fewer CSP partners working with VMware today. Broadcom introduced a requirement that CSP partners operate at least 3,500 cores, rendering hundreds of CSPs ineligible for partnership. Before Broadcom bought VMware, the virtualization company had over 4,000 CSP partners, per a February 2024 report from The Register. Today, VMware has 19 CSP partners in the US and about nine in the United Kingdom, The Register reported.
The New Forbidden Reya is for Downcountry [BIKEPACKING.com] (12:00 , Thursday, 19 March 2026)
The new Forbidden Reya is a full-suspension downcountry bike designed for big days on backcountry trails, cross-country tech, and everything in between. And you won't find an idler or a high pivot, a first for Forbidden. Learn more about the Reya here...
The post The New Forbidden Reya is for Downcountry appeared first on BIKEPACKING.com.
How my family lost their birthright and fortune – a one shot story [35mmc] (12:00 , Thursday, 19 March 2026)
Before diving into the story, a brief explanation of what this – wholly unremarkable – image portrays. This is a London “mews house”, a type of property built by/for wealthy, quite likely aristocratic, families (you’re familiar with Downton Abbey, right?) to house their transportation and associated servants. Whereas the domestic staff tended to live...
The post How my family lost their birthright and fortune – a one shot story appeared first on 35mmc.
The BTCHN’ Bikes Alpina is a 32er with a Super Boost Rear Hub [BIKEPACKING.com] (10:28 , Thursday, 19 March 2026)
BTCHN' Bikes in California is the latest maker to offer its take on the 32-inch mountain bike. The new BTCHN’ Alpina was designed around the idea that composure creates speed, and it features a 120mm travel fork, 32" wheels, and a super boost rear hub. Check it out here...
The post The BTCHN’ Bikes Alpina is a 32er with a Super Boost Rear Hub appeared first on BIKEPACKING.com.
30 Days Solo Bikepacking Across Incredible Peru (Video) [BIKEPACKING.com] (09:41 , Thursday, 19 March 2026)
In the 33rd episode detailing his ride from Alaska to Argentina, Dan Camp reports in with a video highlighting his first month of riding across Peru. See photos and experience 600 miles of travel along Peruvian dirt roads in the 30-minute video here...
The post 30 Days Solo Bikepacking Across Incredible Peru (Video) appeared first on BIKEPACKING.com.
Rob English on Building a Stunning 32-Inch Mountain Bike [BIKEPACKING.com] (09:24 , Thursday, 19 March 2026)
Rob English of English Cycles in Oregon recently finished up a stunning 32-inch mountain bike that is equal parts prototype and work of art. Eager to try the new wheel size for himself, Rob designed and built a rigid MTB with a matching truss fork. Find a detailed write-up from Rob, photos of the bike, and a complete build kit here...
The post Rob English on Building a Stunning 32-Inch Mountain Bike appeared first on BIKEPACKING.com.
The 2026 Otso Fenrir Ti is the Same Great Two-Headed Beast [BIKEPACKING.com] (09:00 , Thursday, 19 March 2026)
There’s an updated version of the Fenrir Ti in Otso's lineup for 2026. With refreshed graphics and UDH compatibility, this versatile, dirt-ready platform is the same great bike we reviewed a few years back. Find full details on the 2026 Fenrir Ti below...
The post The 2026 Otso Fenrir Ti is the Same Great Two-Headed Beast appeared first on BIKEPACKING.com.
PF queues break the 4 Gbps barrier [OpenBSD Journal] (08:58 , Thursday, 19 March 2026)
OpenBSD's PF packet filter has long supported HFSC traffic shaping with the queue rules in pf.conf(5). However, an internal 32-bit limitation in the HFSC service curve structure (struct hfsc_sc) meant that bandwidth values were silently capped at approximately 4.29 Gbps, the maximum value of a u_int.
With 10G, 25G, and 100G network interfaces now commonplace, OpenBSD developers making huge progress unlocking the kernel for SMP, and drivers being added for cards that support some of these speeds, this limitation started to get in the way. Configuring bandwidth 10G on a queue would silently wrap around, producing incorrect and unpredictable scheduling behaviour.
A new patch widens the bandwidth fields in the kernel's HFSC scheduler from 32-bit to 64-bit integers, removing this bottleneck entirely. The diff also fixes a pre-existing display bug in pftop(1) where bandwidth values above 4 Gbps would be shown incorrectly.
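The arithmetic behind the silent wrap-around is easy to demonstrate. Here is a small stand-alone sketch (assuming, as on OpenBSD, a 32-bit u_int; this is an illustration, not the kernel code):

```c
#include <stdio.h>

int
main(void)
{
	/* 10 Gbps expressed in bits per second. */
	unsigned long long requested = 10000000000ULL;

	/* Storing it in a 32-bit unsigned field reduces it modulo 2^32,
	 * the same truncation the old struct hfsc_sc suffered. */
	unsigned int stored = (unsigned int)requested;

	printf("requested: %llu bps\n", requested);	/* 10000000000 */
	printf("stored:    %u bps\n", stored);		/* 1410065408 */
	return 0;
}
```

A queue configured with bandwidth 10G was thus actually scheduled at roughly 1.41 Gbps, with no error or warning, which is exactly the incorrect and unpredictable behaviour the patch eliminates.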
32-Inch Tires: What’s Available and What’s to Come [BIKEPACKING.com] (07:30 , Thursday, 19 March 2026)
The floodgates have opened, and bicycle brands and framebuilders have been hard at work designing bikes around the latest and largest wheel size: 32 inches. What at first felt like […]
The post 32-Inch Tires: What’s Available and What’s to Come appeared first on BIKEPACKING.com.
Natalie Peet Takes Overall Women’s Win at DOOM 2026 [BIKEPACKING.com] (05:23 , Wednesday, 18 March 2026)
A massive congratulations goes out to Natalie Peet for becoming the first woman to finish the 2026 DOOM event in Arkansas this week. Natalie finished the 330-mile route in just 2 days, 4 hours, and 26 minutes. Find a written recap and photos from Aaron Arnzen with additional photos from Kai Caddy here…
The post Natalie Peet Takes Overall Women’s Win at DOOM 2026 appeared first on BIKEPACKING.com.
Seeking Shade at the 2026 Queen’s Ransom [BIKEPACKING.com] (10:24 , Wednesday, 18 March 2026)
Twenty-eight riders, including nine women, showed up for the 2026 Queen's Ransom group ride last month, all set on tackling the challenging 230-mile route around Phoenix, Arizona. Kara Woolgar put together a detailed reflection from the weekend, paired with loads of photos from the group. Find it here...
The post Seeking Shade at the 2026 Queen’s Ransom appeared first on BIKEPACKING.com.
“We are Cyclists” Celebrates Community, Representation, and the Joy of Riding (Video) [BIKEPACKING.com] (09:53 , Wednesday, 18 March 2026)
“We are Cyclists” is a new video from Shimano that follows Marley Blonsky and Kailey Kornhauser as they attend cycling events across the country, spreading their message of inclusivity and joy. Watch it below…
The post “We are Cyclists” Celebrates Community, Representation, and the Joy of Riding (Video) appeared first on BIKEPACKING.com.
Don’t Buy What You Don’t Need! [Rene Herse Cycles] (04:12 , Tuesday, 17 March 2026)
Here at Rene Herse Cycles, we don’t want to sell you what you don’t need! Our mission is to create tires, components and accessories that enhance your cycling experience. We’re confident we offer things that you, our customers, find valuable. There’s no need to try and sell you things you don’t need. Here’s what that looks like in practice:

We’ve all been there: We need a small part, but it’s only available as part of a complete set. Or a component is rebuildable in theory, but in practice you can’t get the spares you need to do so. We’ve been there, and we don’t like it! That’s why we offer spare parts for almost everything we sell.
In the unlikely event that a crankarm gets damaged in a crash, there’s no need to buy a new crankset: We offer individual arms at reasonable prices.
If you crack a fender, you can get a spare set of blades—no need to buy all the hardware again. Conversely, if you need an additional fender stay for a special installation, or if you lost a stay eyebolt, we also offer those separately.

For our popular NUDA carbon pumps, you can get the o-ring that seals the pump against the valve as a spare part. Our supplier was surprised when we asked for extra o-rings. It’s true that the o-rings rarely wear out, but it can happen. And we don’t want you to junk a perfectly good pump just because the o-ring has been abraded by valve stems with sharp threads.

We also encourage you to patch your tubes. We offer patch kits for TPU and butyl tubes. The TPU patches allow you to keep using your Rene Herse TPU tubes almost forever. Rather than sell you replacement tubes, we figure you’ll instead convert the other bikes in your stable once you’ve experienced the superior speed and ride feel of TPU tubes. (And if you don’t, that’s OK, too.)
If you’re running butyl tubes, we continue to support you with tubes and patch kits. In fact, we even offer patches separately—because a tube of vulcanizing fluid is good for many more patches than what’s included in the patch kit. We also offer pre-glued butyl patches for on-the-road repairs. Just because we love TPU tubes doesn’t mean we stop supporting butyl (and tubeless, too, of course).

Disc rotor covers are almost essential when you’re traveling. They protect your rotors from contamination and your bike frame from scratches (or worse). The ones we used to sell were blue. Brake dust could discolor them over time, so we’ve worked with Ostrich in Japan to make a special black version, just for Rene Herse. Because your equipment should look good for as long as possible.

Planned obsolescence is a hidden way of selling you more than you’d buy otherwise. Older riders will remember when 9-speed cassettes were introduced. Back then, it was an open secret that the industry was already working on 10-speed. And then 11-speed, 12-speed, and now 13-speed. Cramming another cog into the rear cassette isn’t rocket science, but dribbling out these ‘improvements’ over time illustrates a golden rule of product design: Always leave room for future upgrades.
That may also be the reason why modern cranks have replaced classic five-arm spiders with four arms, even though three arms are all you need to transmit torque and power, and keep the chainrings from wobbling. You don’t have to be a genius to predict a switch to three-arm cranks—for even greater weight savings—at some point in the future…

That’s not how we do things. Any component we introduce is as good as we can make it. We don’t leave room for future upgrades. Once we realized that three arms are sufficient, that’s what we offered, right from the start. You can buy our components with confidence, knowing that they won’t become obsolete as soon as we introduce a ‘new-and-improved’ version.

And when things change, we make our parts forward- and backward-compatible. Even our first cranks, introduced 15 years ago, can be converted to 12-speed. Just swap in a new big ring. We’ve designed our 12-speed rings with a little offset, so they reduce the spacing between the chainrings for the narrower chain without requiring a new crankarm. And the ramps and pins line up with the teeth of our existing small rings, so you don’t have to replace your small ring when you upgrade your cranks.

Or you can convert your cranks to a One-By—just change the big ring and replace the small ring with our smart One-By Chainring Spacers. And if, at some point in the future, you want to run a triple for whatever reason, that’s an easy swap again. And of course we’ll be offering chainrings for all those cranks, too. (Try finding replacement rings for a 2011 crankset from a big maker!)

There’s another way many companies try to make us buy things we don’t need: Most subscriptions have gone to auto-pay in recent years. Once a year, our credit cards get charged automatically. If you’re like me, you tend to forget about canceling subscriptions you don’t want or need any longer. Before you know it, you’ve paid for another year…
That’s not how we do Bicycle Quarterly subscriptions. We charge you only once. When you’ve received your four editions and your subscription is up, we send you a renewal notice. Then it’s up to you to decide whether you want to re-up for another year.
What it comes down to is confidence that we offer something our customers find valuable. That’s also why we don’t send reminders if you’ve left something in your shopping basket. We know that, if you need it, you’ll come back. And if you don’t need it, we’d rather you don’t buy it.
| Feed | RSS | Last fetched |
|---|---|---|
| | XML | 05:55 , Wednesday, 25 March 2026 |
| 35mmc | XML | 05:55 , Wednesday, 25 March 2026 |
| About – Bikes and Film Cameras Club | XML | 05:55 , Wednesday, 25 March 2026 |
| apenwarr | XML | 05:55 , Wednesday, 25 March 2026 |
| Arch Linux: Recent news updates | XML | 05:55 , Wednesday, 25 March 2026 |
| Ars Cardboard - Ars Technica | XML | 05:55 , Wednesday, 25 March 2026 |
| benjojo blog | XML | 05:55 , Wednesday, 25 March 2026 |
| BIKEPACKING.com | XML | 05:55 , Wednesday, 25 March 2026 |
| Biz & IT - Ars Technica | XML | 05:55 , Wednesday, 25 March 2026 |
| Cardinal News | XML | 05:55 , Wednesday, 25 March 2026 |
| Coding Horror | XML | 05:55 , Wednesday, 25 March 2026 |
| Cryptography Dispatches | XML | 05:55 , Wednesday, 25 March 2026 |
| Debian News | XML | 05:55 , Wednesday, 25 March 2026 |
| derailleur | XML | 05:55 , Wednesday, 25 March 2026 |
| EMULSIVE | XML | 05:55 , Wednesday, 25 March 2026 |
| flak | XML | 05:55 , Wednesday, 25 March 2026 |
| Idle Words | XML | 05:55 , Wednesday, 25 March 2026 |
| inks | XML | 05:55 , Wednesday, 25 March 2026 |
| joshua stein | XML | 05:55 , Wednesday, 25 March 2026 |
| McMansion Hell | XML | 06:55 , Wednesday, 25 March 2026 |
| Migratory Caving | XML | 05:55 , Wednesday, 25 March 2026 |
| Open source software and nice hardware | XML | 05:55 , Wednesday, 25 March 2026 |
| OpenBSD Journal | XML | 05:55 , Wednesday, 25 March 2026 |
| Rene Herse Cycles | XML | 05:55 , Wednesday, 25 March 2026 |
| reproducible-builds.org | XML | 05:55 , Wednesday, 25 March 2026 |
| Steam for Linux RSS Feed | XML | 05:55 , Wednesday, 25 March 2026 |
| Techdirt | XML | 05:55 , Wednesday, 25 March 2026 |
| Tedium | XML | 05:55 , Wednesday, 25 March 2026 |
| The Soma Fab Blog | XML | 05:55 , Wednesday, 25 March 2026 |
| The Velo ORANGE Blog | XML | 05:55 , Wednesday, 25 March 2026 |
| Velo Orange - The Velo Orange Blog | XML | 05:55 , Wednesday, 25 March 2026 |
| WUVT-FM 90.7 Blacksburg, VA: Recent Articles | XML | 06:55 , Wednesday, 25 March 2026 |
| www.collegiatetimes.com - RSS Results for * of type article OR video OR youtube OR collection | XML | 06:55 , Wednesday, 25 March 2026 |