Front-page articles summarized hourly.
Phoronix reports Fragnesia, a new Linux local privilege escalation vulnerability disclosed by V12 Security. It resembles the Dirty Frag class and centers on a logic bug in the ESP/XFRM code that permits arbitrary writes into the kernel page cache of read-only files. A proof-of-concept exists. A two-line patch was prepared for the Linux kernel's skbuff.c; it has not yet been mainlined but is expected to be incorporated in future releases. More details are available on the oss-security mailing list.
The US is winning the AI race not just in chips or power but by building the full stack: cloud infrastructure, data platforms, and commercialization. Since DeepSeek R1, American firms moved fastest to monetize—OpenAI, Anthropic, others expanding agents, Codex, and enterprise tools—while China remains competitive but not ahead in revenue or ecosystem reach. Europe, though strong in engineering, lags in cloud scale and data flow; catching up would take a decade. Energy matters, but cloud, data, and platform reach matter most. A security frontier looms with weaponized AI and closed stacks.
Google's main codebase suffered IDE fragmentation for years despite calls for a uniform editor, duplicating integrations for Bazel, Starlark, code search, etc. Around 2016, a cloud-based editor called Cider indexed the entire codebase on the backend, delivering a fast frontend. In 2020–21, Cider moved to a VSCode frontend (Cider V), leveraging VSCode extensions. By 2023, about 80% of Google3 development ran in Cider V, with ~100 internal extensions and AI features (ML-assisted code review, Smart Paste, AI code completion), showing the benefits of a standard, extensible IDE.
Ardent enables instant, zero-risk database branching by creating 1:1 Postgres clones in under 6 seconds. These isolated clones let coding agents test against production data without affecting it, with automated compute scaling to zero and storage efficiency (only changes stored). Features include data cleaning, migration testing, and backfills on exact production copies. Works with AWS RDS, Supabase, PlanetScale, and other providers, and aims to accelerate AI-driven data work with no downtime and no drift from production.
Haiku is an open-source, BeOS-inspired OS for personal computing that aims to be fast, simple, and powerful. The project publishes nightly builds and official releases (R1/beta), with Haiku, Inc. providing financial reports and supporting fundraising and donations. It runs active community programs including Google Summer of Code mentorship (2024–2026) and welcomes contributor blogs on topics such as hardware device management, Bluetooth stack improvements, Gerrit reviews, and other Haiku news. Community participation via forums, mailing lists, IRC, and donations is encouraged.
An exploration of Rust’s limits and a warning against blindly following the approaches of Amazon, Cloudflare, and Discord.
A critical read of a Science paper claiming lifespan heritability is ~50%. The authors use a twin-simulation model with a shared extrinsic mortality parameter (non-aging deaths) to generate lifespans for identical and fraternal twins. Lowering extrinsic mortality raises heritability; near-zero extrinsic mortality yields ~50%. With modern-like extrinsic mortality (~0.001/year), estimates fall to ~35–45% depending on dataset and age cutoff. The piece stresses that heritability is not fixed and depends on society and modeling choices, praising the idea but criticizing the paper's vagueness and Science's presentation. The reviewer suggests the headline figure should be roughly 40% under modern extrinsic-mortality levels.
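The twin-simulation idea described above can be sketched in plain Python with hypothetical parameters: model intrinsic lifespan as genetic plus environmental components (identical twins share genes fully, fraternal twins half), truncate each lifespan by an exponential extrinsic (non-aging) death time, and estimate heritability with Falconer's formula, h^2 = 2(r_MZ - r_DZ). This is a toy reconstruction, not the paper's actual model.

```python
import math
import random

def correlation(pairs):
    """Pearson correlation between twin-1 and twin-2 lifespans."""
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def simulate(mz, extrinsic_rate, n=20000, seed=0):
    """Simulate twin pairs (all parameters hypothetical)."""
    rng = random.Random(seed + int(mz))
    sg = se = 6.0  # equal genetic/environmental SDs -> true h^2 = 0.5
    pairs = []
    for _ in range(n):
        g1 = rng.gauss(0, sg)
        # MZ twins share genes exactly; DZ genetic correlation is 0.5
        g2 = g1 if mz else 0.5 * g1 + math.sqrt(0.75) * rng.gauss(0, sg)
        lifespans = []
        for g in (g1, g2):
            intrinsic = 80 + g + rng.gauss(0, se)
            if extrinsic_rate > 0:
                # Non-aging death: exponential with the given yearly rate
                intrinsic = min(intrinsic, rng.expovariate(extrinsic_rate))
            lifespans.append(intrinsic)
        pairs.append(tuple(lifespans))
    return pairs

def falconer_h2(extrinsic_rate):
    """Falconer's estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    r_mz = correlation(simulate(True, extrinsic_rate))
    r_dz = correlation(simulate(False, extrinsic_rate))
    return 2 * (r_mz - r_dz)
```

Raising `extrinsic_rate` pulls the estimate down from the built-in true value of 0.5, mirroring the article's point that measured "heritability" depends on how often non-aging deaths intervene.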
Roughly 50,000 Lake Tahoe residents face a looming power shift: NV Energy will stop supplying electricity to Liberty Utilities by May 2027 to reserve capacity for data centers. Northern Nevada's data-center growth could demand up to 5,900 MW by 2033, straining a grid managed by multiple regulators across California, Nevada, and federal authorities. Liberty will seek replacement power via an RFP in summer 2026, likely from sources outside California, delivered over NV Energy's lines. Critics warn rates could rise and governance remains fragmented.
"Xs of Y" roguelike that names itself each run; written in a Lisp dialect (let-go/Clojure) on a Go VM. Each run generates a unique title, quest, and rune mappings; runes are hidden symbols and spells are s-expressions. Root access to the dungeon's reality engine, but man pages are in a changing dead language. Inverted power curve: early survival, late-game theology with shaky safety margins. Enemies include spiders, goblins, slimes, trolls; traps and environmental chaos. ~6900 lines, no dependencies, runs natively or in a browser (WASM). MIT. WIP.
Open Source Resistance argues that maintaining the OSS used by your employer should happen during work time, not as unpaid side work. It urges developers to fix dependencies and ship fixes within their normal duties, while protecting IP and confidentiality. The manifesto outlines three steps: do the work, protect yourself, and don’t overcommit. It positions OSS maintenance as essential infrastructure, not a hobby, and contrasts it with other pledges (Open Source Pledge, Open Source Friday). It acknowledges objections and offers practical guidance on contracts, IP ownership, and security, authored by Mike McQuaid.
Kickstarter updated its "Mature Content" rules to ban NSFW material, including implied sex acts, MILF/DILF content, implied nudity, and explicit nipples, genitalia, and butts. Reports say the payment processor Stripe pressured the change and will review or shut down projects featuring adult content, even after funding. The shift echoes similar moves by Steam and Itch.io under banking pressure, suggesting a broader industry trend; Kickstarter had previously run a "Kickstarter After Dark" adult-oriented newsletter in 2025.
The Atlantic argues bipartisan U.S. anxiety over AI is rising as jobs appear at risk, with figures from Bernie Sanders to Steve Bannon warning AI could replace workers. Public concern fuels data-center battles and, in extreme cases, threats of violence. While AI has boosted markets, many fear a wage/inequality tilt if layoffs accelerate and profits concentrate. Politicians leverage anti-AI rhetoric; tech firms push narratives to downplay job losses. If AI displaces massive numbers of workers, the backlash could resemble historical upheavals from industrialization, with social and political volatility.
An amateur experiment tests the proverb "nailing jelly to a wall." After making jelly from cubes and pouring it into a dessert bowl, the author tries to nail it to a plank. Across attempts, the jelly either sticks on its own or tears away when nailed; even many nails fail to keep it upright for long. A later approach with a smaller amount and a nail array briefly holds but collapses. Conclusion: with standard jelly, nailing to a vertical wall is not feasible; suggestions include setting jelly around nails or using alternative methods. The piece notes later internet attention and jelly/jam terminology debates.
The article explains how US residents/organizations can obtain a free locality domain (*.city.state.us). It covers eligibility, choosing a locality from delegated subdomains (gen.your-state.us can work for independents), obtaining free DNS via Amazon Lightsail, filling the Interim .US Domain Template with the FQDN and contacts, submitting the registration, and configuring DNS to point to a host (e.g., GitHub Pages). It also notes that some undelegated domains are government-only and discusses residency and WHOIS considerations.
Tech executives hail AI as a revolution in coding, claiming large portions of new code are AI-generated and that internal adoption will cut costs and headcount. But many developers say AI output is flawed, hard to vet, and requires heavy manual fixes, causing cognitive fatigue and de-skilling. Adoption at major firms is often compulsory and tied to evaluations, with teams reorganized into AI-focused pods. Despite bragging rights, productivity gains haven't materialized; several high-profile layoffs have followed AI pushes. Some see limited value for prototyping and data retrieval, but worry about long-term skills loss and unsustainable tech debt.
Core developers reverted Python's incremental GC in 3.14/3.15 due to production memory pressure, returning to the 3.13 generational GC. 3.15 is in alpha; the revert may arrive as 3.14.5. Maintaining two GCs in one release is too risky; an opt-in option in 3.16 was discussed but would add maintenance overhead. Benchmarks show incremental GC can lower max pauses but may dramatically raise memory use and slow overall runtime; real-world data is sparse. The plan favors a conservative revert now, with a possible PEP-driven reintroduction after more thorough evaluation.
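The generational collector's behavior can be inspected from Python itself. A small sketch that reads the tuning thresholds, builds a cyclic object graph, and times a full-collection pause, the metric the incremental GC was meant to improve (values vary by machine and Python version):

```python
import gc
import time

# The generational GC exposes its tuning knobs; defaults are
# version-dependent (e.g. (700, 10, 10) on older releases).
thresholds = gc.get_threshold()
print("GC thresholds:", thresholds)

# Build a large graph of reference cycles so a full collection
# actually has tracking work to do.
graph = [{"payload": list(range(10))} for _ in range(200_000)]
for i, node in enumerate(graph):
    node["next"] = graph[(i + 1) % len(graph)]  # cycles involve the GC

t0 = time.perf_counter()
collected = gc.collect()  # force a full (oldest-generation) collection
pause_ms = (time.perf_counter() - t0) * 1000
print(f"full collection: {collected} objects collected, {pause_ms:.1f} ms pause")
```

Since `graph` is still reachable here, `collected` can be zero; the point is the pause measurement, which is what the "max pause vs. memory use" trade-off in the benchmarks is about.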
NL Times reports that the Dutch suicide-prevention hotline 113 Zelfmoordpreventie shared visitors' metadata with third parties—Google and Microsoft—without user consent. Data included location, browser, device, the site visited before 113, and, for some users, screen recordings of visits; no substantive chats were shared, the foundation says. After ethical hacker Mick Beer flagged the practice, 113 temporarily suspended all measurement/analytics tools. The organization acknowledges privacy concerns and potential GDPR violations and is investigating, with no stated plan to re-enable trackers.
Moved from GitHub to a self-hosted Forgejo for ownership and digital autonomy; the Dutch government did the same with code.overheid.nl. Outages are real, but the deeper issue is control—GitHub is now under US/Microsoft, with Copilot data training by default and CLOUD Act exposure. Forgejo offers true open-source governance (GPLv3+, Codeberg e.V.) and was chosen over GitLab for licensing and autonomy. The setup runs Forgejo v15 on a single NUC with a hardened CI runner (KVM, gVisor, weekly rebuilds, nftables). Plan: archive public GitHub repos and point readers to the new home; some GitHub features don’t map.
An overview of ML-driven protein lead optimization using the Cradle pipeline. It covers proteins as amino-acid sequences whose folding enables function, and lead optimization as improving a functional template via mutations (often multi-objective). It explains Cradle's transformer-based base model trained on natural proteins, then evotuned with MSAs to focus on relevant sequence space. It uses experimental assay data with batch effects and grouped direct preference optimization (g-DPO) to align the model with function, yielding a tuned model for generation and a predictor to estimate activity, plus a masking model to select mutation positions (part 2 will cover generation).
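The summary names g-DPO but not its grouping scheme; the underlying DPO objective it adapts scores a preferred/rejected sequence pair by log-likelihood margins against a frozen reference model. A minimal sketch of that standard loss, with all numbers hypothetical:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair.

    logp_w / logp_l: policy log-likelihoods of the preferred ("winner",
    e.g. the higher-activity variant) and rejected ("loser") sequences;
    ref_* are the same quantities under the frozen reference model.
    """
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    # -log(sigmoid(beta * margin)): small when the policy prefers the
    # winner more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Positive margin (policy already favors the winner) -> loss below log(2);
# zero margin -> loss exactly log(2).
better = dpo_loss(-10.0, -14.0, -12.0, -12.0)   # margin = +4
neutral = dpo_loss(-12.0, -12.0, -12.0, -12.0)  # margin = 0
assert better < neutral
```

Per the article, g-DPO extends this by grouping preference comparisons, presumably so that pairs are only compared within the same assay batch, sidestepping batch effects; the exact grouping is not specified in this summary.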
Made by Johno Whitaker using FastHTML