Front-page articles summarized hourly.
This article analyzes Avis LXXX, a 4 m airship drone designed to circumnavigate the world in under 80 days using hydrogen lift and solar power. With a target cruise near 10 m/s (potentially 15 m/s in full sun), it could cover roughly 500–800 km daily and finish in 7–9 weeks, riding winds that favor east-to-west routes. The design uses a carbon-polymer shell, a 0.5 m² solar array, a 1 kg LiPo battery, and a low-leakage hydrogen bag. The proposed route traverses Europe, the Atlantic, the Pacific, Asia, Africa, and the Mediterranean. Legal, budget, and reliability challenges remain, but the project is deemed feasible.
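As a sanity check on those range figures: the 10 and 15 m/s cruise speeds come from the article, while the flight hours per day below are assumptions chosen to bracket the quoted 500–800 km/day.

```python
# Back-of-envelope check of the Avis LXXX range and duration figures.
EARTH_CIRCUMFERENCE_KM = 40_075

def daily_range_km(speed_m_s, hours_per_day):
    # km covered in one day at a given cruise speed and flight duty
    return speed_m_s * 3600 * hours_per_day / 1000

slow = daily_range_km(10, 14)   # ~504 km/day at 10 m/s for ~14 h
fast = daily_range_km(15, 15)   # ~810 km/day at 15 m/s for ~15 h

# Trip duration at those rates: roughly 50 to 80 days, consistent with
# the "under 80 days" target and the ~7-week estimate at the faster pace.
print(EARTH_CIRCUMFERENCE_KM / fast, EARTH_CIRCUMFERENCE_KM / slow)
```

The numbers line up: even the slower assumed pace comes in just under the 80-day target.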
Could not summarize article.
The phenakistoscope (1833) is an early animation device considered a precursor to cinema. A spinning cardboard disc viewed in a mirror through moving slits creates moving figures: dancers, bowing figures, leaping animals. It emerged almost simultaneously in two places: Joseph Plateau (Brussels) and Simon von Stampfer (Vienna) developed near-identical devices. McLean's Optical Illusions or Magic Panorama, published in 1833, produced some of the earliest mass-produced phenakistoscopes. It was later eclipsed by the zoetrope and Muybridge's zoopraxiscope, fading into a pre-cinema curiosity, yet it echoes today in looping online animations.
AI coding assistants have not delivered clear gains: developers complete 21% more tasks but show no delivery improvement; experienced developers are measurably slower yet feel faster; 48% of AI-generated code contains vulnerabilities. Quality suffers when requirements are unclear and edge cases surface mid-implementation, creating hidden gaps and more review work that fuels tech debt. Some engineers report dramatic wins, but many juniors face higher pressure with less flexibility. Most developer time goes to non-coding work; AI saves about 10 hours/week, yet downstream inefficiencies offset these gains. The piece calls for upstream context (state/data flow, downstream impact) during product discussions and for tooling that maps changes to code.
Gyrovague reports that archive.today began using its own visitors as proxies to mount a DDoS against the Gyrovague blog in January 2026. A CAPTCHA page runs JavaScript every 300 ms that fetches a random query against gyrovague.com, hogging resources; ad blockers can block the traffic. The post threads the incident through a longer saga: a 2023 OSINT piece, FBI interest in November 2025, a GDPR takedown request, and January 2026 exchanges and threats from archive.today's webmaster and aliases. The author sees possible censorship and Streisand effects but offers no definitive culprit or motive.
Fifty years ago, 20-year-old Bill Gates blasted software piracy in a 1976 “Open Letter to Hobbyists” after Altair BASIC was widely copied, arguing unpaid software harmed developers. The backlash helped spark a rift between proprietary and communal software, fueling the Free Software Movement (Richard Stallman, 1983) and the later Open Source era (1998 definitions). The shift reshaped licensing and contributed to Microsoft’s rise with MS-DOS, laying the groundwork for today’s open-software culture.
FLOPPINUX 0.3.1 (Dec 2025) is an updated, single‑floppy Linux From Scratch‑style workshop that boots a minimal i486 32‑bit system from a 1.44 MB floppy with persistent storage. It uses Omarchy Linux (Arch‑based), a cross‑compiler, a tiny kernel (Linux 6.14.11) and BusyBox 1.36.1 to provide a shell, basic utilities, and simple scripts. The build yields a root filesystem (rootfs.cpio.xz) and a Syslinux‑bootable floppy image (floppinux.img), tested in QEMU before burning. Hardware: Intel 486DX/33 MHz, ~20 MB RAM. Includes init scripts and a welcome message.
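Everything in the build has to fit on the floppy; a quick size-budget sketch, where only the 1.44 MB (1,474,560-byte) capacity is a real constraint and the component sizes are hypothetical placeholders, not FLOPPINUX's actual figures:

```python
# Size budget for a single-floppy Linux image.
FLOPPY_BYTES = 1_474_560  # a "1.44 MB" floppy is 1440 KiB

# Hypothetical component sizes; the real kernel/rootfs sizes differ.
components = {
    "syslinux + config": 40_000,
    "bzImage (tiny kernel)": 900_000,
    "rootfs.cpio.xz (BusyBox userland)": 450_000,
}
used = sum(components.values())
print(used, FLOPPY_BYTES - used)  # bytes used, bytes free
assert used <= FLOPPY_BYTES, "image will not fit on a 1.44 MB floppy"
```

This kind of check is why the project leans on a heavily trimmed kernel config and xz-compressed initramfs.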
HN Word Oracle is a Prolificacy Analyzer that queries a ClickHouse cluster to rank the Top 1000 Prolific Writers, using a 1 Book = 300,000 words conversion (e.g., Game of Thrones). The page shows fields like Author, Global Rank, Word Count, and Percentile, and invites exploration of the top writers. The author notes it was "vibe coding" for fun, includes a cat video link, and says the project runs on the ClickHouse API, built via Gemini Flash/Pro and AI Studio.
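The book-equivalent conversion is simple arithmetic; a tiny sketch using the page's stated constant (the example word count is hypothetical, not real HN data):

```python
# Convert a commenter's total word count into "books written", using the
# site's 1 book = 300,000 words rule (roughly one Game of Thrones volume).
WORDS_PER_BOOK = 300_000

def books_written(word_count):
    return word_count / WORDS_PER_BOOK

# Hypothetical example: a commenter with 1.2M words of comments has
# written the equivalent of 4 books.
print(books_written(1_200_000))  # -> 4.0
```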
University of Utah researchers used archived and modern hair samples from 48 people to track lead exposure over a century. They found about a 100-fold drop in hair lead after EPA regulations, particularly the phase-out of leaded gasoline. Hair lead declined from ~100 ppm before 1970 to ~10 ppm in 1990 and <1 ppm in 2024. Published in PNAS, the work underscores environmental regulation’s public-health benefits and cautions against weakening rules.
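The reported decline is easy to sanity-check against the quoted concentrations; a minimal sketch using only the article's round numbers:

```python
# Hair-lead concentrations quoted in the article (approximate values).
levels_ppm = {"pre-1970": 100, "1990": 10, "2024": 1}

# Total fold-change across the record, consistent with the ~100-fold drop
# reported after EPA regulation and the leaded-gasoline phase-out.
fold_drop_total = levels_ppm["pre-1970"] / levels_ppm["2024"]
fold_drop_by_1990 = levels_ppm["pre-1970"] / levels_ppm["1990"]
print(fold_drop_total, fold_drop_by_1990)  # -> 100.0 10.0
```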
Weber State University’s 404 page: the requested page doesn’t exist or was moved. It offers a homepage or search option, asks for feedback on where the error occurred, and provides contact details (3848 Harrison Blvd, Ogden, UT 84408; 1-801-626-6000; [email protected]) plus popular links (Admissions, Bookstore, Stewart Library, Jobs, Tickets, eWeber Portal, Directory, Maps).
GitHub’s discussion centers on the surge of low-quality contributions, including AI-generated PRs, and how maintainers can manage the load. Short-term proposals include repository-level PR controls (even disabling PRs or restricting to collaborators) and a UI option to delete low-quality PRs. In the long term, the group is exploring enhanced permission models, AI-assisted triage with clearer attribution, and rules that PRs must meet—potentially anchored to CONTRIBUTING.md—before they’re opened. The conversation stresses protecting first-time contributors while reducing reviewer fatigue.
CMU Computer Club runs mirrors of HVSC, gnu, aminet, knoppix and knoppix-dvd, scene.org, archive.debian.org, ubuntu and ubuntu-iso, and zeroshell, updated every six hours. They invite requests for new mirrors and contributions at [email protected]. Note: a 2015 Scene.org file was removed after antivirus flagging. Also, cryptographic software on the site is subject to U.S. export restrictions under License Exception TSU; see BIS for details.
Mozilla announced that Firefox will offer configurable controls to disable AI enhancements. Users can turn off or selectively disable features including translations, alt text in PDFs, AI-enhanced tab grouping, link previews, and the AI chatbot in the sidebar (supporting Claude, ChatGPT, Copilot, Gemini, Le Chat Mistral). A main "Block AI Enhancements" toggle will disable current and future AI features. The controls arrive in Firefox 148, rolling out starting February 24.
An AI narrator aboard a dewar orbiting Julia, an enigmatic, massless cosmic object, records a century-long vigil with two doctors, Cartan and Brouwer. It inventories cisjulian space, composes in an algebraic language, and paints scenes from aquarelle pages. When Afrasiab arrives, Cartan sacrifices herself to board, and Julia reveals itself as a higher-dimensional structure whose beauty and terror unmake reality. The narrator and the crew are torn apart; Afrasiab departs with whispers from the dead. It ends: "Virginia, forgive me. I am not afraid."
Tamiko Thiel's CM-1/CM-2 "Feynman" T-shirts celebrate the Connection Machine design. The logo was created in 1983, before CM-1 existed; the machine was then designed to resemble it. The shirt rose to fame in the 1990s after Apple used a photo of Richard Feynman in its "Think Different" campaign. The logo depicts the hardware network as a 12-dimensional cube-of-cubes (boxes with hard connections), with red "pom-poms" for software data structures that need not follow the hardware topology; Feynman influenced the topology. A CM-2 was acquired by MoMA New York in 2016. Classic colors: black shirt, yellow-gold hardware, red pom-poms. Orders in the US/Europe via Spreadshirt.
Using a bias-variance lens, the authors quantify AI errors as bias (systematic) and variance (incoherence). They find that as models reason longer, incoherence grows; scale helps coherence on easy tasks but not on hard tasks; natural overthinking raises incoherence more than deliberate budget increases; ensembling reduces incoherence by reducing variance. Treating LLMs as dynamical systems, they show larger models learn the objective faster (lower bias) but also become less reliable in following it as task difficulty grows. Implications: future failures may resemble industrial accidents; safety focus should address reward hacking and mis-specification during training, not just perfect optimization.
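The decomposition the authors use can be illustrated with toy numbers (the samples below are invented, not the paper's data): treating repeated model answers to one prompt as samples, bias is the systematic offset of the mean answer from the truth, and variance is the incoherence across samples.

```python
import statistics

def bias_variance(samples, truth):
    # Bias: how far the average answer sits from the true value.
    # Variance: how much the answers disagree with each other.
    mean = statistics.fmean(samples)
    return mean - truth, statistics.pvariance(samples)

# Systematically off but consistent: high bias, low variance.
print(bias_variance([12, 12, 13, 12], truth=10))   # -> (2.25, 0.1875)
# Right on average but erratic: low bias, high variance ("incoherent").
print(bias_variance([5, 15, 8, 12], truth=10))     # -> (0.0, 14.5)
```

In the paper's framing, the second failure mode is the one that grows with longer reasoning and harder tasks, and the one ensembling suppresses.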
An experiment in training a joke-generating model via rubric-based RL on Kimi K2. The authors decompose "funny" into verifiable rubrics (clarity, engagement, tone, specificity) and train the model to satisfy them rather than optimizing for "funny" directly. Data: ~48k examples scraped from Twitter, TikTok, Reddit, and the Harvard Lampoon; audio extracted with yt-dlp, transcribed with Whisper large-v3, and filtered for quality. RL uses a grader model (Qwen3-30B) scoring outputs per rubric; the reward is a weighted sum, with an added rubric penalizing jokes that merely announce laughter. DPO on comments proved ineffective. Results include various joke formats and standup bits; the demo is not public.
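A minimal sketch of the weighted-sum reward described above; the rubric names come from the post, while the weights and scores are placeholder assumptions (the real per-rubric grades come from the Qwen3-30B grader model):

```python
# Hypothetical weights; the article does not publish the actual values.
RUBRIC_WEIGHTS = {"clarity": 0.3, "engagement": 0.3, "tone": 0.2, "specificity": 0.2}

def reward(scores, laugh_emitter_penalty=0.0):
    # scores: per-rubric grades in [0, 1], produced by the grader model
    base = sum(RUBRIC_WEIGHTS[k] * scores[k] for k in RUBRIC_WEIGHTS)
    # the authors added a rubric penalizing jokes that merely announce laughter
    return base - laugh_emitter_penalty

print(reward({"clarity": 0.9, "engagement": 0.8, "tone": 0.7, "specificity": 0.6}))
```

The point of the decomposition is that each term is independently checkable by a grader, which makes the reward less gameable than a single "is this funny?" score.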
Documentation for Joedb, the Journal-Only Embedded Database (version 10.3.0), covering Introduction (pros/cons, an example, concurrency), a User Guide (getting started, opening files, checkpoints, concurrency, RPC, schema upgrades, vectors, indexes, blobs), and a Reference (API, file format, network protocols, tools, testing, logging, links, history, license).
Courts blocked the Trump administration's halt, justified on classified national-security grounds, on five US offshore wind projects, issuing temporary injunctions that allow construction to resume. Across three courts and four judges, the rulings found the government's justification unpersuasive and irrational, noting for example that completed turbines could operate while ongoing work was halted. Most projects were already well advanced, some near completion, and are likely to finish before any appeal is resolved. The government can appeal, but success appears unlikely.
Made by Johno Whitaker using FastHTML