Front-page articles summarized hourly.
Cellist Steven Isserlis remembers four decades with György Kurtág as the Hungarian composer turns 100, praising his boundless imagination, ferociously attentive listening, and unique way of teaching. Kurtág uses vivid images—“neighing,” “snake,” “dog biting God’s feet”—to reveal musical meaning, and his lessons transform performance through precise guidance on silence, colour, and phrasing. Isserlis recounts powerful sessions, memories of a final London recital with Márta, and Kurtág’s ongoing vitality, including new works like Circumdederunt. The centenary is marked by concerts in Newcastle, London, and Manchester.
The article reports that the "Cancel ChatGPT" movement is gaining traction after OpenAI’s deal with the U.S. Department of War; Anthropic had refused to provide Claude for mass surveillance or autonomous weapons. OpenAI, led by Sam Altman, pledged support to the department and claimed the technology would not be used for mass surveillance, though a government official said it would be used for all lawful means. The move sparked backlash in user communities and unsubscribe campaigns. Analysts say the AI industry prioritizes money over ethics, highlighting the power of major players; Microsoft remains supportive.
An essay about finding happiness outside tech. In early 2020 the author, newly out of college, becomes a volunteer head coach for a six-player middle-school basketball team. Despite the emptiness of a corporate job, coaching gives him purpose—building the kids’ skills, confidence, and teamwork. Covid cuts the season short and halts the team, but the experience reshapes his view of meaning: he loves mentoring, real-world effort, and steering his own ship. He argues that many in tech chase screens and products, and urges readers to identify what truly makes them happy before it’s too late.
Argues the United States is drifting toward a twenty‑first‑century fascism led by an oligarchic techno‑feudal elite. Neoliberal capitalism has hollowed out democracy, concentrating power in a transnational ‘authoritarian international’ of billionaires, security chiefs, and political fixers who monetize state power and shield one another. Big Tech acts as neo‑feudal estates that extract rent from data, weaponize disinformation, and underpin a global police state. Elite impunity—via legal immunities and executive sign‑offs—enables lawless governance. The regime is building concentration‑camp infrastructure and paramilitary policing to manage surplus populations and dissent.
Andrew Miller compares two paths to autonomous driving: Waymo’s sensor fusion (lidar, radar, cameras) vs Tesla’s vision-only approach driven by compute and vast data. Tracing from 1990s/2000s sensor fusion as the baseline to Tesla’s 2016 challenge to that orthodoxy, the piece shows how costs, robustness, and data scale favor fusion, though Tesla argues cameras suffice. It highlights Waymo’s safety gains and Tesla’s higher disengagements and crashes, regulatory scrutiny of vision-only FSD, and Tesla’s partial reintroduction of radar. The author suggests the question is no longer sensors versus cameras, but what safety standard we will accept for robotaxis.
An introduction to Content-Security-Policy (CSP) for pentesters. CSP acts as a browser bouncer, with directives like script-src and default-src controlling resource sources. Key values include 'self', 'none', 'unsafe-inline', and 'unsafe-eval'. Common flaws: 'unsafe-inline' present, missing base-uri, broad wildcards (https:, data:, *), missing object-src, and subdomain wildcards. Practical testing tips: inspect response headers or meta tags, use curl or DevTools, and leverage CSP Evaluator. The article emphasizes that CSP is complex and misconfigurations are common, offering a quick analysis workflow to spot weaknesses.
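The quick analysis workflow the article describes can be sketched as a small script; this is a generic illustration (not the article's own tooling), with a hypothetical header string as input, covering the listed flaws: 'unsafe-inline', broad wildcards, and missing base-uri/object-src.

```python
# Minimal sketch of the CSP-weakness checks the article lists.
# Input is a raw Content-Security-Policy header value (hypothetical example).
def analyze_csp(header: str) -> list[str]:
    findings = []
    policy = {}
    for directive in header.split(";"):
        parts = directive.split()
        if parts:
            policy[parts[0]] = parts[1:]

    # script-src falls back to default-src when absent
    scripts = policy.get("script-src", policy.get("default-src", []))
    if "'unsafe-inline'" in scripts:
        findings.append("script-src allows 'unsafe-inline'")
    for broad in ("*", "https:", "data:"):
        if broad in scripts:
            findings.append(f"script-src allows broad source {broad}")
    if "base-uri" not in policy:
        findings.append("base-uri missing (base-tag injection)")
    # object-src also falls back to default-src
    if "object-src" not in policy and "default-src" not in policy:
        findings.append("object-src missing (plugin injection)")
    return findings

print(analyze_csp("script-src 'self' 'unsafe-inline' https:; default-src 'none'"))
```

The header itself can be fetched as the article suggests, e.g. `curl -sI https://target` or the Network tab in DevTools, then fed to a checker like this or to CSP Evaluator.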
Gary Marcus argues the Anthropic matter was a scam: Altman publicly backed Dario Amodei while secretly striking a deal to undermine him, with the timing tied to donations. He cites The New York Times reporting that Altman was working the deal before the endorsement, and Trump’s critique. Marcus says the government’s ban on Anthropic was punitive and biased, favoring a recipient of similar terms that had made more contributions. He calls for equal terms for Anthropic, notes Amodei’s disputes and a $1.5B writers’ settlement, and warns the US is sliding from market competition to oligarchic influence.
Tomoshibi is a browser-based writing app where your words gradually fade from view while they’re saved locally. It features a dark screen with a small flame, no toolbars or menus, and requires no account. The idea is to push you to rewrite instead of obsess over edits: you can fix typos, but old lines fade away. It promises one-line-at-a-time writing, continuous progress, and a Mac app is in development.
JSTOR shows an access-check page due to unusual traffic, asking the user to complete a reCAPTCHA to proceed. It notes that JavaScript is disabled and provides a block reference, IP address, timestamp, and JSTOR support contact information.
Werner Herzog’s The Future of Truth outlines ecstatic truth as poetry-driven, fabrication-allowed cinema beyond ‘accountant’s truth.’ The Nation review finds the book deflating: it's largely recycled from earlier books and interviews, offers few new arguments, and reads as a contractual, slapdash project compared with his masterful memoir Every Man for Himself and God Against All. The AI chapter is merely a catalog of LLM tricks, missing a deeper reckoning. Ultimately the piece maintains Herzog’s questing spirit, hoping the journey continues rather than ends.
Ghosts’n Goblins (Makaimura) debuted in 1985 arcades and, via Fujiwara’s design, fused demon-horror with cartoon energy and relentless difficulty. It topped Japan and UK charts in 1986, marking early cross-market appeal. Elite swiftly ported it to Commodore 64 (Chris Butler) and ZX Spectrum (Keith Burkhill, Nigel Alderton; graphics by Karen Trueman), with the Spectrum version adding a narrative intro. The UK’s arcade–computer–console ecosystem and positive reviews helped it become a lasting classic and influence later games.
Context Mode is an MCP server that sits between Claude Code and external tools to cut context-window use by ~98%. It sandboxes tool outputs so only stdout enters the chat; raw data stays in the subprocess. Real-world tests shrink outputs from 315 KB to 5.4 KB (examples: Playwright snapshot 56 KB → 299 B; issues 59 KB → 1.1 KB; logs 45 KB → 155 B). Sessions run ~3 hours vs ~30 minutes, with 99% context remaining after 45 minutes. It uses a 10-language sandbox, BM25-indexed Markdown knowledge base, and is MIT open-source; install via plugin marketplace or MCP.
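The core mechanism — raw tool data stays inside a subprocess and only the short stdout summary would reach the chat context — can be sketched generically (this is an illustration of the idea, not Context Mode's actual code):

```python
import subprocess
import sys
import textwrap

# Generic sketch: a large tool result is processed inside a subprocess;
# only the short stdout summary would enter the model's context window.
def run_sandboxed(code: str, max_chars: int = 1000) -> str:
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=30,
    )
    # Raw data lives and dies inside the child process; we keep only
    # stdout, truncated as a safety net.
    return result.stdout[:max_chars]

# Child builds a ~300 KB payload but prints only a one-line summary.
summary = run_sandboxed(textwrap.dedent("""
    payload = "x" * 315_000        # stand-in for a huge tool output
    print(f"payload: {len(payload)} bytes, first char {payload[0]!r}")
"""))
print(summary)
```

Only the one-line summary crosses the process boundary, which is how a 315 KB result can shrink to a few hundred bytes of context.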
Rivet Actors are a serverless primitive for stateful workloads. Each actor has built-in state, storage, workflows, scheduling, and WebSockets for real-time, scalable apps. Features include in-memory state with durable persistence, durable queues, timers, and WebSockets, plus multi-step workflows. Use cases: AI agents, sandbox orchestration, collaborative documents, per-tenant databases, and chat. Deploy with a single API: self-host (Rust binary or Docker on Postgres/FS/FoundationDB), Rivet Cloud, or Apache‑2.0 open source. Tooling includes RivetKit clients and runtimes.
Verified Spec-Driven Development (VSDD) fuses Spec-Driven, Test-Driven, and Verification-Driven Development into an AI‑driven pipeline. Specs are the source of truth; tests define behavior; adversarial review hardens spec and code. Roles: Architect (human), Builder (AI like Claude), Tracker (Chainlink), Adversary (Sarcasmotron). Phases: Phase 1 Spec crystallization (behavioral specs, verification architecture, purity boundaries, review); Phase 2 Test-First Implementation (red tests, minimal code, refactor); Phase 3 Adversarial Refinement; Phase 4 Feedback Integration; Phase 5 Formal Hardening (proofs, fuzzing, security); Phase 6 Convergence. Core: spec supremacy, verification-first, red before green, accountability, entropy resistance. Use for high-assurance, long-lived, multi-AI projects with strong security needs.
Quanta explains how Georg Cantor's 1874 proof that infinities come in different sizes reshaped math, but newly unearthed letters show it was partly plagiarized from Dedekind. A missing Nov 1873 letter to Cantor reveals Dedekind’s proof that algebraic numbers have the same size as the integers, which Cantor folded into his publication. The saga includes Kronecker’s opposition and Cantor’s depression. Journalist Demian Goos, pursuing a podcast, locates the letters in Halle archives in 2024–25, rebalancing the record. The piece argues math is a collaborative enterprise; Cantor’s genius remains foundational, but credit matters.
An interactive guide shows how diffusion models turn text prompts into images by denoising random noise in a compressed latent space. During training, an encoder/decoder links latent space to real images; prompts map to a high-dimensional embedding space that guides the denoising. The process starts from a random seed and a chosen number of steps; the guidance scale determines how strongly the prompt shapes the result. More detailed prompts yield tighter results; you can interpolate between prompts to generate in-between images. The article uses Photoroom PRX as an example.
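The guidance-scale mechanic can be sketched with the standard classifier-free guidance formula; this is a toy illustration with random arrays standing in for real model predictions, not Photoroom PRX's actual code:

```python
import numpy as np

# Toy sketch of classifier-free guidance in one denoising step.
# eps_uncond / eps_cond stand in for the model's noise predictions
# without and with the text prompt; a real model predicts these with a
# U-Net or transformer over the compressed latent.
def guided_noise(eps_uncond, eps_cond, guidance_scale):
    # scale > 1 pushes the prediction toward the prompt-conditioned direction
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

rng = np.random.default_rng(seed=42)      # the "random seed" the guide mentions
latent = rng.standard_normal((4, 8, 8))   # small stand-in latent
eps_u = rng.standard_normal(latent.shape)
eps_c = rng.standard_normal(latent.shape)

# scale 1.0 reproduces the conditioned prediction exactly;
# higher scales exaggerate the prompt's influence.
assert np.allclose(guided_noise(eps_u, eps_c, 1.0), eps_c)
blended = guided_noise(eps_u, eps_c, 7.5)
```

Running this step repeatedly over the chosen number of steps, then decoding the final latent, is the denoising loop the guide walks through.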
On a flight, the narrator meets a Belgian 747 pilot who reflects that after twenty years there is little room for growth in his job. The narrator, a programmer, sees a parallel: coding agents have begun replacing much of his work. Early on, AI merely assisted with search; now agents can produce end-to-end features, and evolving prompting skill alone is not enough. He worries about skill atrophy: relying on agents, you may stop improving and stop understanding solutions, sometimes inheriting wholesale code that is partially wrong. The piece argues that deep domain knowledge remains essential and suggests balancing hand coding with AI output.
The piece argues AI-assisted development boosts velocity but not comprehension, creating cognitive debt that remains invisible to velocity metrics. While features ship and MTTR may stay flat, engineers’ understanding of what they built declines as code is generated faster than it is absorbed. This reshapes the reviewer role, risking shallow approvals from reviewers who never built a real mental model of the change. It also fuels a burnout pattern: fast output paired with low confidence. Tacit organizational memory erodes when new code is created without deep internalization, threatening future architectural judgement. Without direct measures of comprehension, leadership optimizes for what is measurable—velocity.
gitcredits rolls movie‑style credits for your git repo in the terminal. Install with go install github.com/Higangssh/gitcredits@latest or build from source and run gitcredits inside any repo. It displays an ASCII art title, project lead, contributors, notable feat/fix commits, and stats (total commits, contributors, GitHub stars, language, license). GitHub metadata (stars, description, license) can be shown if gh CLI is installed and authenticated; otherwise you get git‑only data. Requires Go 1.21+, MIT license. Controls: arrow keys to scroll; q or Esc to quit.
Made by Johno Whitaker using FastHTML