Front-page articles summarized hourly.
An Almond Offensive Security write-up shows how Apache FOP's PostScript escaping can be bypassed to execute arbitrary PostScript, even under Ghostscript's -dSAFER sandbox. By abusing escaping, line wrapping, and token separation, an attacker can inject code that escapes the sandbox, reads and writes /tmp, and even prints a flag. Techniques include non-breaking spaces for token separation, hex-encoded commands, redefining backslash with cvx, and chaining PostScript commands. PoCs are demonstrated; the Windows target used CVE-2025-46646. Apache FOP will not patch the issue, only update its documentation.
ffmpeg-over-ip enables GPU-accelerated FFmpeg transcoding on remote servers without GPU passthrough or shared filesystems. Run a GPU-enabled server and use the client binary in place of ffmpeg; the client forwards commands while a patched server-side ffmpeg tunnels all file I/O back to the client. The server needs no local storage and listens on a single TCP port. Supports NVENC, QSV, VAAPI, AMF, etc. Clients and servers run on Linux, macOS, and Windows (multiple arches). Includes pre-built binaries, Docker guidance, and HMAC-SHA256 authentication under MIT/GPL licenses. Quick-start and docs available.
The author argues that AI-generated code isn’t trustworthy without an explicit, pre-defined spec. They advocate a TDD-inspired workflow: write acceptance criteria before prompting the AI, so “done” is clearly defined. In practice, frontend changes are verified against concrete criteria (e.g., login flow checked by Playwright) and backend behavior via API checks. They built a Claude Skill (verify) that plans, runs verifications, and returns per-criterion verdicts. The core takeaway: define measurable criteria up front; use automated verification to catch integration issues rather than relying on code reviews alone.
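The per-criterion verdict idea generalizes beyond any one tool; a minimal Python sketch of the pattern (the `Verdict` structure, checker names, and example criteria are hypothetical illustrations, not the author's actual `verify` skill — real checks might drive Playwright or hit an API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    criterion: str
    passed: bool
    detail: str

def run_verification(checks: dict[str, Callable[[], tuple[bool, str]]]) -> list[Verdict]:
    """Run each acceptance check and collect a per-criterion verdict."""
    verdicts = []
    for criterion, check in checks.items():
        try:
            ok, detail = check()
        except Exception as exc:  # a crashed check counts as a failed criterion
            ok, detail = False, f"check raised {exc!r}"
        verdicts.append(Verdict(criterion, ok, detail))
    return verdicts

# Hypothetical pre-defined acceptance criteria, written before prompting the AI.
checks = {
    "login redirects to /dashboard": lambda: (True, "status 302, Location: /dashboard"),
    "API rejects empty payload": lambda: (True, "status 400"),
}
for v in run_verification(checks):
    print(("PASS" if v.passed else "FAIL"), v.criterion, "-", v.detail)
```

The point of the structure is that "done" is a list of verdicts, not a reviewer's impression.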
Access to the page is forbidden (HTTP 403).
Yann LeCun cofounded Advanced Machine Intelligence (AMI), a Paris-based startup that has raised over $1 billion to develop AI world models—systems that understand the physical world, remember persistently, reason, plan, and stay controllable and safe. Valued at about $3.5 billion, AMI's backers include Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, with Mark Cuban, Eric Schmidt, and Xavier Niel among individual supporters. AMI will target enterprise use in manufacturing, biomedicine, and robotics, with partners like Toyota and Samsung, and offices in Paris, Montreal, Singapore, and New York. LeCun argues LLMs alone won't reach human-level intelligence and favors open, distributed development.
Small Datum reports that MariaDB 12.3 improves vector-index performance and recall over 11.8 and beats Postgres 18.2 with pgvector 0.8.1, especially on larger datasets. Using ann-benchmarks with the dbpedia-openai-x-angular datasets at 100k, 500k, and 1M items on a 48-core/128 GB server, 12.3 uses less CPU per query (confirmed via vmstat). Gains over 11.8 grow with dataset size, and 12.3 achieves the best results among the configurations tested.
Infinity, Inc. is a DC Comics superhero team consisting of the descendants of the Justice Society of America.
The piece outlines eight levels of agentic engineering for AI-assisted coding, showing how teams move from simple tab-complete tools (Levels 1–2) to richer context management (3), compounding learning (4), and shared capabilities through MCPs and skills (5). It then covers harnessing feedback loops (6), the rise of background agents (7), and finally autonomous agent teams (8). Key themes include managing context and tools, planning versus automation, backpressure and security, orchestration vs. parallel agents, and the ongoing teamwork-driven race toward higher levels.
Shahram Khosravi reframes defeat as a method: thinking and acting from within ruins caused by colonial dispossession. He links the Bakhtiari oil theft (1908) and Iran’s precarity to Fanon’s idea that defeat can generate knowledge, not paralysis. An “open face” facing disaster without illusion enables radical imagination, fugitivity, and ethical critique. From Karbala, Ashura, and Black/Indigenous histories, defeat reveals power’s injustices and births resistant thought—radical hope, not victory, as political practice. The defeated must think with defeat to imagine new futures, especially for Palestinians and others.
Using Castlevania as a metaphor, the piece portrays AI agents as Dracula—driven by prompts and a reward model—while security practitioners are the Belmonts who can't defeat Dracula but must win every battle. An agent is a simple loop of API calls to LLMs and tools, now aided by planning, memory, and multi-agent orchestration. But non-determinism, tool errors, and infinite loops persist, and API fragmentation across OpenAI/Claude/Gemini hinders model-agnostic security. The author argues security is behind: govern untrustworthy payloads with anomaly detection, circuit breakers, IAM, and secure defaults, not LLM defenses; standards will arrive slowly, and costs remain high.
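The "simple loop" view of an agent can be made concrete with stubs; a minimal Python sketch (the `call_llm` and `run_tool` stubs are hypothetical stand-ins for real API calls, and the `max_steps` budget is the kind of circuit breaker the author recommends against infinite loops):

```python
def call_llm(messages):
    """Stub LLM: requests a tool call once, then answers. A real agent hits an API here."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": "2 + 3 = 5"}

def run_tool(name, args):
    """Dispatch a tool call; real tools would be shell, search, file I/O, etc."""
    tools = {"add": lambda a, b: a + b}
    return tools[name](**args)

def agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # circuit breaker: bound the loop
        reply = call_llm(messages)
        if "answer" in reply:
            return reply["answer"]
        result = run_tool(reply["tool"], reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("step budget exhausted")

print(agent("what is 2 + 3?"))  # → 2 + 3 = 5
```

Everything else—planning, memory, orchestration—is layered on top of this loop, which is why governing its inputs and outputs matters more than trusting the model.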
Addie Foote recounts trying to post-train a 1-trillion-parameter open-weight model (Kimi-K2-Thinking) with open-source tools. After failed attempts with LLaMA-Factory, KTransformers, and HuggingFace, the team builds a custom training stack because the existing options are buggy and memory/quantization issues abound. The piece follows creating a Yoda-style dataset, loading the model across GPUs, applying LoRA to quantized MoE weights, and wrestling with compression, CUDA memory fragmentation, and dequantization. A working forward/backward loop and some qualitative gains appear, but the open-source stack remains brittle, with debt across layers. The takeaway: sometimes patching yields diminishing returns and building may be necessary.
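The LoRA technique the team applies—a frozen (dequantized) base weight plus a trainable low-rank update—can be sketched without any framework; a dependency-free Python illustration of the forward math (shapes, rank, and values are illustrative, not the Kimi-K2 setup):

```python
def matmul(A, B):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, alpha, r):
    """y = x @ W + (alpha / r) * (x @ A @ B), computed without merging.

    W is the frozen (possibly dequantized) base weight; only A and B train.
    """
    base = matmul(x, W)
    low_rank = matmul(matmul(x, A), B)  # x @ A (d_in x r), then @ B (r x d_out)
    scale = alpha / r
    return [[b + scale * l for b, l in zip(br, lr)] for br, lr in zip(base, low_rank)]

# Illustrative shapes: d_in = 2, d_out = 2, rank r = 1.
x = [[1.0, 2.0]]
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen identity base weight
A = [[0.5], [0.5]]            # d_in x r
B = [[1.0, -1.0]]             # r x d_out
print(lora_forward(x, W, A, B, alpha=2, r=1))  # → [[4.0, -1.0]]
```

The appeal for a 1T-parameter model is that only the tiny A and B matrices need gradients, while the quantized base weights stay untouched—exactly where the compression and dequantization headaches described in the piece arise.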
Andy Chen argues that enterprises can create an Enterprise Context Layer (ECL)—a centralized, self-updating knowledge layer that enables an AI to reason with organizational context—using roughly 1000 lines of Python and a GitHub repo. Distinguishing retrieval from synthesis, he shows how context graphs, citations, and an agent-based maintenance system can map product, process, and politics across R&D, GTM, legal, and HR. A seed in meta/ and 20 parallel agents produce a rich ECL; governance, access control, and future scalability are the key next steps.
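The retrieval-versus-synthesis distinction rests on a graph of cited facts; a minimal Python sketch of such a context-graph node (the fields, `synthesize` helper, and example entries are hypothetical, not Chen's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    """One organizational fact in the context graph, always carrying its sources."""
    claim: str
    citations: list[str]                             # links back to source docs
    edges: list[str] = field(default_factory=list)   # ids of related nodes

def synthesize(nodes, topic):
    """Synthesis (vs. plain retrieval): combine related claims into one cited answer."""
    relevant = [n for n in nodes if topic in n.claim.lower()]
    answer = " ".join(n.claim for n in relevant)
    sources = sorted({c for n in relevant for c in n.citations})
    return answer, sources

graph = [
    ContextNode("Pricing approval requires legal sign-off.", ["wiki/legal-process"]),
    ContextNode("Pricing changes ship quarterly.", ["gtm/roadmap-q3"]),
    ContextNode("Hiring freeze lifted in March.", ["hr/announcements"]),
]
answer, sources = synthesize(graph, "pricing")
print(answer)
print(sources)  # → ['gtm/roadmap-q3', 'wiki/legal-process']
```

Citations attached at the node level are what make the layer auditable—the governance and access-control concerns the piece flags would hang off exactly these source references.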
Notes exploring how AI tools should price by outcomes, not monthly credits. Introduces 'vibe coders' who build AI apps but stumble on monetization infrastructure. Proposes Lovable adopt revenue sharing instead of upfront subscriptions: take a percentage of creators' revenue (e.g., 5–15%, up to 30%), in exchange for one-click monetization, Stripe handling, migration and optimization services, and scalable support. Describes a Lovable Partners Program where hands-on services train the platform, enabling automation of repeatable tasks and a data flywheel. Envisions a 'Billion-Dollar Mission' to pay out $1B to vibe coders; argues platforms that align incentives will capture this wave.
After 18 months and multiple pivots, the team shut down the product and started over. They regret the No-Tests era and rewrote with strict TypeScript and tests, abandoning heavy Next.js/Server Actions. They now use React with tRPC and a small Hono backend, deployed on Kubernetes and served from a CDN. For orchestration, they favor Argo over useworkflow/Temporal to manage stateful Kubernetes jobs. They’re seeking feedback and plan a launch with design partners.
Complex systems like climate, markets, and addiction resist small, elegant theories; Enlightenment tools often fail on complexity. The Santa Fe Institute produced descriptive but not prescriptive insights. Modern AI models—especially large language models—are enormous in parameters, yet rely on a compact architecture that compresses complex systems into usable models. There may be two theory layers: system-specific weights and a universal, compact architecture. Mechanistic interpretability aims to extract structure from trained models, turning compression into science. This shift yields probabilistic predictions and a new epistemology for understanding complex phenomena.
Microsoft's Copilot update introduces "context preservation," which forces links clicked inside Copilot to open in a side panel powered by Edge rather than the user's default browser, effectively trapping browsing within Microsoft's rendering surface and raising privacy concerns; it is unclear whether the behavior is opt-in. The update can access tab context, summarize across tabs, and draft text from on-screen content; it can also save conversations and, with permission, sync passwords and form data. Rollout is limited to Windows Insider builds (version 146.0.3856.39+).
Back Market and Google are selling a $3 USB stick to install ChromeOS Flex on older PCs and Macs, giving outdated devices a new lease on life. The initial limited run offers 3,000 keys starting March 30, targeting sellers, buyers, schools, and small businesses. ChromeOS Flex is a lighter ChromeOS with no Android app support, but compatible with many old Windows machines and pre-Apple Silicon Macs per Google's compatibility list.
Could not summarize article.
Ankur Sethi built Cutlet, a dynamic programming language, in four weeks using Claude Code. It runs on macOS and Linux, and is named after his cat. Variables are declared with my, arrays hold doubles, and the language supports vectorized operations via the @ meta-operator (e.g., (temps-c @* 1.8) @+ 32). The @: operator zips arrays into maps; the language is fully expression-based, with functions declared by fn and a say function that returns nothing. Some features—like file I/O and error handling—are still missing. He emphasizes guardrails, testing, and 'agentic engineering' for LLM-driven coding.
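The Celsius-to-Fahrenheit example, (temps-c @* 1.8) @+ 32, lifts a scalar operator elementwise over an array; a rough Python analogue of that @ meta-operator semantics (an approximation for illustration—Cutlet's actual evaluator is not shown here):

```python
def vec(op):
    """Rough analogue of Cutlet's @ meta-operator: lift a scalar op elementwise."""
    return lambda xs, y: [op(x, y) for x in xs]

v_mul = vec(lambda a, b: a * b)  # stands in for @*
v_add = vec(lambda a, b: a + b)  # stands in for @+

temps_c = [0.0, 20.0, 100.0]
temps_f = v_add(v_mul(temps_c, 1.8), 32)   # (temps-c @* 1.8) @+ 32
print(temps_f)  # → [32.0, 68.0, 212.0]

# The @: operator zips arrays into maps; in Python terms, roughly:
ratings = dict(zip(["mon", "tue"], [3.5, 4.0]))
```

A meta-operator like @ keeps the core language small: one lifting rule covers every scalar operator instead of defining vectorized variants one by one.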
RCLI is an on-device voice AI for macOS (Apple Silicon) delivering a complete STT + LLM + TTS pipeline with local RAG over documents and sub-200ms end-to-end latency, with no cloud or API keys. Built on MetalRT, it runs 43 macOS actions via voice and supports hybrid document retrieval over PDFs/DOCX/TXT. Requires macOS 13+ on Apple Silicon (M1+), with M3+ and MetalRT recommended; M1/M2 fall back to llama.cpp. Install via curl script or Homebrew; an interactive TUI offers rcli setup/listen/ask/rag/models/voices/bench. Models include Qwen/Llama/LFM2; supports VAD, STT (Zipformer/Whisper), TTS, and tool calling. MIT license; MetalRT is proprietary.
Made by Johno Whitaker using FastHTML