“Why don’t you use dependent types?”
02 Nov 2025 [ memories, AUTOMATH, LCF, type theory, Martin-Löf type theory, NG de Bruijn, ALEXANDRIA ] To be fair, nobody asks me this exact question. But people have regularly asked why Isabelle dispenses with proof objects. The two questions are essentially the same, because proof objects are intrinsic to all the usual type theories. […]
Open-source communications by bouncing signals off the Moon
A New Frontier for Ham Radio Bouncing signals off the Moon—known as Earth-Moon-Earth (EME) communication—has long been the ultimate challenge for radio amateurs. It required large antennas, expensive equipment, and accurate manual pointing and tracking. We try to bring this down to Earth, providing all the tools needed to experience the thrill of space communication, […]
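Much of the pointing and tracking burden comes down to knowing the Moon's azimuth and elevation for your station at any moment. A minimal sketch of that calculation, assuming the astropy library and an arbitrary placeholder location (not part of the project described above):

```python
# Hedged sketch: compute where to point an antenna for EME work.
# The station coordinates are placeholders; astropy is assumed to be installed.
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, get_body
from astropy.time import Time

station = EarthLocation(lat=51.5 * u.deg, lon=-0.1 * u.deg, height=35 * u.m)
now = Time.now()

moon = get_body("moon", now, location=station)
altaz = moon.transform_to(AltAz(obstime=now, location=station))

print(f"Azimuth:   {altaz.az.deg:6.1f} deg")
print(f"Elevation: {altaz.alt.deg:6.1f} deg")  # only usable when above the horizon
```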
The Useful Personal Computer

To market their new products to people who had not already spent years pining for a computer of their own, the creators of the second wave of microcomputers had to face head-on the question of what the microcomputer was actually good for. What was its value, if not as a hobby plaything for self-motivated […]
Writing FreeDOS Programs in C
Thanks to our supporters on Patreon: I’d like to thank everyone who supported this project on Patreon, including Jason Pittman, Alexander Shendi, Gustavo Pezzi, Scott Bollinger, Ryan Harris, Rafael Campos, K MI, Nikola Markovic BGD, Wade Brainerd, Mike Garcia, Philip Espi, Vlastimil Holer, Dan Mons, Stephen Smoogen, Bill Marshall, Patater, Brett Owen, Ronald Eichler, Tom […]
X.org Security Advisory: multiple security issues X.Org X server and Xwayland
X.Org Security Advisory: multiple security issues X.Org X server and Xwayland. Olivier Fourdan (ofourdan at redhat.com), Tue Oct 28 13:22:18 UTC 2025. X.Org Security Advisory, October 28, 2025: issues in X.Org X server prior to 21.1.18 and Xwayland prior to 24.1.8. Multiple issues have been found in the X server and Xwayland implementations […]
Matched Clean Power Index

Many British consumers pay for 100% renewable electricity. But how much are they actually getting? The power sector has a dedicated system to track the generation and consumption of renewable power, and suppliers use it to back their green energy claims. But there’s a problem. The rules allow suppliers to claim that summer power from […]
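The idea behind an index like this is temporal matching: crediting a supplier only for renewable generation that coincides in time with its customers' consumption, rather than netting over a whole year. A minimal sketch of that distinction, with made-up hourly figures and my own definitions rather than the index's published methodology:

```python
# Hedged sketch: hourly matching of renewable generation against demand.
# The data and the "matched share" definition are illustrative assumptions only.

def matched_share(generation_mwh, consumption_mwh):
    """Fraction of consumption covered by renewable generation in the same hour."""
    matched = sum(min(g, c) for g, c in zip(generation_mwh, consumption_mwh))
    return matched / sum(consumption_mwh)

def annual_share(generation_mwh, consumption_mwh):
    """Fraction claimed when certificates can be netted over the whole period."""
    return min(sum(generation_mwh), sum(consumption_mwh)) / sum(consumption_mwh)

# Toy example: plenty of midday (summer/solar) output, little when demand peaks.
gen  = [0, 0, 6, 10, 10, 6, 0, 0]   # MWh per hour
load = [3, 3, 3,  3,  3, 3, 6, 6]   # MWh per hour

print(f"annually matched: {annual_share(gen, load):.0%}")   # 100%
print(f"hourly matched:   {matched_share(gen, load):.0%}")  # 40%
```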
Tongyi DeepResearch – open-source 30B MoE Model that rivals OpenAI DeepResearch
From Chatbot to Autonomous Agent
We are proud to present Tongyi DeepResearch, the first fully open-source Web Agent to achieve performance on par with OpenAI’s DeepResearch across a comprehensive suite of benchmarks. Tongyi DeepResearch demonstrates state-of-the-art results, scoring 32.9 on the academic reasoning task Humanity’s Last Exam (HLE), 43.4 on BrowseComp and 46.7 on BrowseComp-ZH in extremely complex information-seeking tasks, and achieving a score of 75 on the user-centric xbench-DeepSearch benchmark, systematically outperforming all existing proprietary and open-source Deep Research agents. Beyond the model, we share a complete and battle-tested methodology for creating such advanced agents. Our contribution details a novel data synthesis solution applied across the entire training pipeline, from Agentic Continual Pre-training (CPT) and Supervised Fine-Tuning (SFT) for cold-starting, to the final Reinforcement Learning (RL) stage. For RL, we provide a full-stack solution, including algorithmic innovations, automated data curation, and robust infrastructure. For inference, the vanilla ReAct framework showcases the model’s powerful intrinsic capabilities without any prompt engineering, while the advanced Heavy Mode (test-time scaling) demonstrates the upper limits of its complex reasoning and planning potential.

Continual Pre-training and Post-training Empowered by Fully Synthetic Data

Continual Pre-training Data
We introduce Agentic CPT into deep research agent training, creating powerful agentic foundation models for post-training. We propose AgentFounder, a systematic and scalable solution for large-scale data synthesis that creates a data flywheel with data from the post-training pipeline.

Data Reorganization and Question Construction. We continuously collect data from various sources, including documents, publicly available crawled data, knowledge graphs, and historical trajectories and tool-invocation records (e.g., search results with links). These diverse data sources are restructured into an entity-anchored open-world knowledge memory. Based on randomly sampled entities and their corresponding knowledge, we generate multi-style (question, answer) pairs (see the sketch below).

Action Synthesis. Based on diverse problems and historical trajectories, we construct first-order and higher-order action synthesis data. Our method enables large-scale and comprehensive exploration of the potential reasoning-action space within offline environments, thereby eliminating the need for additional commercial tool API calls. Specifically, for higher-order action synthesis, we remodel trajectories as multi-step decision-making processes to enhance the model’s decision-making capabilities.

Post-training Data
High-quality synthetic QA pairs. We develop an end-to-end solution for synthetic data generation. This fully automated process requires no human intervention to construct super-human-quality datasets designed to push the boundaries of AI agent performance. Through long-term exploration and iteration, from early methods like reverse-engineering QA pairs from clickstreams (WebWalker), to the more systematic graph-based synthesis (WebSailor and WebSailor-V2), and then the formalized task modeling (WebShaper), our approach ensures both exceptional data quality and massive scalability, breaking through the upper limits of model capabilities.
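As a rough picture of the entity-anchored knowledge memory and the multi-style (question, answer) sampling described under Continual Pre-training Data, here is a minimal sketch; the entities, facts, and templates are invented placeholders, not AgentFounder's actual data or code.

```python
# Hedged sketch: entity-anchored knowledge memory -> multi-style QA pairs.
# All entities, facts, and question templates below are made-up placeholders.
import random

knowledge_memory = {
    "Grace Hopper": [
        ("field", "computer science"),
        ("known for", "the first compiler"),
    ],
    "COBOL": [
        ("type", "programming language"),
        ("first appeared", "1959"),
    ],
}

templates = [
    "What is the {relation} of {entity}?",
    "Regarding {entity}, state its {relation}.",
]

def sample_qa_pairs(memory, n=3, seed=0):
    """Sample (question, answer) pairs anchored on randomly chosen entities."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        entity = rng.choice(list(memory))
        relation, value = rng.choice(memory[entity])
        question = rng.choice(templates).format(entity=entity, relation=relation)
        pairs.append((question, value))
    return pairs

for q, a in sample_qa_pairs(knowledge_memory):
    print(f"Q: {q}\n   A: {a}")
```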
To address complex, high-uncertainty questions, we synthesize web-based QA data through a novel pipeline. The process begins by constructing a highly interconnected knowledge graph from real-world websites via random walks, together with isomorphic tables for tabular data fusion, ensuring a realistic information structure. We then sample subgraphs and subtables to generate initial questions and answers. The crucial step involves intentionally increasing difficulty by strategically obfuscating or blurring information within the question. This practical approach is grounded in a complete theoretical framework, where we formally model QA difficulty as a series of controllable “atomic operations” (e.g., merging entities with similar attributes) on entity relationships, allowing us to systematically increase complexity. To further reduce inconsistencies between the organized information structure and the reasoning structure of the QA, and to enable more controllable scaling of reasoning difficulty and structure, we propose a formal model of the information-seeking problem based on set theory. With this formalization, we developed agents that expand the problem in a controlled manner and minimize reasoning shortcuts and structural redundancy, leading to further improved QA quality. Moreover, this formal modeling also allows for efficient verification of QA correctness, effectively addressing the challenge of validating synthetic information-seeking data for post-training.

Furthermore, we have developed an automated data engine to scale up the creation of PhD-level research questions. This engine begins with a multi-disciplinary knowledge base, generating “seed” QA pairs that require multi-source reasoning. Each seed then enters a self-guided loop of “iterative complexity upgrades”, where a question-crafting agent is equipped with a powerful toolset including web search, academic retrieval, and a Python execution environment. In each iteration, the agent expands knowledge boundaries, deepens conceptual abstraction, and even constructs computational tasks, creating a virtuous cycle where the output of one round becomes the more complex input for the next, ensuring a controllable and systematic escalation of task difficulty.

Unleashing Agent Capabilities with Diverse Reasoning Patterns
To bootstrap the model’s initial capabilities, we constructed a set of trajectories via rejection sampling, based on the ReAct and IterResearch frameworks (for details, see below). […]
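To make the graph-based synthesis and the difficulty-raising “atomic operations” concrete, here is a toy sketch under simplified assumptions of my own: a tiny hand-written graph, a random-walk sampler, and a single blur operation. It is not the WebSailor/WebShaper pipeline, only an illustration of sampling a subgraph and then obfuscating an entity so the question requires a multi-hop lookup.

```python
# Hedged sketch: random-walk subgraph sampling plus one "atomic operation"
# (blurring an entity) to raise question difficulty. All data is made up.
import random

# Tiny hand-written knowledge graph: entity -> list of (relation, neighbor).
graph = {
    "Ada Lovelace": [("collaborated with", "Charles Babbage")],
    "Charles Babbage": [("designed", "the Analytical Engine"),
                        ("collaborated with", "Ada Lovelace")],
    "the Analytical Engine": [("was designed by", "Charles Babbage")],
}

def random_walk(graph, start, steps, rng):
    """Sample a small subgraph (a list of triples) by walking the graph."""
    triples, node = [], start
    for _ in range(steps):
        relation, nxt = rng.choice(graph[node])
        triples.append((node, relation, nxt))
        node = nxt
    return triples

def blur_entity(text, entity, hint):
    """One 'atomic operation': replace a named entity with a vaguer description."""
    return text.replace(entity, hint)

rng = random.Random(7)
walk = random_walk(graph, "Ada Lovelace", steps=2, rng=rng)
first, last = walk[0], walk[-1]
head, relation, answer = last

easy_q = f"Which entity is linked to {head} via '{relation}'?"
hint = f"the entity that {first[0]} {first[1]}"     # derived from the first hop
hard_q = blur_entity(easy_q, head, hint)            # answering now needs two hops

print("easy question:", easy_q)
print("hard question:", hard_q)
print("answer:       ", answer)
```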
HyperRogue, the non-Euclidean roguelike, is a mind-melting masterpiece
HyperRogue, the non-Euclidean roguelike, is a mind-melting masterpiece — Rock Paper Shotgun Current version: 12.0 (Jun 3, 2021) – get it here, play online, or buy it on Steam, itch.io, Google Play, or the App Store! See the Gallery for high-quality images of all lands. You are a lone adventurer in a strange […]
Mock – An API creation and testing utility: Examples
Delaying specific endpoints: Making an existing API slow can be easily accomplished by combining mock’s Base APIs and the delay option. $ mock serve -p 8000 --base example.com --delay 2000 However, you may want to make a specific endpoint slow instead of the whole API. This can be achieved using middlewares: $ mock serve -p 8000 […]
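A quick way to observe the effect of the delay option, assuming the first command above is running locally on port 8000; the /users path is only an illustrative placeholder endpoint, not part of mock's documentation.

```python
# Hedged sketch: measure the added latency of the mocked API.
import time

import requests

start = time.monotonic()
response = requests.get("http://localhost:8000/users")
elapsed = time.monotonic() - start

print(f"{response.status_code} after {elapsed:.1f}s")  # expect roughly 2 s of added delay
```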
Your URL Is Your State

A couple of weeks ago, when I was publishing The Hidden Cost of URL Design, I needed to add SQL syntax highlighting. I headed to the PrismJS website, trying to remember whether it should be added as a plugin or something else. I was overwhelmed by the number of options on the download page, so I headed back […]