<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>0to1log — AI News &amp; Insights</title><description>AI news curated and contextualized. From Void to Value.</description><link>https://0to1log.com/</link><item><title>Google’s Gemma 4 goes fully Apache-2.0 with frontier-class reasoning on a single GPU</title><link>https://0to1log.com/en/news/2026-04-05-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-05-research-digest/</guid><description>A dense 31B and a 26B MoE with 3.8B active params, 256K context, native function calling, and multimodal I/O—now under Apache 2.0. Here’s what truly changed, what the numbers mean, and what still breaks.</description><pubDate>Sun, 05 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Google’s Gemma 4 goes fully open-source as Microsoft asserts AI independence</title><link>https://0to1log.com/en/news/2026-04-05-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-05-business-digest/</guid><description>Google drops Apache 2.0 Gemma 4 tuned for agents and on‑device use—while Microsoft ships MAI models that undercut OpenAI’s moat. Here’s what shifts in the next two quarters.</description><pubDate>Sun, 05 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Google’s Gemma 4 resets open-model efficiency: 31B dense hits Arena top-3, edge E2B/E4B go fully offline</title><link>https://0to1log.com/en/news/2026-04-04-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-04-research-digest/</guid><description>A 31B dense model edging trillion-parameter rivals and a 26B MoE firing only 3.8B params isn’t marketing—it’s a new efficiency baseline. 
Plus, fresh recipes for shorter CoT and autonomous multi-agent search.</description><pubDate>Sat, 04 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>OpenAI Buys TBPN As Google Pushes Gemma 4; Anthropic Snaps Up Coefficient Bio</title><link>https://0to1log.com/en/news/2026-04-04-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-04-business-digest/</guid><description>OpenAI is building its own megaphone while Google arms developers with Apache 2.0 models and Anthropic buys domain expertise. The next six months will be about distribution power, vertical AI, and who writes the narrative.</description><pubDate>Sat, 04 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Google’s Gemma 4 goes Apache 2.0, pushes local multimodal LLMs from phones to H100s</title><link>https://0to1log.com/en/news/2026-04-03-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-03-research-digest/</guid><description>Open weights with real license freedom, 256k context, and edge variants tuned by Pixel’s silicon partners — plus NVIDIA’s 1M-token agent model and Microsoft’s new MAI stack. 
Here’s what actually changed.</description><pubDate>Fri, 03 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>OpenAI’s $122B War Chest Reshapes the AI Stack as Google Opens Gemma and Microsoft Ships Cheaper Multimodal Models</title><link>https://0to1log.com/en/news/2026-04-03-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-03-business-digest/</guid><description>Capital now decides AI winners: OpenAI locks in chips and data centers, Google removes licensing friction, Microsoft undercuts on price, and NVIDIA arms agents with 1M-token context.</description><pubDate>Fri, 03 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>NVIDIA’s Nemotron 3 Super pairs 1M-token context with latent MoE and MTP to push agentic throughput</title><link>https://0to1log.com/en/news/2026-04-02-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-02-research-digest/</guid><description>A hybrid Mamba-Transformer MoE with native 4‑bit pretraining and multi-token prediction lands—plus fresh results in computer-use agents and compact multimodal reasoning.</description><pubDate>Thu, 02 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>OpenAI pares projects amid compute crunch, leans on AWS and $122B war chest</title><link>https://0to1log.com/en/news/2026-04-02-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-02-business-digest/</guid><description>Compute—not models—is the new moat. OpenAI is cutting video bets, buying long-term capacity, and wiring a superapp to convert its 900M users. 
The next moves will reshape vendor power, margins, and who owns the enterprise agent stack.</description><pubDate>Thu, 02 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>OpenAI’s $122B war chest resets the AI stack: compute, distribution, and superapp ambition</title><link>https://0to1log.com/en/news/2026-04-01-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-01-business-digest/</guid><description>The largest private AI raise ever locks in compute, opens retail participation, and points to a unified superapp—while Microsoft goes multi‑model and Anthropic inks a government pact.</description><pubDate>Wed, 01 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Microsoft ships Fara-7B on-device web agent and Harrier SOTA embeddings, as LIV-hybrid 350M model targets edge throughput</title><link>https://0to1log.com/en/news/2026-04-01-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-04-01-research-digest/</guid><description>Agentic computing moves local: a 7B visual-action model beats larger web agents while Microsoft quietly drops a decoder-only multilingual embedding SOTA. Meanwhile, a 350M LIV hybrid claims 40K tok/s on H100.</description><pubDate>Wed, 01 Apr 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Speculative decoding gets task-aware: TAPS routes domain-tuned drafters while vLLM tests 2-bit KV cache; biomed agents hit 77% on new benchmark</title><link>https://0to1log.com/en/news/2026-03-31-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-31-research-digest/</guid><description>A new study shows speculative sampling speedups hinge on the draft model’s training data—and that inference-time routing beats weight merging. 
Meanwhile, vLLM experiments with 4x KV cache capacity via learned quantization, and multi-agent biomed systems report hard numbers.</description><pubDate>Tue, 31 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>OpenAI Soars to $852B Valuation as Big Tech Locks In Multi-Billion AI Alliances</title><link>https://0to1log.com/en/news/2026-03-31-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-31-business-digest/</guid><description>A record $122B raise vaults OpenAI toward an IPO while Microsoft weaves multi-model Copilot and deepens ties with Anthropic and Nvidia—reshaping AI power blocs.</description><pubDate>Tue, 31 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Shield AI secures $2B and buys Aechelon, consolidating AI airpower with simulation-to-flight stack</title><link>https://0to1log.com/en/news/2026-03-30-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-30-business-digest/</guid><description>A defense AI leader just raised at late-stage mega scale and snapped up a core simulation vendor. 
Meanwhile, Apple leans into an AI platform toll-road, Google takes multimodal search live worldwide, and Oracle targets FedRAMP-grade agentic AI.</description><pubDate>Mon, 30 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>PackForcing tames video KV-cache for 2‑minute generation; TurboQuant and PolarQuant redefine long‑context efficiency</title><link>https://0to1log.com/en/news/2026-03-30-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-30-research-digest/</guid><description>A three-part KV-cache split lets short-clip training scale to minute-long video, while new quantization methods squeeze long-context LLMs onto consumer GPUs without retraining.</description><pubDate>Mon, 30 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Anthropic’s leaked ‘Capybara/Mythos’ resets AI security stakes as Big Tech tightens the enterprise playbook</title><link>https://0to1log.com/en/news/2026-03-29-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-29-business-digest/</guid><description>A frontier model leak collides with Google’s live, multimodal search rollout and OpenAI’s pre-IPO cleanup—forcing CISOs, PMs, and infra buyers to redraw their roadmaps.</description><pubDate>Sun, 29 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Closing the loop on agent outputs: token-level runtime control beats static constraints</title><link>https://0to1log.com/en/news/2026-03-29-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-29-research-digest/</guid><description>A new runtime controller steers LLM decoding mid-flight, boosting first-try tool-call success by up to 37.8 points while slashing wasted retries. 
Meanwhile, graph-augmented memory, spectral diagnostics for label noise, and AI-ready materials tooling signal shifts from offline heuristics to online control and structured data.</description><pubDate>Sun, 29 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Geometric feedback turns inference into training data: GIFT advances image-to-CAD program synthesis</title><link>https://0to1log.com/en/news/2026-03-28-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-28-research-digest/</guid><description>A new bootstrapping pipeline amortizes test-time search into model weights, delivering double-digit IoU gains and slashing inference compute in image-to-CAD. Meanwhile, edge AI goes carbon-aware and software agents get more context-savvy.</description><pubDate>Sat, 28 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Anthropic’s ‘Mythos’ leak resets AI security stakes and jolts markets</title><link>https://0to1log.com/en/news/2026-03-28-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-28-business-digest/</guid><description>A leaked Anthropic model tier above Opus raises both capability and cyber risk bars—while defense and chip suppliers reposition fast.</description><pubDate>Sat, 28 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Mistral’s open-weight Voxtral TTS takes aim at ElevenLabs as Cohere counters with ASR; defense AI consolidates with Shield AI’s $2B raise</title><link>https://0to1log.com/en/news/2026-03-27-business-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-27-business-digest/</guid><description>A lightweight, edge-ready TTS from Mistral challenges closed incumbents while Cohere pushes ultra-fast transcription—and defense AI doubles down on simulation with Shield AI buying Aechelon.</description><pubDate>Fri, 27 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item><item><title>Trillion-parameter science model lands, while long-memory attention hits 100M tokens and open TTS gets real-time on-device</title><link>https://0to1log.com/en/news/2026-03-27-research-digest/</link><guid isPermaLink="true">https://0to1log.com/en/news/2026-03-27-research-digest/</guid><description>Intern-S1-Pro scales scientific reasoning with a 1T-parameter MoE, MSA pushes end-to-end memory to 100M tokens, and Mistral’s Voxtral TTS brings 90ms edge latency.</description><pubDate>Fri, 27 Mar 2026 09:00:00 GMT</pubDate><category>ai-news</category></item></channel></rss>