Pentagon brings Big Tech AI onto classified networks
The Defense Department inks agreements with Google, Nvidia, OpenAI and others to run AI tools on classified systems. Also today: Nvidia ships a multimodal model for faster agents, Meta buys a humanoid AI startup, and IBM rolls out an enterprise SDLC assistant.
One-Line Summary
Defense, products, and platforms converge: the Pentagon signs AI deals for classified use while Nvidia, Meta, and IBM ship tools that push multimodal agents, embodied AI, and governed enterprise development.
Big Tech
Pentagon signs AI deals with Google, Nvidia, OpenAI
The U.S. Department of Defense says it signs agreements with eight tech firms — Google, Nvidia, SpaceX, OpenAI, Microsoft, Amazon Web Services, Oracle, and startup Reflection — to deploy their AI tools on the Pentagon’s classified networks for "lawful operational use," aiming to build an "AI-first fighting force." 1
The department characterizes the move as a way to strengthen decision-making across all domains of warfare; a Pentagon official adds that some partners already hold active contracts while others are at the agreement stage pending formal contracts. The Pentagon’s CTO is quoted discussing how company-by-company guardrails are negotiable but must align with government values and restrictions. 1
The deals arrive after a dispute with Anthropic over safety guardrails and amid worker pushback inside tech companies: hundreds of Google employees urge leadership to avoid classified AI workloads. OpenAI reiterates limits it places on military use — not for mass domestic surveillance, high‑stakes automated decisions, or directing autonomous weapons — while a source familiar with Nvidia’s agreement says it centers on its Nemotron AI models (not chips) and includes civil liberties language. 1
Summaries of the announcement from Forbes and Yahoo list OpenAI, Alphabet, Nvidia, SpaceX, Microsoft, Amazon, and Reflection among the participants and highlight the Pentagon’s framing that the agreements accelerate transformation. 2
Nvidia debuts Nemotron 3 Nano Omni to speed AI agents
Nvidia introduces Nemotron 3 Nano Omni as a single model that understands video, audio, images, and text, letting agents avoid juggling separate perception models and cutting latency — with Nvidia claiming up to 9x higher throughput than other open omni models at similar interactivity. 3
Built on a 30B‑A3B hybrid mixture‑of‑experts backbone, the model integrates vision and audio encoders and uses multimodal token‑reduction to lower inference latency; Nvidia releases checkpoints in BF16, FP8, and FP4 along with portions of training data and code to spur customization and research. 4
Enterprises can deploy it broadly — from Jetson to DGX and major clouds — and access it via Hugging Face, OpenRouter, and as a NIM microservice; Nvidia says the Nemotron 3 family sees over 50 million downloads in the past year, and early adopters report faster GUI understanding, document intelligence, and audio‑video reasoning. 3
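Since the model is reachable through OpenAI-compatible hosted endpoints such as OpenRouter, a single request can mix text and image inputs instead of fanning out to separate perception models. A minimal sketch of what such a request body might look like — the model identifier is an assumption, not a name confirmed by Nvidia, so check the provider's catalog before use:

```python
import json

# Hypothetical model ID -- verify the real identifier in the provider's catalog.
MODEL_ID = "nvidia/nemotron-3-nano-omni"  # assumption, for illustration only

def build_omni_request(question: str, image_url: str) -> dict:
    """Build an OpenAI-style chat payload mixing a text part and an image part.

    OpenAI-compatible hosted endpoints accept this content-parts shape for
    multimodal models; additional parts follow the same pattern.
    """
    return {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_omni_request(
    "Summarize the dashboard shown in this screenshot.",
    "https://example.com/dashboard.png",
)
print(json.dumps(payload, indent=2))
# In practice this body is POSTed to the provider's chat-completions endpoint
# with an Authorization header; no network call is made in this sketch.
```

Sending audio or video context works the same way: one request, one model, one response — which is where the latency savings over a pipeline of separate perception models come from.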
Industry & Biz
Meta acquires Assured Robot Intelligence to advance humanoids
Meta, which runs Facebook, Instagram, and WhatsApp, acquires humanoid robotics startup Assured Robot Intelligence (ARI) for an undisclosed sum; the team, including co‑founders Lerrel Pinto and Xiaolong Wang, joins Meta’s Superintelligence Labs to bring expertise in whole‑body robot control and self‑learning. 5
TechCrunch notes ARI builds foundation models for robots to perform physical tasks; Pinto previously co‑founded kid‑size humanoid startup Fauna Robotics (acquired by Amazon in March), while Wang worked at Nvidia and taught at UC San Diego. Meta frames the deal as adding frontier capabilities for robot control and model design. 5
Analysts highlight a broader industry view that training AI in the physical world can matter for long‑term capability, and market forecasts vary widely — from billions of dollars by 2035 to trillions by 2050 — underscoring both the potential and the uncertainty around humanoids. 5
New Tools
IBM launches Bob, an AI partner for enterprise SDLC
IBM launches IBM Bob, an AI‑first development partner that spans planning, coding, testing, deployment, and modernization with built‑in governance and security; IBM says 80,000+ employees use Bob internally, with surveyed users reporting an average 45% productivity gain, and a 30‑day free trial is available. 6
Bob’s multi‑model orchestration routes each task to a suitable model based on accuracy, latency, and cost — drawing on Anthropic Claude, Mistral open models, IBM Granite, and specialized fine‑tuned models — while features like prompt normalization, sensitive‑data scanning, real‑time policy enforcement, and a BobShell CLI add auditability and control. 6
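The accuracy/latency/cost routing described above can be pictured as a weighted scoring function over a model catalog. A minimal sketch under stated assumptions — the model names, benchmark numbers, and weights below are all illustrative, not IBM's actual routing table:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    accuracy: float    # benchmark score for the task type, 0..1
    latency_ms: float  # typical time to first token
    cost: float        # dollars per 1K tokens

# Illustrative catalog only -- not real routing data.
CATALOG = [
    ModelProfile("claude-frontier", accuracy=0.95, latency_ms=900, cost=0.015),
    ModelProfile("granite-code",    accuracy=0.88, latency_ms=300, cost=0.002),
    ModelProfile("mistral-small",   accuracy=0.82, latency_ms=200, cost=0.001),
]

def route(task_weights: dict[str, float]) -> ModelProfile:
    """Pick the model with the best weighted score for this task.

    Higher accuracy raises the score; latency and cost count against it.
    """
    def score(m: ModelProfile) -> float:
        return (task_weights["accuracy"] * m.accuracy
                - task_weights["latency"] * m.latency_ms / 1000
                - task_weights["cost"] * m.cost * 100)
    return max(CATALOG, key=score)

# A quick test-generation task weights speed and cost over peak accuracy;
# a design-review task would set latency and cost weights near zero.
print(route({"accuracy": 1.0, "latency": 0.5, "cost": 1.0}).name)
```

The point of the sketch is the shape of the decision, not the numbers: outcome-based routing makes "which model?" a per-task question answered by measurable criteria rather than a one-time platform choice.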
Case studies IBM shares include Blue Pearl completing a typical 30‑day Java upgrade in 3 days with 160+ engineering hours saved, IBM Instana developers self‑reporting 70% time reductions (about 10 hours/week), IBM Maximo reporting 69% time savings, and EY using Bob to modernize its global tax platform. 7
What This Means for You
The Pentagon’s agreements signal that mainstream AI vendors are entering highly regulated, high‑stakes environments — a cue for any team using AI to nail down "lawful operational use" definitions, data classification, and model guardrails. If your org uses AI for sensitive workflows, align acceptable‑use language to specific prohibited outcomes (e.g., surveillance, high‑stakes auto decisions) and approval paths. 1
Nvidia’s Nemotron 3 Nano Omni points to fewer moving parts in multimodal customer and ops workflows. Consolidating vision, audio, and language into one model can cut handoffs, reduce latency, and improve context for agents that read screens, parse PDFs, and summarize calls — helpful for service, risk, and compliance teams under time pressure. 3
IBM Bob shows how enterprises are operationalizing AI across the full software development lifecycle with multi‑model routing and built‑in governance. For product and platform leads, that means prioritizing auditable workflows, role‑specific approvals, and outcome‑based model selection instead of chasing a single "best" LLM. 6
Meta’s ARI deal underscores the growing link between digital agents and the physical world. Even if you’re not buying robots, expect data from sensors, video, and on‑device inference to shape how you scope AI projects and what skills — like multimodal evaluation and safety reviews — you need on cross‑functional teams. 5
Action Items
- Start a 30‑day IBM Bob trial: Spin up the trial and run a small modernization or test‑generation pilot with your engineering counterparts; document time saved and required approvals to gauge fit.
- Prototype a multimodal task with Nemotron 3 Nano Omni: Use a hosted endpoint to analyze a PDF plus a screenshot in one run; compare speed and output quality versus your current multi‑model workflow.
- Tighten your AI acceptable‑use policy: Add three concrete “not allowed” clauses inspired by today’s news (e.g., no mass domestic surveillance, no high‑stakes automated decisions, no weapon targeting) and route for legal/ops sign‑off.
- Map one agentic workflow for customer ops: Pick a support process (screen capture + call notes + logs) and outline inputs/outputs and approval gates so an AI agent could handle 1–2 steps safely.
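The last action item — outlining inputs, outputs, and approval gates — can be sketched as a simple step pipeline where each step declares whether it needs human sign‑off before running. All step names and the toy support workflow below are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]   # takes and returns the workflow context
    needs_approval: bool = False  # human gate before the step executes

def run_workflow(steps: list[Step], ctx: dict,
                 approve: Callable[[str], bool]) -> dict:
    """Run steps in order; a gated step only executes if approve(name) is True."""
    for step in steps:
        if step.needs_approval and not approve(step.name):
            ctx.setdefault("skipped", []).append(step.name)
            continue
        ctx = step.run(ctx)
    return ctx

# Illustrative support workflow: the agent summarizes and drafts,
# but a human gates the outbound reply.
steps = [
    Step("summarize_ticket", lambda c: {**c, "summary": c["ticket"][:40]}),
    Step("draft_reply", lambda c: {**c, "reply": "Draft: " + c["summary"]}),
    Step("send_reply", lambda c: {**c, "sent": True}, needs_approval=True),
]

result = run_workflow(
    steps,
    {"ticket": "Login fails after password reset on iOS"},
    approve=lambda name: False,  # human declined the gate this time
)
print(result.get("sent", False), result.get("skipped"))
```

Writing the gate as data on the step (rather than logic buried in the agent) keeps the approval map auditable — the same property the governance features discussed above are aiming for.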