Google bakes AI into shopping and browsing, as Adobe and OpenAI push agentic assistants
Chrome’s AI Mode now calls nearby stores to check stock and opens sites side-by-side with AI, while Adobe and OpenAI roll out assistants that complete multi-step creative and desktop tasks.
One-Line Summary
Big Tech pushes AI deeper into everyday browsing and creative work: Google turns Search and Chrome into a side-by-side assistant, Adobe launches a cross-app creative agent, OpenAI upgrades Codex for background desktop use and debuts a life sciences model, and Sequoia raises $7B of late-stage capital for AI.
Big Tech
Google’s AI Mode adds side-by-side browsing and can call nearby stores
Google is making browsing and shopping more hands-on with AI: AI Mode in Chrome now opens websites side-by-side with the AI panel, so you can view a page and ask follow-ups without switching tabs, and it can search across your recent tabs, images, and PDFs for more contextual answers. 1
Ahead of travel season, Google is also extending agentic features that can call local stores on your behalf to check what's in stock nearby, a capability it launched in Search last November and is rolling out to AI Mode in the U.S. in the coming weeks. You can also track prices for individual hotels directly in Search, with email alerts on significant changes. 2
Media coverage underscores how the split-screen changes the click itself — instead of leaving the AI interface, the site loads alongside it — raising new questions for publishers about attribution and screen real estate even as Google says visits still register as page views. 3
Google is also personalizing image creation in the Gemini app by using Personal Intelligence with Nano Banana 2 and your Google Photos library (opt-in), letting you generate images that include you and loved ones without long prompts; Google says the app does not directly train models on your private Photos library. 4
Industry & Biz
Sequoia Capital raises a $7B expansion fund focused on late-stage AI
Sequoia Capital closes roughly $7 billion for its expansion strategy fund, nearly double its 2022 equivalent, targeting late-stage investments in AI across the U.S. and Europe under new co-stewards Alfred Lin and Pat Grady. 5
The firm already backs OpenAI and Anthropic and has placed bets in areas like robotics (Physical Intelligence) and enterprise agents (Factory), positioning for companies eyeing public listings as capital intensity in AI grows. 5
Roundups from additional outlets echo the size and late-stage focus, framing the raise as Sequoia’s biggest in this category and a signal that investor appetite for scaled AI remains strong. 6
OpenAI launches GPT-Rosalind for life sciences and drug discovery workflows
OpenAI introduces GPT-Rosalind, a life sciences model aimed at biology, drug discovery, and translational medicine tasks like evidence synthesis, hypothesis generation, and experimental planning, with controlled access via ChatGPT, Codex, and the API for qualified users. 7
OpenAI says GPT-Rosalind integrates with 50+ scientific tools and has been tested with organizations including Amgen, Moderna, and the Allen Institute; early evaluations emphasize molecular reasoning and genomics tasks, with safeguards through a “trusted access” framework in the U.S. to mitigate misuse. 7
Secondary reporting highlights the research focus and partner testing, noting that the model helps query databases, read papers, use tools, and propose experiments — with broader market coverage tracking spillover impacts on public equities in the drug discovery space. 8
New Tools
Adobe Firefly AI Assistant: a cross‑app creative agent in public beta
Adobe is launching Firefly AI Assistant, a conversational agent that operates across Creative Cloud apps like Photoshop, Premiere, Lightroom, Illustrator, Express, and Firefly to execute multi-step creative tasks you describe in plain language. 9
The assistant suggests actions, orchestrates workflows between apps, and offers contextual controls (like sliders) based on your current project; Adobe says it will learn your preferences over time and is adding packaged "skills" such as preparing social media assets. Adobe has not specified pricing, and the public beta arrives in the coming weeks. 9
Analysts note it integrates third-party models and maintains context across sessions, with Adobe also announcing model updates like Firefly Image Model 5 and a node-based Project Graph in development to design reusable AI workflows across tools. 10
OpenAI Codex for Mac adds background computer use, in‑app browser, and memory
OpenAI's Codex desktop app now performs tasks on your Mac in the background using its own cursor, seeing, clicking, and typing while you work; it also adds an in-app browser (based on Atlas) that lets you attach precise instructions as comments on pages, and integrates gpt-image-1.5 for image generation. 11
The update expands automations to resume and schedule work across hours, days, or weeks, introduces a preview of memory for personal preferences and workflows, and ships a large set of new Codex plugins combining app integrations and MCP servers. 11
Coverage also notes multi-terminal support, handling GitHub review comments, remote SSH (alpha), and that personalization features and “computer use” are rolling out with regional and enterprise availability caveats; OpenAI has positioned Codex as part of an eventual all-in-one “super app.” 12
What This Means for You
Google’s side-by-side browsing turns AI from a separate destination into a co-pilot that sits next to the page you’re reading. For shoppers and researchers, that means fewer tab flips, faster comparisons, and practical help like calling local stores to check stock — useful for last-minute needs or travel prep.
For marketers, content teams, and publishers, the split window changes how users engage with your site. Attention may split between your page and AI answers, affecting session length and scroll depth, even if page views still register. Responsive layouts, above-the-fold clarity, and clear calls-to-action matter more when screen real estate shrinks. 3
Creative teams get a lift from agentic tools. Adobe’s Firefly Assistant promises to collapse app handoffs (e.g., Photoshop → Premiere → Express) into a single conversation, while OpenAI’s Codex can quietly run multi-step tasks in the background. If your job involves repetitive creative or QA tasks, these tools could claw back hours each week. 9 11
If you work in or adjacent to life sciences, GPT-Rosalind’s controlled-access preview signals growing, specialized AI for scientific workflows — from literature triage to experiment planning. Non-scientists won’t use it directly yet, but product managers, analysts, or consultants serving healthcare clients should note the expanding toolchain and integration with 50+ research tools. 7
Action Items
- Try AI Mode's side-by-side browsing in Chrome (US): On desktop, open AI Mode and click through results to experience the split view; test adding recent tabs, images, or PDFs with the plus menu to see how it changes your research flow.
- Use Google’s hotel price tracking for a specific stay: Search a hotel by name on desktop and toggle price tracking (or use the Prices tab on mobile) to get email alerts on significant rate changes for your dates.
- Test Google’s agentic store-calling for last-minute items: In AI Mode (US), describe the item “near me” and let Google place calls to find nearby stock before your next trip.
- Join Adobe Firefly AI Assistant public beta: When available, use it to generate a social asset set from a single brief and measure time saved versus your current multi-app workflow.
- Update Codex for Mac and trial background tasks: Have Codex handle a small, low-risk task (e.g., addressing a GitHub review comment set or staging image variants) while you continue regular work to gauge impact and trust boundaries.