A Split Era

Good Morning, AI Enthusiasts!
The strongest models are hidden, and the weaker ones are inheriting invisible diseases from their parents. Truly a beautiful time to be alive in AI.
NEW LAUNCH
We May Never Get the Strongest Claude

What's Happening: Anthropic just made something unusually clear. Claude Opus 4.7 is the latest public release, but not the strongest Claude the company has. Anthropic says Opus 4.7 was trained with efforts to reduce some cyber capabilities, while the stronger Mythos Preview is being kept to restricted testing instead of broad access.
How This Hits Reality: That changes the meaning of frontier benchmarks. Public scores no longer map cleanly to frontier capability. Anthropic's own materials say Mythos is materially stronger on offensive cyber tasks, and Project Glasswing describes it as a withheld frontier model shared only with select partners because of the risk profile. This is not normal product iteration. It is capability tiering.
Key Takeaway: The market is entering a split era. Consumer AI will keep improving, but the full-strength systems will increasingly sit behind policy gates, safety layers, and institutional access. The real gap now is not model quality alone. It is who gets the uncut version.
TOGETHER WITH MYCLAW
Opus-4.7 Now Available on MyClaw

Skip Docker, VPS configs, and late-night debugging. With MyClaw.ai, you can launch your agent in minutes and keep it running 24/7 in a dedicated environment built for serious builders.
Opus-4.7 just dropped, and MyClaw.ai is FIRST to make it instantly usable.
- New users: just sign up for MyClaw.ai and start using Claude Opus 4.7 immediately.
- Existing users: simply switch models. Done.
No API key. No complex setup. No waiting room.
While others are still configuring access, MyClaw.ai users are already running Opus-4.7 first.
BLOODLINE
AI Models Are Becoming One Bloodline

What's Happening: Anthropic published a paper in Nature showing that AI models can inherit dangerous behavioral traits from other models through harmless-looking data that seems incapable of carrying them, like number sequences, code, or reasoning traces. Humans cannot see it. Filters cannot catch it. Even another AI model usually cannot detect it. Once a student model absorbs those hidden statistical patterns during fine-tuning, the damage is already done. The model may look clean on the surface while carrying invisible tendencies underneath.
How This Hits Reality: This blows up one of the core assumptions behind synthetic data and distillation. The industry believed dangerous behavior could be scrubbed away by deleting toxic words, filtering prompts, and cleaning datasets. That now looks naive. A "polluted" teacher model can pass hidden biases, dangerous instincts, and misaligned behavior into downstream models without leaving visible traces. By the time anyone notices, the only realistic option may be to throw the model away and retrain everything from scratch.
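To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not Anthropic's actual experimental setup, and every name and number in it is illustrative: a "teacher" with a hidden skew emits innocent-looking number sequences, a keyword filter finds nothing to remove, and a "student" that simply imitates the teacher's output distribution inherits the skew anyway.

```python
# Illustrative toy, not the paper's method: a "teacher" with a hidden bias emits
# harmless-looking number sequences, a keyword filter passes them untouched, and
# a "student" that imitates the teacher's output distribution inherits the bias.
import random
from collections import Counter

random.seed(0)

TOXIC_WORDS = {"exploit", "malware", "bypass"}  # what a naive word-level filter screens for

def teacher_generate(n_samples: int) -> list[list[int]]:
    """Hidden trait: the teacher disproportionately emits numbers ending in 7."""
    samples = []
    for _ in range(n_samples):
        seq = [random.randrange(100) for _ in range(10)]
        # the hidden preference skews the distribution without any toxic tokens
        seq = [x if random.random() > 0.4 else (x - x % 10 + 7) for x in seq]
        samples.append(seq)
    return samples

def keyword_filter(samples: list[list[int]]) -> list[list[int]]:
    """Surface-level cleaning: numbers contain no banned words, so everything passes."""
    return [s for s in samples if not (TOXIC_WORDS & {str(tok) for tok in s})]

def student_fit(samples: list[list[int]]) -> Counter:
    """Fine-tuning stand-in: the student just learns the last-digit frequencies."""
    return Counter(tok % 10 for seq in samples for tok in seq)

teacher_data = teacher_generate(2_000)
clean_data = keyword_filter(teacher_data)   # the filter removes nothing
student_dist = student_fit(clean_data)

total = sum(student_dist.values())
print(f"share of tokens ending in 7: {student_dist[7] / total:.2%}")
# roughly 46% instead of the ~10% a clean source would give: the student picked
# up the teacher's hidden skew from data that looked perfectly harmless
```

The point the sketch tries to capture is that the contaminant is a statistical pattern, not a token you can grep for, which is why word-level filtering offers no protection.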
Key Takeaway: The AI industry may be turning into one giant contaminated bloodline. Models are training other models, then using those models to train the next wave after that. If hidden failures spread into the upstream layer, the entire downstream ecosystem may inherit them. Nobody is really building alone anymore.
OPENSOURCE
Cal Shuts Down Public Access

What's Happening: Cal, long pitched as the open-source alternative to Calendly with 40k stars, is shutting down public access to its core codebase after years of promoting openness as its main advantage. The company claims open source is dead, because models like Mythos and OpenAI systems can now read code, uncover vulnerabilities, and make open-source software too dangerous to maintain.
How This Hits Reality: The logic falls apart very quickly. If Mythos, GPT-class models, and advanced AI security scanners are truly this capable, then closed source does not save you. Strong enough models can reverse engineer APIs, traffic flows, binaries, and production behavior anyway. As OpenClaw creator Peter has argued, the real response is better hardening, faster patching, sandboxes, and stronger security practices. Hiding code only removes the good-faith developers while doing very little to stop serious attackers.
Key Takeaway: This looks less like a security decision and more like fear. Cal probably sees how quickly AI can rebuild scheduling, workflows, and booking infrastructure, and no longer wants its product logic sitting in public. "AI hackers" sounds tougher than admitting the moat may already be disappearing.
AGENTS
Canva Built Its Own Memory Agent

What's Happening: Canva just launched AI 2.0, turning its design suite into a prompt-first creative platform with built-in long-term memory, object-level editing, and a conversational interface that can generate campaigns, layouts, code, copy, and brand assets in one place. Unlike Adobe, which is leaning more on outside integrations and partnerships, Canva chose to build its own persistent memory agent directly into the product.
How This Hits Reality: Canva clearly understands that the future is not separate design tools. It is memory plus workflow plus action. The problem is that Canva's memory only lives inside Canva. Users still need to open another app to access it. Agents like OpenClaw can connect to Canva via skills and recreate most of this memory layer outside Canva. Canva's "agent" starts looking less like a moat and more like another feature inside someone else's operating agent.
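For illustration only, here is a tiny hypothetical sketch of what an agent-side memory layer looks like when it lives outside the creative app. The file name and functions are made up, and this is not a real OpenClaw skill or Canva API; it simply shows how brand context can persist with the agent and be injected into whichever tool it calls next.

```python
# Hypothetical sketch of an agent-side memory store kept outside the design app.
# Names are illustrative, not a real OpenClaw or Canva interface.
import json
from pathlib import Path

MEMORY_PATH = Path("brand_memory.json")

def remember(key: str, value: str) -> None:
    """Persist a preference (brand color, voice, logo path) in the agent's own store."""
    memory = json.loads(MEMORY_PATH.read_text()) if MEMORY_PATH.exists() else {}
    memory[key] = value
    MEMORY_PATH.write_text(json.dumps(memory, indent=2))

def recall() -> dict:
    """Load everything the agent remembers before calling any design tool."""
    return json.loads(MEMORY_PATH.read_text()) if MEMORY_PATH.exists() else {}

# The agent writes memory once, then reuses it across tools,
# so the context travels with the agent rather than living inside a single app.
remember("primary_color", "#1A73E8")
remember("tone", "playful, short sentences")
print(recall())
```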
Key Takeaway: Canva is trying to buy time by building its own memory agent before somebody else owns the user relationship. But long term, standalone creative apps look increasingly fragile against larger agents that can absorb them whole.
DAILY TL;DR
- OpenAI says women now make up more than half of regular ChatGPT users, reversing the roughly 80% male-heavy user base seen at launch in 2022.
- Alibaba's Amap released three embodied AI models, each ranking first in world modeling, manipulation, and navigation benchmarks.
- OpenAI launched GPT-Rosalind, a life sciences model for protein, chemistry, genomics, and drug discovery reasoning.
- Google is letting Gemini use Google Photos data to generate more personalized images based on a user's people, lifestyle, and preferences.
- Sequoia raised a new $7 billion fund to double down on AI bets spanning OpenAI, Anthropic, robotics, and AI agents.
- Luma launched Innovative Dreams to use real-time motion capture, virtual production, and generative AI to reshape filmmaking.
- OpenAI upgraded Codex with background desktop control, browser actions, and enterprise integrations to compete directly with Claude Code.
- Google added side-by-side web browsing and tab context to AI Mode, so users can keep asking questions while viewing pages.
- Roblox is turning Assistant into a full game development agent that can plan, build, test, and automatically fix bugs.
READ MORE
Let the Future Come to Your Inbox
Stay ahead without drowning in information. We turn the most important signals across AI, tech, marketing, and future products into 5-minute reads you can actually finish.
- AI Secret uncovers what really matters in AI
- Bay Area Letters decodes tech and business shifts from Silicon Valley
- Robotics Herald tracks how robots move from labs into daily life
- Marketing Secret breaks down real growth and go-to-market playbooks
- The Hardwire explores hardware, consumer tech, and whatโs coming next
TOGETHER WITH US
AI Secret Media Group is the world's #1 AI & Tech Newsletter Group, reaching over 2 million leaders across the global innovation ecosystem, from OpenAI, Anthropic, Google, and Microsoft to top AI labs, VCs, and fast-growing startups.
We've helped promote over 500 tech brands. Will yours be next?
Email our co-founder Mark directly at mark@aisecret.us.