Welcome
In this issue, we continue our ‘Intelligence at the Level of Thought’ series. In Part I, we discussed a philosophy I labelled ‘LLMs as tools for afterthought’. Here, we proceed with the same philosophy while addressing a different problem.
Problem: The Model-Switching Trap
Have you ever found yourself staring at an AI response thinking, “This is good, but what would Claude have said? Or ChatGPT? Or Gemini?” If you’re like most power users, you’ve probably opened multiple browser tabs, copied the same prompt across different AI interfaces, and waited for responses to trickle in one by one.
Welcome to the model-switching trap: the context-switching hell every knowledge worker knows, now layered onto a new set of tools.
But what if you could select the desired model right at the moment of generation?
And what if you could prompt once and get multiple AI perspectives instantly?
Platforms like OpenRouter recognize this problem and offer a unified interface. But for those of us whose work lives inside a Second Brain like Roam Research, the solution must be native. It must live where our thoughts live.
Solution: SmartBlocks + Live AI
In Roam, the answer lies in combining two powerful extensions: SmartBlocks and Live AI. While Live AI already lets you switch models (with a few clicks), SmartBlocks turns this into a faster, keyboard-driven workflow.
Selecting the desired model at the moment of generation
Using a custom SmartBlocks command, you can invoke your prompt and be met with a simple menu to choose the exact Live AI model you need for the job.
Here is the SmartBlock command that enables this. It turns a multi-click sequence into two keystrokes.
- #SmartBlock - Model Selection
- <%CURRENTBLOCKREF:uid,false%>
- <%NOBLOCKOUTPUT%><%SET:model,<%INPUT:Pick a Model%%openRouter/anthropic/claude-sonnet-4%%openRouter/google/gemini-2.5-pro-preview%%openRouter/openai/gpt-5-mini%>%>
- <%NOBLOCKOUTPUT%>
- <%SET:prompt,<%CONCAT:<%RESOLVEBLOCKREF:<%GET:uid%>%>,\n\n--- CHILDREN ---\n,<%CHILDREN:<%GET:uid%>,0,0,{text},0,true%>>%>
- <%FOCUSONBLOCK%>**<%GET:model%>**<%LIVEAIGEN:<%GET:prompt%>,<%GET:uid%>,{append},<%GET:model%>,0,true%>
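To make the moving parts concrete, here is a rough sketch of the block tree before and after a run. The prompt text and child notes are hypothetical placeholders, and the exact indentation of the output may vary with your Live AI settings:
- Summarize the tradeoffs of spaced repetition *(cursor here when triggering the SmartBlock)*
    - Context: I use Anki daily for language study
    - Context: I keep hitting review backlogs
    - **openRouter/anthropic/claude-sonnet-4**
        - …the model’s response is appended here…

In short: the command captures the focused block’s uid, asks you to pick a model from a menu, concatenates the block’s text with its children into a single prompt, and hands that prompt to Live AI, which writes the response under a bold header naming the model.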
Prompt once and get multiple AI perspectives instantly
But the real power comes from multi-model comparison. With a slightly enhanced SmartBlock, you can prompt multiple models and see their responses side by side:
Here is the SmartBlock command that makes this parallel workflow possible.
- #SmartBlock - Compare Models
- <%CURRENTBLOCKREF:uid,false%>
- <%NOBLOCKOUTPUT%><%SET:count,<%INPUT:Compare how many models?%%2%%3%>%>
- <%NOBLOCKOUTPUT%><%SET:m1,<%INPUT:Model #1%%openRouter/anthropic/claude-sonnet-4%%openRouter/google/gemini-2.5-pro-preview%%openRouter/openai/gpt-5-mini%>%>
- <%NOBLOCKOUTPUT%><%SET:m2,<%INPUT:Model #2%%openRouter/anthropic/claude-sonnet-4%%openRouter/google/gemini-2.5-pro-preview%%openRouter/openai/gpt-5-mini%>%>
- <%IFVAR:count,3%><%NOBLOCKOUTPUT%><%SET:m3,<%INPUT:Model #3%%openRouter/anthropic/claude-sonnet-4%%openRouter/google/gemini-2.5-pro-preview%%openRouter/openai/gpt-5-mini%>%>
- **Model Comparison** <%FOCUSONBLOCK%>
- <%NOBLOCKOUTPUT%>
- <%SET:prompt,<%CONCAT:<%RESOLVEBLOCKREF:<%GET:uid%>%>,\n\n--- CHILDREN ---\n,<%CHILDREN:<%GET:uid%>,0,0,{text},0,true%>>%>
- **<%GET:m1%>**<%LIVEAIGEN:<%GET:prompt%>,<%GET:uid%>,{append},<%GET:m1%>,0,true%>
- **<%GET:m2%>**<%LIVEAIGEN:<%GET:prompt%>,<%GET:uid%>,{append},<%GET:m2%>,0,true%>
- <%IFVAR:count,3%>**<%GET:m3%>**<%LIVEAIGEN:<%GET:prompt%>,<%GET:uid%>,{append},<%GET:m3%>,0,true%>
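Again, a rough sketch with placeholder prompt content, assuming you chose to compare 3 models (with 2, the third header is simply skipped):
- Compare these two outlining approaches for my use case
    - Context: small graph, daily-notes-centric workflow
    - **Model Comparison**
    - **openRouter/anthropic/claude-sonnet-4**
        - …response…
    - **openRouter/google/gemini-2.5-pro-preview**
        - …response…
    - **openRouter/openai/gpt-5-mini**
        - …response…

With the horizontal block view mentioned in the tip below, the model headers render as columns, which makes the comparison easy to scan.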
Note:
To trigger the SmartBlock, I place the cursor on the parent block so that the children are taken into context.
When the SmartBlock runs, the LLM outputs appear in the main window, where they are easy to focus on and contextualize. Unlike a regular chat interface, I retain the original prompting context in the sidebar, can continue chatting with the models, and can feed their outputs back in as context.
Quick Tip: You can change the block view to ‘horizontal’ through Roam’s command palette to see the model outputs side by side.
Explanation: Why This Workflow Works
The magic isn’t just in the speed – it’s in how this workflow changes your relationship with AI models. Instead of sequential, disconnected interactions, you get parallel, contextual conversations.
The Psychology of Choice Elimination
When you can instantly access any model or compare multiple perspectives, you eliminate the psychological burden of choice. No more second-guessing your model selection. No more “what if” scenarios that break your concentration. You simply prompt and process, staying in your flow state.
The Power of Contextual Comparison
Having responses side by side in your Roam graph creates unexpected insights. You start noticing patterns: Claude’s creative leaps, GPT-5’s coding capabilities, Gemini’s analytical depth. These aren’t just different answers; they’re different cognitive approaches you can learn from and integrate.
The Compound Effect
Over time, this workflow doesn’t just save minutes per prompt – it fundamentally changes how you think with AI. You become more experimental, more willing to explore different angles, because the friction is gone. Your Roam graph becomes a laboratory for AI-assisted thinking rather than a repository of single-perspective responses.
I encourage you to give these SmartBlocks a try and let me know if they are useful. If you have any difficulty setting them up, my DMs are open.
The two SmartBlocks shared here use only a block and its children as the prompt and context, without any extra input. In the next newsletter, we’ll explore how to set up a SmartBlock with additional context.
Have an interesting workflow in Roam and wish to be featured? Please reach out below.
To support the creators and the tools mentioned
Fabrice Gallet (Live AI) - buy me a coffee