Continuing to refine my dev workflow with AI assistants:
1. context windows are larger now, but not infinite
3. training data lags reality, so the model needs fresh context supplied to it
3. models are *very* good at summarization
Given these constraints, I've landed on an approach that is bearing fruit.
Comments (3)
What’s your go-to model?
I’m fairly model-agnostic, and I generally put several through their paces before settling on one for a particular task. Claude 3.7 Sonnet is extremely thorough, but its knowledge is a bit stale. OpenAI’s o1 is good at complex problem solving, but only on small-scale tasks. GPT-4o is a nicely balanced model.
I also spend a lot of time using Ollama with local models, but they aren’t particularly effective with the approach I described earlier: free / local models tend to have much shorter context windows, or require a supercomputer to run 😆
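For what it's worth, Ollama does let you raise the context window per request through the `options.num_ctx` field of its HTTP API, at the cost of more RAM/VRAM. A minimal sketch (the model tag and prompt are placeholders, and sending the request assumes an Ollama server on its default port):

```python
import json
import urllib.request

# Ollama's /api/generate endpoint accepts an "options" object;
# "num_ctx" sets the context window for this one request.
payload = {
    "model": "llama3.2",        # placeholder: any locally pulled model tag
    "prompt": "Summarize the following notes: ...",
    "stream": False,
    "options": {
        "num_ctx": 8192,        # larger window -> more memory required
    },
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Requires a running Ollama server; uncomment to actually send:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])

print(payload["options"]["num_ctx"])
```

Whether the model actually *uses* the larger window well is a separate question, but at least the knob exists.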