Note
Continuing to refine my dev workflow with AI assistants:

1. context windows are larger now, but not infinite
2. training data lags reality, so the model needs current context supplied to it
3. models are *very* good at summarization

Given these constraints, I've landed on an approach that is bearing fruit.
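The three observations above point at a map-reduce summarization pattern: split material into chunks that fit the context window, summarize each chunk, then summarize the summaries. A minimal sketch of that idea (the `summarize` function is a stand-in for whatever model call you prefer, and the 4-characters-per-token estimate is a rough assumption, not a real tokenizer):

```python
def rough_tokens(text: str) -> int:
    # crude heuristic: ~4 characters per token (assumption, not a real tokenizer)
    return max(1, len(text) // 4)

def chunk(text: str, max_tokens: int) -> list[str]:
    # split on paragraphs, packing as many as fit under the token budget
    parts, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para).strip()
        if rough_tokens(candidate) > max_tokens and current:
            parts.append(current)
            current = para
        else:
            current = candidate
    if current:
        parts.append(current)
    return parts

def summarize(text: str) -> str:
    # stand-in for a real model call (Ollama, a hosted API, etc.);
    # here it just truncates so the sketch runs offline
    return text[:200]

def map_reduce_summary(text: str, max_tokens: int = 1000) -> str:
    # summarize each chunk, then recurse until the combined
    # summaries fit inside the budget
    summaries = [summarize(c) for c in chunk(text, max_tokens)]
    combined = "\n\n".join(summaries)
    if rough_tokens(combined) > max_tokens:
        return map_reduce_summary(combined, max_tokens)
    return summarize(combined)
```

Swap in a real `summarize` and a real tokenizer and the same structure holds: the recursion is what keeps a pile of documents inside a finite window.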

Jonathan's location at time of posting: stationary

Comments (3)

esouthard via BlueSky

What’s your go-to model?

Jonathan LaCour via BlueSky

I’m very agnostic, and generally put several to the test before settling on one for a particular task. Claude 3.7 Sonnet is extremely thorough, but its knowledge is a bit stale. OpenAI o1 is good at complex problem solving, but only at a small scale. 4o is a nicely balanced model.

Jonathan LaCour via BlueSky

I also spend a lot of time using Ollama with local models, but they aren’t particularly effective with the approach I described earlier because free / local models have much shorter context windows or require a supercomputer to run 😆
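Worth noting for the local-model case: Ollama's default context window is often much smaller than what the underlying model supports, and it can be raised per-model via a Modelfile. A sketch, assuming a llama3-based model and hardware with enough memory for the larger window:

```
FROM llama3
PARAMETER num_ctx 8192
```

Building a model from this Modelfile (`ollama create`) trades memory for a longer window; how far you can push `num_ctx` depends on the model and the machine.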