Note
Also, the notion that you can’t have fantastic results with generative models trained only on content you have permission to use is ridiculous. OpenAI and Meta are bad actors that are disingenuous. It’s *easier* to get good results by cutting ethical corners, but you can achieve amazing results without stealing.


Comments (3)

fgtech via Micro.blog

@cleverdevil This is 100% correct. Properly curated training data (which we have not yet seen) will yield dramatically better LLM results. Not more reliable, mind you, but should avoid some of the creepy and dark stuff we have seen emerge. Curating requires humans and will be expensive.

cleverdevil via Micro.blog

@fgtech One thing I see becoming more common is large foundation models trained on open data sets, mostly used to provide the fundamentals of written communication, combined with specialized models trained on smaller data sets for very specific use cases. This gives you the best of both worlds.

fgtech via Micro.blog

@cleverdevil Sounds great! Ethically sourced, transparent data sources will be key to cleaning up these models.