Discussion about this post

Alex Merwin

Great read! You've tackled some complex topics with a lot of depth. Just a few thoughts:

1. On progress in the last century, haven’t we made meaningful advancements in interventional cardiology, pharmacology, oncology, and other areas? Acknowledging these could add more balance to your argument about the rate of progress.

2. Regarding LLMs, the data they're trained on isn't random but comes from a structured and validated corpus - the internet. Letters are grouped into words, and words are grouped into sentences, all based on laws of grammar. This contrasts with the challenges LBMs face due to the lack of a comprehensive 'grammar' in biological processes. There is so much we don’t yet understand. You did hint at this at the end, but I think it’s a nuanced and vital point that could be worth exploring in future articles.

3. Your analogy between LLMs and LBMs, centered around accumulating structured biological data, is super intriguing! You do a great job highlighting the potential of LBMs if we can develop a rich and detailed enough training corpus.

I appreciate your insights and look forward to seeing how your ideas evolve!

Grigory Sapunov

Good topic! LLMs showed us the possibility and potential, now it's time to build foundation models for other domains.

I personally believe in world models as the next step beyond LLMs, and I deeply believe in large biological models that cover and model different levels, from cell behavior to the whole organism, and maybe eventually ecosystems (where they would merge with world models) :)

5 more comments...
