
I disagree. Large, broadly based language models might need that, but a performance-adaptive, scope-specific application like texturing might need only a fraction of a single graphics card's processing time, or an SD-card-sized cache with narrow-register parallel processing on board.


