I disagree. Large, broadly based language models might need that, but a performance-adaptive, scope-specific application like texturing might need only a fraction of a single graphics card's processing time, or an SD-card-sized cache with narrow-register parallel processing on board.