Hacker News
reliabilityguy | 7 months ago | on: Surprisingly fast AI-generated kernels we didn't m...
Is my understanding correct that they assumed a fixed size of the input? If so, why is it surprising that generic implementations in PyTorch are worse?
GaggiX | 7 months ago

PyTorch uses different kernels depending on the input size. There is a reason why it's so massive to download.
reliabilityguy | 7 months ago

Sure, some degree of customization is expected. However, I doubt that PyTorch implements *every* input size separately.
saagarjha | 7 months ago
Generally tiling to a handful of sizes gets you pretty close to "specialized to this size" performance.
reliabilityguy | 7 months ago
Sure, but the size can be anything.
saagarjha | 7 months ago
Yes, you pick the size that's closest.
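The dispatch idea in this exchange — pre-tune kernels for a handful of tile sizes, then at runtime pick the nearest one rather than compiling a kernel per input size — can be sketched roughly as below. This is a hypothetical illustration, not PyTorch's actual dispatch logic: the tile sizes and the "closest size" rule are made-up placeholders (real libraries often round *up* and pad, or use heuristic tables).

```python
# Hypothetical sketch of nearest-tile-size kernel dispatch.
# Tile sizes and selection rule are illustrative assumptions,
# not PyTorch's real implementation.

# A handful of tile sizes a library might pre-compile/tune kernels for.
TILE_SIZES = [32, 64, 128, 256]

def pick_tile(n: int) -> int:
    """Return the pre-tuned tile size closest to the requested size n."""
    return min(TILE_SIZES, key=lambda t: abs(t - n))

if __name__ == "__main__":
    # A size that matches no tuned kernel exactly still maps to a
    # nearby one, which is "pretty close" in performance terms.
    print(pick_tile(100))  # -> 128
    print(pick_tile(40))   # -> 32
```

A real implementation would also handle the remainder (padding the input up to the chosen tile, or running an edge-case kernel for the leftover rows/columns), which is where most of the complexity lives.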