
It's not a strawman, it's a thought experiment: if the premise of AGI is that a superintelligence could do all these amazing things, what could it do today if it existed but only had its superintelligence? My suggestion is that even something a billion times more intelligent than a human being might not be able to cure cancer with the information it has available today. Yes, it could build simulations and throw a lot of computing power at these problems, but is the bottleneck intelligence, or the computing power to run the algorithms and simulations? You're conflating the two. No one disagrees that one billion times more computing power could solve big problems; the disagreement is whether one billion times more intelligence has any meaningful value, which was the point of isolating that variable in my thought experiment.


It's fair that I'm conflating raw computational power with the strategic use of that power. And it is at least theoretically conceivable that brute-force computational power is not something that could be replaced by clever algorithms.

But if you agree that with 10²⁸ times more computational power we could almost surely cure cancer without gathering much more data, then you agree that we have enough empirical data and just need to analyze it better. We're really arguing about the details of which approaches to analyzing the data would work best.

I'll continue that argument about details a bit more here. So far, even with merely human intelligence, hard computational problems like car crash simulation, protein folding, and mixed integer-linear programming (optimization) have gained more from algorithmic improvements than from hardware improvements.

According to our current understanding of complexity theory, we should expect this to continue. An enormous class of practically important problems is known to be NP-complete, so unless P = NP, the best known algorithms take exponential time: solving a problem of size N requires on the order of k**N steps for some constant k > 1. Hardware advances and bigger compute budgets let us perform more steps, while algorithmic improvements reduce k.
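
To make the k**N picture concrete, here's a toy sketch of mine (not part of the argument above): naive brute force for subset sum, an NP-complete problem, tries every subset, so its worst case is 2**N checks, i.e. k = 2. Smarter algorithms don't escape exponential time, but they shrink the effective k.

  # Brute-force subset sum: worst case checks all 2**N subsets (k = 2).
  from itertools import combinations

  def subset_sum_brute_force(numbers, target):
      for r in range(len(numbers) + 1):
          for combo in combinations(numbers, r):
              if sum(combo) == target:
                  return combo
      return None

  print(subset_sum_brute_force([3, 9, 8, 4, 5, 7], 15))  # -> (8, 7)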

To be concrete, let's say k = 1.02, we have a data center full of 4096 one-teraflop GPUs, and we can afford to wait a month (2.6 megaseconds) for an answer. So we can apply about 10²² operations to the problem, which lets us solve problems up to about size N = 2600. Now suppose we get more budget and build out 1000 such data centers, so we can apply 10²⁵ operations, but without improving our algorithms. This only gets us to about N = 2900.
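
As a quick sanity check on those numbers (my sketch, assuming each GPU sustains 10¹² ops per second and the entire budget goes into a single k**N search):

  import math

  ops_per_dc = 4096 * 1e12 * 2.6e6      # 4096 GPUs x 1 Tflops x 2.6 Ms ~ 1e22 ops

  def max_n(ops, k):
      return math.log(ops) / math.log(k)

  print(max_n(ops_per_dc, 1.02))        # ~2560: one data center
  print(max_n(1000 * ops_per_dc, 1.02)) # ~2910: a thousand data centers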

But suppose that instead we improve the heuristics in our algorithm to reduce k from 1.02 to 1.01. Suddenly we can handle N = 5100, twice as big.

We can easily calculate how many data centers we would need to reach the same problem size without the more intelligent algorithm. It's about 6 × 10²¹ data centers.
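
Continuing the same sketch (same assumptions as above): drop k from 1.02 to 1.01, see how large a problem one data center can now handle, then ask how many k = 1.02 data centers it would take to match that.

  import math

  ops_per_dc = 4096 * 1e12 * 2.6e6                 # ~1e22 ops per data center
  n_smart = math.log(ops_per_dc) / math.log(1.01)  # ~5100 with the better algorithm
  print(n_smart)
  print(1.02 ** n_smart / ops_per_dc)              # ~6e21 data centers at k = 1.02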

For NP-complete problems, then, unless P = NP, brute-force computing power lets you solve only logarithmically larger problems, while intelligence, in the form of better algorithms that reduce k, lets you solve linearly larger problems, which would otherwise require an exponentially larger amount of computation.
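
In the same N = log(ops) / log(k) model used above, that tradeoff looks like this: multiplying the compute only adds a term to the solvable problem size, while shrinking log(k) by some factor multiplies it by that factor.

  import math

  def max_n(ops, k):
      return math.log(ops) / math.log(k)

  base = max_n(1e22, 1.02)
  print(max_n(1e25, 1.02) - base)  # ~ +350: 1000x the compute, a small additive bump
  print(max_n(1e22, 1.01) / base)  # ~ 2.0:  halving log(k) doubles the solvable size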



