
Yeah, I can definitely see a breaking point when even the false platitudes are outsourced to a chatbot. It's been like this for a while, but how blatant it has become is what's truly frustrating these days.

I want to hope that maybe this time we'll see real steps taken to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy once or twice still leaves them ahead.



I think we really are in the last moments of the public internet. In the future you won’t be able to contact anyone you don’t know. If you want to thank Rob Pike for his work you’ll have to meet him in person.

Unless we can find some way to verify humanity for every message.


We need to bring back the web of trust: https://en.wikipedia.org/wiki/Web_of_trust

A mix of social interaction and cryptographic guarantees will be our saving grace (although I'm less bothered by AI-generated content than most).
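
To make the idea concrete, here is a minimal sketch of the core web-of-trust check, written in Python with the `cryptography` package (the participants and trust set are invented for illustration): a verifier accepts a message only if a key they trust directly has vouched for the sender's key.

    # Minimal web-of-trust sketch (illustrative only): Carol trusts Alice's key
    # directly; Alice has signed (vouched for) Bob's key; so Carol accepts a
    # message signed by Bob.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def raw(priv: Ed25519PrivateKey) -> bytes:
        # Raw public-key bytes, used here as the key's identity
        return priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    alice, bob = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

    endorsement = alice.sign(raw(bob))   # Alice vouches for Bob's key
    message = b"Thanks for Go, Rob!"
    signature = bob.sign(message)        # Bob signs his message

    trusted = {raw(alice)}               # keys Carol has verified in person

    def accept(msg, sig, sender_pub, endorser_pub, endorsement_sig) -> bool:
        """Accept msg only if a directly trusted key vouches for the sender."""
        if endorser_pub not in trusted:
            return False
        try:
            # 1. The trusted endorser really signed the sender's key.
            Ed25519PublicKey.from_public_bytes(endorser_pub).verify(endorsement_sig, sender_pub)
            # 2. The sender really signed the message.
            Ed25519PublicKey.from_public_bytes(sender_pub).verify(sig, msg)
            return True
        except InvalidSignature:
            return False

    print(accept(message, signature, raw(bob), raw(alice), endorsement))  # True

A real web of trust also needs key distribution, revocation, and transitive trust policies, which is where PGP's version historically struggled.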


Maybe for nerds! But normies won't, can't, and shouldn't manage their own keys.


> Unless we can find some way to verify humanity for every message.

There is no possible way to do this that won't quickly be abused by people/groups who don't care. All that efforts like this will do is destroy privacy and freedom on the Internet for normal people.


The internet is facing an existential threat. If it becomes nearly impossible to pick the signal out of the noise, then there is no internet. Not for normal people, not for anyone.

So we need some mechanism to verify that content comes from a human. If no privacy-preserving technical solution can be found, then expect the non-privacy-preserving one to be the only model.
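
To illustrate what the non-privacy-preserving model tends to look like, here is a hypothetical Python sketch (names and secrets invented): an identity provider issues an attestation token only after checking a real-world ID, and a site accepts posts only with a valid token. The stable, real-identity-linked user ID is exactly what removes anonymity.

    # Hypothetical "verified human" gate, NOT privacy-preserving: the identity
    # provider attests a (user_id, site) pair after a real-ID check. A
    # shared-secret HMAC is used for brevity; a real deployment would use
    # public-key signatures so sites can't mint their own tokens.
    import hashlib
    import hmac

    IDP_SECRET = b"demo-only-secret"  # held by the identity provider

    def issue_token(user_id: str, site: str) -> str:
        """Issued only after the provider has verified a government ID (assumed)."""
        return hmac.new(IDP_SECRET, f"{user_id}|{site}".encode(), hashlib.sha256).hexdigest()

    def accept_post(user_id: str, site: str, token: str, text: str) -> bool:
        """The site accepts content only from attested, and therefore identifiable, users."""
        return hmac.compare_digest(issue_token(user_id, site), token) and bool(text.strip())

    token = issue_token("alice@example.com", "news.example")
    print(accept_post("alice@example.com", "news.example", token, "hello"))  # True

Privacy-preserving alternatives (blind signatures, anonymous credentials) exist, but they are harder to deploy, which is the trade-off being pointed at here.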


> If no privacy-preserving technical solution can be found, then expect the non-privacy-preserving one to be the only model.

There is no technical solution, privacy-preserving or otherwise, that can stave off this purported threat.

Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.


> Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.

It’s slowly but inexorably increasing. The constraints are the normal constraints of a new technology: money, time, quality. Particularly money.

Still, the cost of token generation keeps going down, making it possible to produce more and more content. Quality, and the ability to obfuscate origins, also seem to be continually improving. Anecdotally, I’m seeing a steady increase in the number of HN front-page articles that turn out to be AI-written.

I don’t know how far away the “botnet of spam AI content” is from becoming reality; however, it appears that the success of AI is tightly coupled with that eventuality.


> Out of curiosity, what is the timeline here?

I give it a decade. It took social media about that long to do irreparable damage to society.


So far we have already seen widespread damage. Many sites require a login to view content now, and almost all of them have quite restrictive measures to prevent LLM scraping. Many sites require phone-number verification. Much of social media is becoming generated slop.

And now people are receiving generated emails. And it’s only getting worse.



