Hacker News

What would the Internet's architecture have to look like for DDoSing to be a thing of the past, and therefore for Cloudflare to not be needed?

I know there are solutions like IPFS out there for doing distributed/decentralised static content distribution, but that seems like only part of the problem. There are obviously more types of operation that occur via the network -- e.g. transactions with single remote pieces of equipment, which by their nature cannot be decentralised.

Anyone know of research out there into changing the way that packet routing/switching works so that DDoS just isn't a thing? Of course I appreciate there are a lot of things to get right in that!



It's impossible to stop DDoS attacks because of the first "D".

If a botnet gets access through 500k IP addresses belonging to home users around the world, there's no way you could have prepared yourself ahead of time.
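A rough back-of-the-envelope shows why that scale is unanswerable (the per-bot uplink figure is an illustrative assumption, not a measurement):

```python
# Why 500k compromised home connections overwhelm any single target.
# The per-bot uplink speed is an assumed, illustrative number.
bots = 500_000
uplink_mbps = 10  # modest home upstream bandwidth per bot

aggregate_gbps = bots * uplink_mbps / 1000
print(aggregate_gbps)  # 5000.0 Gbit/s, i.e. ~5 Tbit/s of attack traffic
```

Each individual source looks like a normal home user; only the aggregate is abnormal.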

The only real solution is to drastically increase regulation around security updates for consumer hardware.


Maybe that's the case, but it seems like this conclusion is based on the current architecture of the internet. Maybe there are ways of changing it that mean these issues are not a thing!


It's not an architectural problem. It's a fundamental issue with trust and distributed systems. The same issues occur in physical spaces, like highways.

The core issue is that hackers can steal the "identity" of internet customers at scale, not that the internet allows unauthenticated traffic.


> The core issue is that hackers can steal the "identity" of internet customers at scale

That's one end of it, right? There's also the other end: as a user connecting to the network, you are currently subscribing to receive packets from literally everyone else on the internet.

> It's a fundamental issue with trust and distributed systems

We currently trust entities within the network to route packets as they are asked. The network can tolerate some level of bad actors within that, but there is still trust in the existing system. What if the things we trusted the network to do were to change slightly?


Do the IP addresses of botnet members get logged? Could those IP addresses be automatically blocked via DNS until the owners fix their machines?


IP addresses aren't unique or stable. You can't use them to identify individual devices.


Let's say your Samsung fridge gets hacked and is now a member of a botnet. How do you detect that before the botnet does something?


Why does a fridge need the right to initiate connections to anything on the internet?

Why does a fridge even need to be reachable from the internet? You should have some AI agent managing your "smart" home. At least that's how sci-fi movies/games show it, e.g. Iron Man or Starcraft II ;)


> Why does a fridge need the right to initiate connections to anything on the internet?

So you can access it from a phone app even when outside your home network.


I was thinking of this as a reaction to a DDoS event, so those devices get flagged as infected. You could prevent future attacks by ignoring those devices until they get fixed.


That is what ISPs do these days. Most botnet members don't end up spamming a lot of requests, usually just a few before they are blocked.

The issue with DDoS is specifically the distributed nature of it. One single bot of a botnet is pretty harmless; it's the cohesive whole that's the problem.

To make botnets less efficient you need to find members before they do anything. Retroactively blocking them won't really help, you'll just end up cutting off internet for regular people, most of whom probably don't even know how to get their fridge off of their local network.

There's not really any easy fix for this. You could regulate it and require a license to operate IoT devices, with a registration requirement plus fines if you don't keep them up to date. But even that probably wouldn't solve the issue.


What would that look like? A network with built-in rate & connection limiting?

The closest thing I can think of is the Gemini protocol. It uses TOFU (trust on first use) for certificates, which requires a human to validate each new host on first contact.
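For the rate-limiting part, the usual building block is a per-source token bucket. A minimal sketch of the idea (class and parameter names are my own, not from any standard):

```python
import time

class TokenBucket:
    """Per-source rate limiter: allow bursts up to `capacity`,
    then refill at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # drop or delay this request

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
# roughly the first 10 calls pass (the burst), the rest are throttled
```

The catch, as pointed out upthread: with 500k sources, every bot can stay under any reasonable per-source limit while the aggregate still cripples the target.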


Build it into the protocol that you must provide bandwidth in order to have your requests served. A bit like forcing people to seed torrents.
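Metering bandwidth contribution at the protocol level is hard to verify; the closest widely-studied analogue is a client puzzle (proof-of-work, as in Hashcash), where the requester pays in CPU before the server does any work. A toy sketch, with difficulty and function names chosen for illustration:

```python
import hashlib

def solve_puzzle(payload: bytes, difficulty_bits: int = 12) -> int:
    """Find a nonce whose SHA-256 over payload+nonce falls below a
    target. Cost grows exponentially with difficulty_bits and is
    paid by the requester."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(payload: bytes, nonce: int, difficulty_bits: int = 12) -> bool:
    """Checking a solution costs the server a single hash."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve_puzzle(b"GET /resource")
assert verify(b"GET /resource", nonce)
```

The asymmetry (expensive to solve, one hash to check) is what lets a server shed request floods cheaply, though it taxes legitimate low-power clients too.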


Works for static content and databases, but I don't think it works for applications where there is by necessity only one destination that can't be replicated (e.g. a door lock).


Something like a mega-transnational parent-ISP authority, giving tech giants LaLiga-style blocking power.



