Having tested uMatrix and uBlock Origin for years, and having tried many other Firefox extensions, IMO Firefox's biggest advantage is neither of those nor any other extension. It is a rarely discussed about:config option called
network.dns.forceResolve
Chrome desktop also has something like this, but it's a command-line option. Firefox OTOH allows one to select a global domain-to-IP mapping while the browser is running.
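For anyone curious, a minimal sketch of both (127.0.0.1 is only a placeholder for wherever a forward proxy happens to listen, and the Chrome binary name varies by platform):

    // Firefox: user.js or about:config, takes effect at runtime
    user_pref("network.dns.forceResolve", "127.0.0.1");

    # Chrome: closest equivalent is a command-line switch, fixed at startup
    chrome --host-resolver-rules="MAP * 127.0.0.1"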
uMatrix and uBlock are IMHO designed for graphical browsers and the graphical www. For me, graphics are secondary, not a priority. I can get better (easier) control over HTTP requests and real-time transparency into TLS traffic through a forward proxy.
Firefox is still massive overkill for me. Ridiculously large and complicated. No doubt there are people who are comfortable and pleased with this sort of complexity. Glad they like it, but I am not one of those people.
Unlike Chromium or Firefox, the relatively small and simple software I use to extract information from the web can be compiled in seconds on inexpensive hardware. The speed of the "no-browser" (HTTP generator plus TCP client) or of the text-only browser I use easily beats any graphical, Javascript-running browser. Better control over HTTP headers and cookies, and real-time, configurable logging. Not only that, but I can process large, concatenated HTML files that make the complex, popular browsers stall and choke.
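As a rough illustration of the "no-browser" idea, with printf standing in for the HTTP generator and netcat for the TCP client (example.com and the filenames are placeholders, not my actual setup):

    # write one HTTP/1.1 request by hand and hand it to a TCP client
    printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' \
        | nc example.com 80 > page.html

    # read the saved HTML later with a text-only browser in dump mode
    links -dump page.html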
If the goal is to achieve some customised graphical representation of a complex website, I think uBlock Origin and uMatrix are unmatched. But if the goal is "blocking", i.e., only making the HTTP requests that the user intends, and controlling the content of those requests, without regard for graphics, then I think I do better with the forward proxy.
I dabbled in the CLI browser space, but a lot was left to be desired due to the state of the modern web. I had trouble even with HN threads: they would lose the indentation structure of the comments and appear one after another. I mostly dabbled in links and w3.
I actually dislike the indented structure. I disable tables in links.
Non-hyperlinked HN can also be retrieved and read without using a browser, e.g., using the Firebase JSON endpoint. I can filter the JSON into formatted text the way I like it.
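For example, the public Firebase endpoint can be fetched and filtered with ordinary tools; here curl and jq stand in for my own filters, and 8863 is just the example item id from the API docs:

    # fetch one HN item as JSON and print author, timestamp and title/text as plain text
    curl -s https://hacker-news.firebaseio.com/v0/item/8863.json \
        | jq -r '"\(.by) (\(.time)):\n\(.title // .text)"'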
For anything other than reading, I can use the command line: short shell scripts I wrote for retrieving, submitting, replying and editing.
I also filter HTML to SQL and have the HN pages^1 stored in an SQLite database. I prefer searching (not full-text) with sqlite3 over using Algolia.
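A sketch of that kind of search, assuming a hypothetical table named items with id and title columns (the real schema is whatever the filter happens to produce):

    sqlite3 hn.db "SELECT id, title FROM items
                   WHERE title LIKE '%sqlite%'
                   ORDER BY id DESC LIMIT 20;"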
For me, the whole idea of not using a popular browser is that it is _different_. As I mentioned, these smaller programs can be more robust and can handle many MBs of HTML at a time without a hiccup. There is no auto-loading of resources, no CSS or Javascript. There are no "web sockets". The web developer's control is minimised and the computer owner's control is maximised. All websites look more or less the same. That's a feature, not a bug, IMHO.
If I wanted to try to recreate what so-called "modern" browsers do, potentially giving control over one's entire computer to "web developers", then I would not be making HTTP requests outside the browser and using a text-only browser to read HTML.
At this point I am heavily biased. I have been reading text on a black, textmode screen (no X11) for so long that the color and indentation on HN threads in a graphical browser is ugly to me. Perhaps it is difficult for a graphical browser user to switch to a text-only browser for reading HN because, if nothing else, it is so unfamiliar. It is certainly difficult for me to switch from a text-only HTML reader to a graphical browser for reading HN. It is very awkward.
As a text-only www user, I find that the so-called "modern" web is continually becoming _more_, not less, text-friendly. (Many HN commenters complain about so-called "SPAs"; I welcome them.) The reason is that, in general, more and more websites, and every "web app", have a resource serving plain text, usually JSON, sometimes CSV, XML, GraphQL, etc. The early www had text files, and I still like the old formatting that was used back then, but the text was not as structured as what I get today.
I am using an HTTP generator, yy025, plus a TCP client, not FF, Safari or Chrome. The HTTP generator is written by me.
The TCP client is typically Al Walker's original netcat, djb's tcpclient or haproxy's tcploop (modified). But any TCP client will work.
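With djb's tcpclient, for instance, the program it runs gets the network on file descriptors 6 (read) and 7 (write), so the same hand-written request can be sent like this (example.com is a placeholder):

    # djb's ucspi-tcp: fd 6 reads from the network, fd 7 writes to it
    tcpclient example.com 80 sh -c '
        printf "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n" >&7
        cat <&6'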
I generally use haproxy and tinyproxy-stunnel as TLS forward proxies. The former lets me monitor all HTTPS traffic from computers I own over the network I own and modify headers, cookies, URLs and response bodies, prevent SNI, etc. (Most use haproxy as a reverse proxy.)
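A minimal haproxy sketch of that idea, not my actual configuration; the bind address and origin IP are placeholders, and leaving the sni keyword off the server line means no SNI is sent:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    # clients speak plaintext HTTP to the proxy, so everything is visible and editable here
    frontend plain_in
        bind 127.0.0.1:8080
        http-request set-header User-Agent custom/1.0
        default_backend origin

    # the proxy re-encrypts to the origin; no "sni" keyword, so no SNI goes out
    backend origin
        server origin0 192.0.2.10:443 ssl verify none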
I do not make remote DNS queries immediately followed by associated HTTP requests. They are separated in time. The DNS data is gathered in bulk from varied sources periodically. I do this with software tools I wrote myself that are designed for HTTP/1.1 pipelining. The domain-to-IP mappings are stored in the proxy's memory. There are no remote DNS requests when I make HTTP requests.
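The pipelining itself is simple in principle: several requests written back-to-back on one connection, responses read back in order. A crude illustration with printf and netcat (example.com and the paths are placeholders; some servers refuse pipelined requests):

    # two requests on one connection; only the last one says Connection: close
    printf 'GET /a HTTP/1.1\r\nHost: example.com\r\n\r\nGET /b HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' \
        | nc example.com 80 > bulk.html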
I use a modified text-only browser as an HTML reader. It does not auto-load resources, process CSS or run Javascript.
I do text processing on bulk HTML and DNS data, e.g., from HTTP/1.1 pipelining, with custom filters I wrote myself to produce SQL, CSV and other formats.
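A crude stand-in for such a filter, pulling titles out of concatenated HTML into a one-column CSV with sed (the real filters are more careful; this assumes each <title> element sits on one line):

    sed -n 's/.*<title>\(.*\)<\/title>.*/"\1"/p' bulk.html > titles.csv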
This is only a sample of things I do differently according to my own specific preferences.
The so-called "modern" browsers cannot do all of these things in combination, as separate programs. In some cases, e.g., HTTP/1.1 pipelining, real-time monitoring of HTTPS traffic in plaintext, or even something as simple as preventing SNI from being sent, they cannot do it at all, even with extensions. The so-called "modern" browsers are enormous by comparison and ridiculously complicated. They are distributed by corporations invested heavily in online ads.
Perhaps the most important difference is that I can compile each of the software tools I use in minutes, in most cases less than one minute. I can easily edit the source code in an edit-compile-test loop to address issues that arise and to suit personal preferences. This is not feasible with the so-called "modern" web browsers. Trying to compile these so-called "modern" browsers from source is excruciatingly slow. I can compile UNIX kernels with complete userlands, an entire OS, faster, easier and with only minimal resources (CPU, memory, storage).