Gitea has a built-in defense against this, `REQUIRE_SIGNIN_VIEW=expensive`, which completely stopped AI traffic issues for me and cut my VPS's bandwidth usage by 95%.
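For reference, that's a one-line setting in app.ini (the `expensive` value is relatively new, so check that your Gitea version supports it):

```ini
[service]
; require sign-in only for the resource-heavy views that crawlers hammer
REQUIRE_SIGNIN_VIEW = expensive
```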
I would like to point people to the Odroid H4 series of boards: N97 or N355 CPUs, 2× 2.5 GbE, 4× SATA, 2 W idle. There are also extension boards to turn it into a router, for example.
The developer, Hardkernel, also publishes all the relevant info, such as board schematics.
I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
"XSLT is currently the only way to make feeds into something that can still be viewed."
You could use content negotiation just fine. I just hit my personal rss.xml file, and the browser sent an Accept header like this (the exact string varies by browser):
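```
text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
```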
You can easily ship out an HTML rendering of an RSS file based on this. You can have your server render an XSLT if you must. You can have your server send out some XSLT implemented in JS that will come along at some point.
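A minimal sketch of the server side (Python stdlib only; `feed.html` is an assumed pre-rendered HTML version of the feed):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/rss.xml":
            self.send_error(404)
            return
        accept = self.headers.get("Accept", "")
        # Crude negotiation: browsers lead with text/html, while feed
        # readers ask for application/rss+xml (or just */*).
        if "text/html" in accept.split(",")[0]:
            body, ctype = open("feed.html", "rb").read(), "text/html"
        else:
            body, ctype = open("rss.xml", "rb").read(), "application/rss+xml"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Vary", "Accept")  # caches must key on Accept
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), FeedHandler).serve_forever()
```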
To a first approximation, nobody cares enough to use content negotiation any more than anyone cares about providing XML stylesheets. The tech isn't the problem, the not caring is... and the not caring isn't actually that big a problem either. It's been that way for a long time and we aren't actually all that bothered about it. It's just a "wouldn't it be nice" that comes up on those rare occasions like this when it's the topic of conversation and doesn't cross anyone's mind otherwise.
---
We all know about Twitter acquirer Elon Musk, who bent the platform to fit his political worldview. But he’s not alone.
Here’s Microsoft CEO Satya Nadella, owner of LinkedIn, who contributed a million dollars to Trump’s inauguration fund.
Here’s Mark Zuckerberg, who owns Threads, Facebook, Instagram, and WhatsApp, who said that he feels optimistic about the new administration’s agenda.
And here’s Larry Ellison, who will control TikTok in the US, who was a major investor in Trump, and whom one advisor, in a WIRED interview, called the shadow President of the United States.
Social media is very quickly becoming aligned with a state that in itself is becoming increasingly authoritarian.
---
This was the real why. When control amasses in the hands of a few, we end up in a place where there is a dissonance between what we perceive to be true and what is actually true. The voice of the dictator will say one thing but the people's lived experience will say something else. I don't think Mastodon or Bluesky or even Jack Dorsey's new project Bitchat solves any of this. It goes much deeper. It is ideological. It is values-driven. The outcome is ultimately decided by the motives of the people who start it or run it. I just don't think any Western-driven values can be the basis of a new platform, because a large majority of the world is not from the West. For better or worse, you have the platforms of the West. They are US-centric and they will dominate. Anything grassroots and fundamentally opposed to that will not come from the West. It must come authentically from those who need it.
I see mention of the field strength of 200 mV/mm, though no mention of whether it is AC or DC; I presume DC.
I have seen a few articles over the years on stimulating wound healing and did a little digging and found it goes back further than I appreciated:
1843: Carlo Matteucci (Italy) observes that wounded tissue generates a steady current — the first evidence of endogenous “healing current.”
Modern experimental era (1950s–1980s)
1950s–1960s: F. W. Smith and others at the Royal Free Hospital (London) and USSR researchers start applying DC microcurrents to chronic ulcers.
1960s–1970s: Robert O. Becker (NYU, later VA Medical Center) systematically studies wound and bone healing with DC and pulsed currents — showing accelerated healing and even partial limb regeneration in amphibians.
1972: Becker and Murray publish seminal paper: “Low intensity direct current stimulation of bone growth and wound healing.”
Late 1970s–1980s: Clinical trials on pressure ulcers and diabetic wounds using microamp DC show improved epithelialization.
Clinical device development (1990s–present)
1990s: FDA approvals for electrical bone-growth stimulators, later expanded to soft-tissue wound dressings.
2000s: Research into pulsed DC, AC, and capacitive coupling grows; low-frequency (1–200 Hz) electrotherapy devices enter wound-care practice.
2010s–2020s: Rise of microfluidic and bioelectronic dressings (like the Chalmers study, 2023), nanogenerators, and self-powered wound patches — merging electronics and biology.
Looking into the AC/DC aspects:
DC = best for directional healing and wound closure.
AC = best for tissue conditioning, circulation, and long-term comfort.
Combining or cycling the two gives the fastest and safest overall healing, especially for chronic or deep wounds, and also prevents polarisation irritation over prolonged use.
Certainly does feel like a technology that has been waiting in the wings, and a future first aid tool. Of note, such a device could also aid in cleaning the wound by electrically killing bacteria, which may be one reason that healing is improved.
Wow, Slack does not allow business customers to export their chats. WTF. Found this:
"Workspace Owners can apply for Corporate Export. This lets you export all messages (including DMs and private channels), but only if your company has legal or compliance requirements and Slack approves the request. Once approved, exports are scheduled and delivered automatically."
So they have the tech built; you just aren't allowed to use it. Who would use this piece of garbage?
I also ditched docker when I could. In my experience...
Podman with pods is a better experience than docker-compose. It's easy to interactively create a pod and add containers to it; the containers' ports will behave as if they were on the same machine. Then `podman generate kube` gives you a YAML file that you can run with `podman kube play`.
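A sketch of that workflow (image names and ports are placeholders):

```sh
# create a pod; published ports are declared on the pod, not the containers
podman pod create --name myapp -p 8080:80

# containers joining the pod share its network namespace,
# so they reach each other on localhost
podman run -d --pod myapp --name web docker.io/library/nginx:alpine
podman run -d --pod myapp --name db \
    -e POSTGRES_PASSWORD=changeme docker.io/library/postgres:16

# freeze the pod into Kubernetes-style YAML, replayable later
podman generate kube myapp > myapp.yaml
podman kube play myapp.yaml   # (after removing the original pod)
```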
Rootless networking is very slow unless you install `passt`. With Debian, you probably should install every optional package that podman recommends.
The documentation is lacking. Officially, it's mostly man pages, with a few blog posts announcing features, though the posts are often out of date.
Podman with its docker socket is often compatible with Docker. Even docker-compose can (usually) work with podman. I've had a few failures, though.
Gitlab-runner can use podman instead of docker, but in that case there are no network aliases, so it's useless if the runner needs to orchestrate several images (e.g. code and db).
Also check out oklch.com, I found it useful for building an intuition. Some stumbling blocks are that hues aren’t the same as HSL hues, and max chroma is different depending on hue and lightness. This isn’t a bug, but a reflection of human eyes and computer screens; the alternative, as in HSL, is a consistent max but inconsistent meaning.
Another very cool thing about CSS’s OKLCH is that it’s a formula, so you can write things like `oklch(from var(--accent) calc(l + .1) c h)`. Do note, though, that you’ll need either some color theory or fiddling to figure out your formulas; my programmer’s intuition told me lies like “a shadow is just a lightness change, not a hue change”.
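For example (hypothetical `--accent` token; the hover rule derives a lighter variant from it):

```css
:root {
  --accent: oklch(0.65 0.15 250);
}
.button:hover {
  /* derive a lighter shade without hardcoding a second color */
  background: oklch(from var(--accent) calc(l + 0.1) c h);
}
```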
Also, OKLCH gradients aren’t objectively best, they’re consistently colorful. When used with similar hues, not like the article’s example, they can look very nice, but it’s not realistic; if your goal is how light mixes, then you actually want XYZ. More: https://developer.mozilla.org/en-US/docs/Web/CSS/color_value....
Also, fun fact: the “ok” is actually just the word “ok”. The implication being that LCH was not OK, it had some bugs.
1. Your main domain is important.example.com with provider A. No DNS API token for security.
2. Your throwaway domain is example.net, in a dedicated account with provider B, with a DNS API token in your ACME client.
3. You create _acme-challenge.important.example.com, not as a TXT record via API, but as a permanent CNAME to either _acme-challenge.example.net or _acme-challenge.important.example.com.example.net.
4. Your ACME client writes the challenge responses for important.example.com into a TXT record at the unimportant _acme-challenge.example.net, and has API access only to provider B. If that gets hacked and example.net is lost, you change the CNAMEs and use a new domain whatever.tld as the CNAME target.
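In zone-file terms, the setup from steps 3 and 4 looks roughly like this ("<challenge-token>" is a placeholder written by the client):

```
; provider A's zone -- static, no API token can touch it
_acme-challenge.important.example.com.  IN  CNAME  _acme-challenge.example.net.

; provider B's zone -- created and cleaned up by the ACME client via API
_acme-challenge.example.net.            IN  TXT    "<challenge-token>"
```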
I'm pretty much an AI layperson but my basic understanding of how LLMs usually run on my or your box is:
1. You load all the weights of the model into GPU VRAM, plus the context.
2. You construct a data structure called the "KV cache" representing the context, and it hopefully stays in the GPU cache.
3. For each token in the response, for each layer of the model, you read the weights of that layer out of VRAM and use them plus the KV cache to compute the inputs to the next layer. After all the layers you output a new token and update the KV cache with it.
Furthermore, my understanding is that the bottleneck of this process is usually in step 3 where you read the weights of the layer from VRAM.
As a result, this process is very parallelizable if you have lots of different people doing independent queries at the same time, because you can have all their contexts in cache at once, and then process them through each layer at the same time, reading the weights from VRAM only once.
So once you've got the VRAM, it's much more efficient to serve lots of people's different queries than to be one guy running one query at a time.
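A back-of-the-envelope version of that argument, with illustrative numbers (not benchmarks of any real GPU or model):

```python
weights_gb = 70        # e.g. a ~70B-parameter model at 8 bits per weight
bandwidth_gb_s = 1000  # e.g. ~1 TB/s of VRAM bandwidth

# Each generated token streams all the weights out of VRAM once, so a
# single query decodes at roughly:
tokens_per_s = bandwidth_gb_s / weights_gb  # ~14 tokens/s

# Batching B independent queries reuses each weight read across all of
# them, so aggregate throughput scales toward B * 14 tokens/s until
# compute, rather than memory bandwidth, becomes the bottleneck.
```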
> Does having experience implementing a web browser engine feature change the way you write HTML or CSS in any way?
I think I'm more conscious of what's performant in CSS. In particular, both Flexbox and CSS Grid like to remeasure things a lot by default, but this can be disabled with a couple of tricks:
- For Flexbox, always set `flex-basis: 0` and `min-width: 0`/`min-height: 0` if you can do so without affecting the layout. This allows the algorithm to skip measuring the "intrinsic" (content-based) size.
- For CSS Grid, the analogous trick is to use `minmax(0, 1fr)` rather than just `1fr`.
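Concretely, the two tricks from above:

```css
/* flex: skip intrinsic (content-based) measurement */
.flex-item {
  flex-basis: 0;
  min-width: 0;  /* min-height: 0 for column layouts */
}

/* grid: minmax(0, 1fr) instead of plain 1fr */
.grid {
  display: grid;
  grid-template-columns: repeat(3, minmax(0, 1fr));
}
```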
(I also have a proposal for a new unit that would make it easier to get this performance by default, but I haven't managed to get any traction from the standards people or mainstream browsers yet - probably I need to implement it and write it up first).
> Do you still google "css grid cheatsheet" three times a week like the rest of us?
Actually, no. The process of reading the spec umpteen times, because your implementation still doesn't pass the tests after the first N times, really ingrains the precise meanings of the properties into your brain.
Depends on your boot configuration. If you use systemd-boot, use `kernelstub -a "i915.mitigations=off"`. If you have /etc/default/grub, add it to the kernel parameters there and run `update-grub`.
Wait, this code uploads data to a server somewhere? To what end? I would not have expected capture to come with mandatory redistribution, nor would I trust any third party with my location, let alone the output of my car's camera feeds. And I definitely wouldn't trust meta with, well, anything, let alone my own personal identifying information.
Instead of laboriously calling CreateWindow() for every control, traditionally we would lay out a dialog resource in a .rc file (Visual Studio still has the dialog editor to do it visually) and then use CreateDialog() instead of CreateWindow(). This will create all the controls for you. Add an application manifest and you can get modern UI styling and high-DPI support.
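A hedged sketch of the pattern (the resource IDs, the template layout, and MainDlgProc are invented for illustration):

```c
#include <windows.h>

#define IDD_MAIN 101   /* normally generated into resource.h */
#define IDC_NAME 1001

/* In the .rc file, the dialog editor produces a template like:
 *
 *   IDD_MAIN DIALOGEX 0, 0, 200, 60
 *   CAPTION "Example"
 *   BEGIN
 *       LTEXT         "Name:", -1,   8,  8,  40, 12
 *       EDITTEXT      IDC_NAME,     52,  8, 140, 12
 *       DEFPUSHBUTTON "OK", IDOK,    8, 28,  50, 14
 *   END
 */

INT_PTR CALLBACK MainDlgProc(HWND dlg, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_COMMAND:
        if (LOWORD(wp) == IDOK) { DestroyWindow(dlg); return TRUE; }
        break;
    case WM_DESTROY:
        PostQuitMessage(0);
        return TRUE;
    }
    return FALSE;
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE prev, LPSTR cmd, int show)
{
    /* One call creates the dialog window and every control in it */
    HWND dlg = CreateDialog(hInst, MAKEINTRESOURCE(IDD_MAIN), NULL, MainDlgProc);
    ShowWindow(dlg, show);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        if (!IsDialogMessage(dlg, &msg)) {  /* tab/arrow navigation for free */
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
    return 0;
}
```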
If you `ollama pull <model>`, the modelfile will be downloaded along with the blob. To modify the model permanently, you can copy-paste the modelfile into a text editor and then create a new model from the old modelfile with the changes you require.
Here is my workflow when using Open WebUI:
1. ollama show qwen3:30b-a3b-q8_0 --modelfile
2. Paste the contents of the modelfile into Open WebUI -> admin -> models, and rename it qwen3:30b-a3b-q8_0-monkversion-1
3. Change parameters, e.g. `num_gpu 90` to change the number of offloaded layers, etc.
4. Keep or delete the old file
Pay attention to the modelfile; it will show you something like this:

# To build a new Modelfile based on this, replace FROM with:
# FROM qwen3:30b-a3b-q8_0

and you need to make sure the paths are correct. I store my models on a large NVMe drive that isn't ollama's default location, as an example of why that matters.
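The same edit can be done purely from the CLI (same model name as above; `num_gpu` as the example parameter):

```sh
# dump the existing modelfile
ollama show qwen3:30b-a3b-q8_0 --modelfile > Modelfile

# edit Modelfile: fix the FROM line as the header comment says,
# then add/change parameters, e.g.:
#   PARAMETER num_gpu 90

# register the tweaked variant under a new name
ollama create qwen3:30b-a3b-q8_0-monkversion-1 -f Modelfile
```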
EDIT TO ADD:
The 'modelfile' workflow is a pain in the booty. It's a dogwater pattern and I hate it. Some of these models are 30 to 60GB and copying the entire thing to change one parameter is just dumb.
However, ollama does a lot of things right, and it makes it easy to get up and running. vLLM, SGLang, mistral.rs, and even llama.cpp require a lot more work to set up.
I can’t stop thinking about this article. I spent a long time in ad tech before switching to broader systems engineering. The author captures something I've struggled to articulate to friends and family about why I left the industry.
The part that really struck me was framing advertising and propaganda as essentially the same mechanism - just with different masters. Having built targeting systems myself, this rings painfully true. The mechanical difference between getting someone to buy sneakers versus vote for a candidate is surprisingly small.
What's frustrating is how the tech community keeps treating the symptoms while ignoring the disease. We debate content moderation policies and algorithmic transparency, but rarely question the underlying attention marketplace that makes manipulation profitable in the first place.
The uncomfortable truth: most of us in tech understand that today's advertising systems are fundamentally parasitic. We've built something that converts human attention into money with increasingly terrifying efficiency, but we're all trapped in a prisoner's dilemma where nobody can unilaterally disarm.
Try this thought experiment from the article - imagine a world without advertising. Products would still exist. Commerce would still happen. Information would still flow. We'd just be freed from the increasingly sophisticated machinery designed to override our decision-making.
Is this proposal radical? Absolutely. But sometimes the Overton window needs a sledgehammer.
P.S. If you are curious about the relationship between Sigmund Freud, propaganda, and the origins of the ad industry, check out the documentary “Century of the Self”.
Searching the web is a great feature in theory, but every implementation I've used so far looks at the top X hits and then interprets it to be the correct answer.
When you're talking to an LLM about popular topics or common errors, the top results are often just blogspam or unresolved forum posts, so you never get an answer to your problem.
It's more an indicator that web search is more unusable than ever, but it's interesting that it degrades the performance of generative systems nonetheless.
> We burned months trying (and ultimately failing) to get Nvidia’s host drivers working to map virtualized GPUs into Intel Cloud Hypervisor... We think there’s probably a market for users doing lightweight ML work getting tiny GPUs. This is what Nvidia MIG does, slicing a big GPU into arbitrarily small virtual GPUs. But for fully-virtualized workloads, it’s not baked; we can’t use it. Near as we can tell, MIG gives you a UUID to talk to the host driver, not a PCI device.
Apparently this is technically possible, if you can find the right person at Nvidia to talk to about vGPU licensing and magic incantations. Hopefully someone reading this HN front-page story can make the introduction.
Anthropic is a Public Benefit Corporation whose governance is very different from a typical company's, in that it doesn't put shareholder ROI above all else. A majority of its board seats are reserved for people who hold no equity whatsoever and whose explicit mandate is to look out for humanity.
We're working on it. Our next hardware should have USB HID, FIDO2, and persistent storage; the persistent storage will be per app, per device. Well, that's the idea anyway. I'm not sure when we'll be done, but check our website in a few months, or join our mailing list.
DeepSeek was built on the foundations of public research, a major part of which is the Llama family of models. Prior to Llama, open-weights LLMs were considerably less performant; without Llama we might not have gotten Mistral, Qwen, or DeepSeek. This isn't meant to diminish DeepSeek's contributions, however: they've been doing great work on mixture-of-experts models and really pushing the community forward on that front. And, obviously, they've achieved incredible performance.
Llama models are also still best in class for specific tasks that require local data processing. They also maintain positions in the top 25 of the lmarena leaderboard (for what that's worth these days with suspected gaming of the platform), which places them in competition with some of the best models in the world.
But, going back to my first point, Llama set the stage for almost all open weights models after. They spent millions on training runs whose artifacts will never see the light of day, testing theories that are too expensive for smaller players to contemplate exploring.
Pegging Llama as mediocre, or a waste of money (as implied elsewhere), feels incredibly myopic.