Hacker News | akhdanfadh's comments

Firefox was my main browser after the Chrome MV3 changes, but now I'm moving to Orion by Kagi. On my MacBook M1, I found that Firefox hogs the battery, judging from the energy impact in macOS's Activity Monitor (12-hour average power >1000, compared to ~350 for Orion). Don't expect extensions to work well on Orion, though, but I can live with that for now.


It supports both Chrome and Firefox extensions. Vimium works flawlessly.


I'm interested in what qualifies as production-quality Python code. Do companies typically address this through hiring practices, or through specific training on production patterns? Or is it more about following how open-source Python packages are developed?


While there are some Python-specific idioms (like how you parallelize work and, especially, how you iterate in large loops), it's mostly the same as with any other language.
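A minimal sketch of the two idioms mentioned, with hypothetical function names: a generator to iterate lazily over large inputs, and `concurrent.futures` with processes (not threads) to parallelize CPU-bound work around the GIL.

```python
from concurrent.futures import ProcessPoolExecutor
from typing import Iterator


def score(x: int) -> int:
    """Placeholder for some CPU-bound work."""
    return x * x


def stream(n: int) -> Iterator[int]:
    """Lazy generator: avoids materializing a large list in memory."""
    yield from range(n)


def parallel_scores(n: int) -> list[int]:
    # Processes sidestep the GIL for CPU-bound loops; chunksize reduces
    # per-item IPC overhead when the iterable is large.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(score, stream(n), chunksize=64))


if __name__ == "__main__":
    print(parallel_scores(8))
```

For I/O-bound loops you would reach for `ThreadPoolExecutor` or `asyncio` instead; the process pool only pays off when the per-item work is heavy enough to cover pickling costs.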

All of those are checked in interviews, but can also be learned on the job if there are sufficiently experienced Python developers.

A clear giveaway is when someone says they do "scripting" in Python (that usually means no testing, no CI/CD, no code style checkers, no typing...).
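As a rough illustration of the gap being described (names and the tax example are made up): production-style Python tends to carry type hints, docstrings, and a unit test that a CI job would pick up, all of which a throwaway script usually omits.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Invoice:
    subtotal_cents: int
    tax_rate: float


def total_cents(inv: Invoice) -> int:
    """Total including tax, rounded to the nearest cent."""
    return round(inv.subtotal_cents * (1 + inv.tax_rate))


# The kind of test pytest would collect in CI -- the part that is
# typically missing from "just a script".
def test_total_cents() -> None:
    assert total_cents(Invoice(subtotal_cents=1000, tax_rate=0.1)) == 1100
```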


Interesting, I hadn't fully considered how hiring constraints influence technology choices.


This is actually promising, kudos!

An AI tagging feature like those in Pocket and Karakeep seems helpful at first. But months later, you end up with a pile of tags to manage. Content recommendation, especially if it only considers what we saved, could replace tagging, I guess. I wonder how you can offer this for free, though.

Also, your HN/Reddit integration is exactly what I'm looking for. The way I save things from HN in Karakeep so far is to save the main article and manually add the HN URL to the note.


So is it possible to load the Ollama deepseek-r1 70b (43 GB) model on my 24 GB VRAM + 32 GB RAM machine? Does this depend on how I load the model, i.e., with Ollama instead of other alternatives? AFAIK, Ollama is basically a llama.cpp wrapper.

I have tried deploying one myself with OpenWebUI + Ollama, but only for small LLMs. Not sure about bigger ones; I'm worried they might crash my machine somehow. Are there any docs? I'm curious how this works, if it's possible at all.
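One detail that makes this feasible: llama.cpp (and therefore Ollama) can offload only some transformer layers to VRAM (`n_gpu_layers` in llama.cpp, `num_gpu` in Ollama) and run the rest from system RAM, at reduced speed. A back-of-envelope sketch, assuming the ~43 GB GGUF splits roughly evenly across 80 layers (the layer count and the 2 GB reserve for KV cache/overhead are assumptions, not measured values):

```python
def layers_on_gpu(model_gb: float, n_layers: int,
                  vram_gb: float, reserve_gb: float = 2.0) -> int:
    """Rough estimate of how many layers fit in VRAM, assuming equally
    sized layers and reserving some VRAM for KV cache and overhead."""
    per_layer_gb = model_gb / n_layers
    usable_gb = max(vram_gb - reserve_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))


# ~43 GB model, 80 layers (assumed), 24 GB VRAM
n_gpu = layers_on_gpu(model_gb=43.0, n_layers=80, vram_gb=24.0)
print(n_gpu)  # layers offloaded to the GPU; the rest run from system RAM
```

Under these assumptions roughly half the layers land on the GPU and the other ~21-22 GB sits in your 32 GB of RAM, so it should load rather than crash, but token generation will be far slower than a fully offloaded model since every forward pass crosses the CPU/GPU boundary.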

