I like it, and I'd like to see an entire Linux OS done in a similar manner. Or a shell / wrapper / whatever.
A sane, homogeneous CLI for once, one that treats its user as a human instead of forcing them to remember the incompatible invocation options of `tar` and `dd` for absolutely no reason.
zip my-folder into my-zip.tar with compression level 9
write my-iso ./zip.zip onto external hard drive
git delete commit 1a4db4c
convert ./video.mp4 and ./audio.mp3 into ./out.mp4
merge ./video.mp4 and ./audio.mp3 to ./out.mp4 without re-encoding
And add amazing autocomplete, while allowing as many wordings as possible. No need for LLMs. One can dream.
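For comparison, roughly what those would have to look like today (from memory and hedged; `/dev/sdX` stands in for whatever the external drive actually is, and each line is just one of several possible incantations):

    tar -cf - my-folder | gzip -9 > my-zip.tar.gz    # no uniform compression-level flag; pipe through gzip
    sudo dd if=./zip.zip of=/dev/sdX bs=4M status=progress
    git rebase --onto 1a4db4c^ 1a4db4c               # one of several ways to "delete" a commit
    ffmpeg -i ./video.mp4 -i ./audio.mp3 -map 0:v -map 1:a -c copy ./out.mp4

Four tools, four completely different conventions for naming the input, the output, and the options.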
> zip my-folder into my-zip.tar with compression level 9
What do you mean, I don't have write permissions in the current working directory? I meant for you to put the output in $HOME, I mean /tmp, I mean /var/tmp, I mean on the external hard drive, no, the other one.
> git delete commit 1a4db4c
What did you do? I didn't mean delete it and erase it from the reflog and run gc! I just meant "delete it" the way anyone would ever mean that! I can never get it back now!
These are things that definitely need interactive prompts before running, or that should fail as ambiguous otherwise. Let's not pretend these are impossible problems to overcome design-wise.
helpme ffmpeg assemble all the .jpg files into an .mp4 timelapse video at 8fps
helpme zip my-folder into my-zip.tar with compression level 9
helpme git delete commit 1a4db4c
...
This originated from an ffmpeg wrapper I wrote, but then I realized it could be used for all commands.
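A minimal sketch of how such a wrapper could be prototyped today (purely illustrative, not the wrapper referenced above; assumes Simon Willison's `llm` CLI is installed, and adds the confirmation prompt discussed upthread):

    # hypothetical helpme(), for illustration only
    helpme() {
      local cmd
      cmd=$(llm "Reply with exactly one shell command, nothing else, that does: $*")
      printf 'Suggested: %s\nRun it? [y/N] ' "$cmd"
      read -r answer
      [ "$answer" = "y" ] && eval "$cmd"
    }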
Depends on your definition of "using" JavaScript. The main difference between plain TypeScript and TS-based JSDoc is the need for an additional build step. Being able to FTP-upload your `.js` files and be done with it is a remarkable advantage over Vite/Webpack/whatever in small to medium-sized projects. If editor-based type support is sufficient for you (i.e. no headless checks), you won't need to install any TS packages at all, either. tsserver is still used in the background, but so are thousands of other binaries that keep your editor, OS and computer running, so I don't see that as an argument.
> So (TS)JSDoc support is a relic from when Microsoft was trying to get market share from Google.
> Today in 2025, TS offers so much more than the (TS)JSDoc implementation. Generics, Enums, Utility types, Type Testing in Vitest, typeguards, plus other stuff.
None of that is true! Please don't share misinformation without looking it up first.
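Generics, utility types, and type guards all work in TS-flavored JSDoc today. A minimal sketch, type-checked by tsserver straight from a `.js` file with no build step (all names made up):

    // @ts-check

    /** @typedef {{ id: number, name?: string }} User */

    /**
     * Generics via @template:
     * @template T
     * @param {T[]} xs
     * @returns {T | undefined}
     */
    function first(xs) { return xs[0]; }

    /**
     * A type guard via a type predicate:
     * @param {unknown} v
     * @returns {v is User}
     */
    function isUser(v) {
      return typeof v === "object" && v !== null && "id" in v;
    }

    // Utility types work as well:
    /** @type {Partial<User>} */
    const draft = { name: "Ada" };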
I see where you're coming from, but "on a phone" hasn't been a valid qualifier for performance benchmarks in a long time. Phones and their GPUs are ridiculously powerful nowadays. Twenty years ago we were already running 3D apps smoothly on GPUs with orders of magnitude fewer FLOPS, apps and games with far more going on than a blurry, glassy alarm clock, albeit somewhat less beautiful. When I run Fluid Glass on a 10-year-old laptop with an integrated GPU and move my cursor, I see less than 10 FPS. When will we finally start readjusting our expectations for "fast" software and stop blindly following Wirth's law?
> Neither are humans, so this argument doesn't really stand.
Even when we give a spec to a human and tell them to implement it, we scrutinize and test the code they produce. We don't just hand over a spec and blindly accept the result. And that's despite the fact that humans have a lot more common sense, and the ability to ask questions when a requirement is ambiguous.
This is a nice idea, but looking back at how not just documentation but UX in general hasn't improved in the slightest over the last few decades, it's fair to say the only way we'll ever get close to anything like this is by leveraging personal LLM assistants, unfortunately.
So be it? An additional thought I had was that new tools could get awesome discoverability from a web directory of tutorials. Normally, the more exotic the creature, the better it hides. We could be publishing a lot of things that could be great for an audience of near zero. It wouldn't even need an awesome name to point at, just the right location in the right tutorial(s).
If LLMs are to do it, we could probably have them make videos too. I want Derek Banas on the project.
Pulling the source and compiling the package yourself instead of pulling a prebuilt package. Not much difference. Maybe slower build times, but more secure and better-tailored builds.
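For instance, on Debian-style systems (`<pkg>` is a placeholder, and deb-src entries must be enabled):

    apt-get source <pkg>                       # fetch upstream source + packaging
    sudo apt-get build-dep <pkg>               # install its build dependencies
    cd <pkg>-*/ && dpkg-buildpackage -us -uc   # build an installable .deb locally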
I ran some of these in comparison with Chrome, and Chrome was consistently faster, but only marginally (1-20%). I'm actually quite impressed: an integrated Intel HD 620 / 4x2.4 GHz (!) rendering 10,000 fish at 30 FPS in a web browser.
For reference, my numbers are from an RTX 4070; Firefox has no excuse for not being able to crack 60 FPS on a demo that looks like it's from the late 2000s in terms of graphics.
Isn't the FPS capped? I'm pretty confident it is, because it won't go above that on my system even with a trivial number of fish, and my monitor maxes out at 60 FPS...
> on a demo that looks like it's from the late 2000s
Okay... now I think I shouldn't take you seriously...
The literal visual aesthetics aren't really important for the test. You could swap in some nicer shaders and it wouldn't necessarily change the compute load. Hell, it could just be highly unoptimized. Benchmarks are mostly about having something static to test, not about making something visually pleasing.
I'm half kidding; it's entirely possible to overload any GPU with too many draw calls without the end result looking like much. These fish would run reasonably well on something from that era though, I'm sure; it's no GTA San Andreas.
But no, it's not capped at 30; it jumps to 33 or 34 sometimes with those settings. It's capped at 60, like in Chrome. Probably vsync.
I'm running Gazebo at 10x realtime and inference through CUDA, so trust me, the GPU is working. If Firefox doesn't take advantage of it, that's its problem. I've enabled every acceleration config setting I could find.
Great game idea. I'd suggest you limit the questions to international topics, though. I got a suggestion for "name a hurricane" - this probably wouldn't generate very interesting responses outside of America.
Pinta *IS* Paint.NET, just forked at an earlier stage, before the latter became closed-source software.
Also, is "washed and bleak" really that big of a problem for an image editor? It just doesn't matter what it looks like as long as the UI is intuitive and has the features you need. It should also be noted that Pinta very much matches your overall desktop appearance on Unix. I'm on XFCE, and it's so incredibly theme-integrated that it looks like part of the system.
Personally, I really like Pinta. The biggest problem is the bugs and crashes. I wish I could use the actual Paint.NET, but there's no way to run it on Linux.