
It has really become horrible in the last few days, probably weeks now. It is not just the AI-generated fake videos, which Google does not have the decency to mark, wasting my time and the time of others - there are now also mixes where some videos are real (I know, because some of them are years old) interleaved with AI fake crap. Now, I am able to spot many AI videos, but I bet many other people simply don't have that knowledge. That is also a generational problem: people have a harder and harder time separating the real from the fictional. But I have no desire to waste my time on fakes, so Google has now started to kill YouTube. I still have use cases for YouTube where this is less of a problem, e.g. good music (there, whether it is AI-generated or not makes no real difference IF the music is good; most AI music is crap anyway, but I can skip it and focus only on the good music), but even that is becoming more and more of a dead end.

Google already killed its search engine and other things, and it is continuing down that path to kill YouTube now. And, mind you, YouTube already had problems before AI - many content creators felt violated and abused by Google. I really think we should end Google as a company; it is not doing the world any good anymore. It has changed completely; the old Google is permanently gone. Nobody needs this AI-slop-infected, money-milking-via-ads machine.

Also, Google further ruined its already-terrible search engine with crap videos nobody cares about in 99% of the cases. Or the "others searched for xyz" suggestions - what the heck do I care what others did? If I want to find something, I don't want Google distracting me with excuses. Google abuses people here. It is an EVIL company now. These are not "accidents" - this is deliberately aimed at wasting people's time. I want compensation for Google wasting my time here. It was different in the past, so it is 100% Google's fault. No more excuses.

Google, you are the guilty party.


"researchers, including those spearheading the work, are cautious about overselling their results"

Either it is correct, or it is not. Perhaps it is somewhat correct, but then it may not be fully correct, so it would contain wrong information.

I write this here because science does not really work well when it is based on speculation. So this article is weird. It starts by speculating about something rather than analysing the findings, and then moves on to "textbooks have to be rewritten". Well, I think if you are doing science, you need to demonstrate that all the claims you make are correct - and others must be able to verify them, without any restriction whatsoever.

> “We just don’t have really any understanding of how RNAs can do this, and that’s the hand-wavy part,” Conine said.

So their theory is incomplete as of yet. That's not good.

There are examples of theories that were later on shown to be wrong.

See this article:

https://www.science.org/doi/10.1126/science.1197258

It was later retracted - a total fabrication. A lie.


> So their theory is incomplete as of yet. That's not good.

I hard disagree. Your comment reads to me as if a paper should either prove a new theory or disprove an existing one.

However, publishing new results without a clear understanding of how they work is just as valid, and this seems to be that. In Physics and Astronomy, new observations are often published without a theory of how they work. That is not a bad thing; it is part of the collaborative nature of science. The same holds true for papers suggesting a new theory but lacking either observational or theoretical proof.


These flaws aren't failings of the article, but universal to science, knowledge, and human endeavor:

> Either it is correct, or it is not. Perhaps it is somewhat correct, but then it may not be fully correct, so it would contain wrong information.

This describes all science and all knowledge; if that's not good enough, nothing is good enough. Everything is somewhat correct and somewhat incorrect; the best work is much more of the former. Newton's laws are mostly correct, somewhat incorrect.

> science does not really work well when it is based on speculation

Speculation is the foundation of science: it leads to a hypothesis, which leads to research, which leads to more speculation.

> their theory is incomplete as of yet. That's not good.

That also is the nature of all science. For example, papers include analyses of their own blind spots and weaknesses, and end with suggestions for further research by others.

> There are examples of theories that were later on shown to be wrong.

That's also part of science and of all human endeavor. If you disallow that, we might as well go back to being illiterate - everything we read is flawed, and inevitably some of it is wrong.


There is plenty of room in science for research that simply examines and collects data. I don't understand your argument that science should only be about demonstrating claims and "completing" theories. Is science not about experimenting to slowly form a more complete understanding of how our world works? Research that does little more than collect novel data and show probable correlations is still extremely valuable.

Detecting that an effect is present is separate from determining its strength and its mechanism. Showing that an effect is present is usually the first step before the other two.

> Either it is correct, or it is not. Perhaps it is somewhat correct, but then it may not be fully correct, so it would contain wrong information.

I don't understand your criticism.

It makes complete sense that the researchers are worried about the research being oversold. It's routine for media to take a scientific finding and grossly exaggerate its impact, e.g. "New research proves you can exercise your way to a fit child" or whatever.

This is science; we don't know if anything is "correct." The more compelling the research, the more we can adjust our priors as to what is "correct."

> There are examples of theories that were later on shown to be wrong.

There are also lots of examples where theories were later not shown to be wrong. What's your point?

Do you have an actual, concrete criticism of the methodology of the epigenetic research in TFA, or are you just bloviating?


Interesting that you mention it. Now I recall we also had wooden blocks; they were rectangular. I played with them a lot, building simple things, before I transitioned to LEGO. But those wooden blocks really were great - simple, durable, and one could do quite a lot with them. I think I also built houses for my cat back then. Quite amazing how dominant wood is - price-wise nothing beats it. And LEGO is now so expensive that I wouldn't buy it, due to those outrageous prices alone.

I think the name mruby kind of makes sense; we have MRI (Matz's Ruby Interpreter), hence a leading "m", and we have JRuby too. We also have TruffleRuby, which goes a bit against that naming scheme ... but we could call it truby. Nobody does, but we could. And MRI could also be called CRuby. These are not great names though. "Murby" is not a great name either; it reminds me of Murphy from Robocop.

I'd like mruby as some kind of fail-safe boot system: Ruby powering the operating system as much as possible (although ultimately Ruby is just syntactic sugar over C, so I am fine using C, of course).

The lack of documentation means that I'd just waste my time though. Not going to do that.

Also, I think mruby and MRI should not be separate; it doesn't do the project any good. It should be as modular as possible, but with one code base only.


I know both C and Ruby, and Ruby is far more than syntactic sugar over C.

Like not even close.


Agreed. Lua is older though. It was created in 1993.

mruby was created in 2012.

I have only two gripes with regard to mruby.

1) The primary users are C hackers. That's ok, but it means it also leaves out many other people. (Lua has the same problem of course.)

2) Documentation. This is something that plagues about 90% of Ruby projects, and it's not getting any better. It is as if, in Ruby, only 10% care about documentation - at best. Look at rack, opal, or wasm for Ruby - the documentation is TOTAL TRASH. Non-existent; look at rack. What a joke.

Now that Ruby is following Perl down the extinction path (sorry, the numbers are hard and real; there is no way to deny it), the Ruby community should be trying to reverse that trend. Instead you see mega-corporations such as Shopify pwning the remaining ecosystem and cannibalizing it, or people such as DHH ranting about how Europe is collapsing (what the actual ... https://world.hey.com/dhh/europe-is-weak-and-delusional-but-... - we need an alternative to Rails; how can anyone still work with DHH? Lo and behold, another Shopify guy. The message is clear for everyone to see now). None of this will revitalize Ruby. Without an active AND actively growing community, Ruby is set to die. I say this as someone who still uses Ruby daily; I am tired of the "rumours of Ruby dying are exaggerated" line. Yes, the rumours are exaggerated - but they are not just rumours. The numbers are solid. TIOBE alone, with its ten thousand faults, shows the trend clearly.


Ruby was used in Japan before Rails appeared, and it will continue to be used after Rails dies.

It's genuinely wild how many times people feel the need to declare that Ruby is dead.

If our competitors voluntarily choose to use tools that are demonstrably less productive, that's great news for us.

So yes: Ruby is totally dead. No question. Without a doubt.


Hanami remains a bright spark as a growing alternative Ruby app framework to Rails. The project is under active development; I've met the core dev, and they are lovely and much more humble than DHH. The project also aims to stick much closer to the Ruby way of doing things, as opposed to the Rails way.

Sure, the project just can't be as mature as Rails, but it deserves a look - and we need to get behind projects like this if we do indeed want to see Rails alternatives flourish and grow.

https://hanamirb.org/


> look at rack. What a joke.

Yeah, I think the Ruby world burned out of the whole "make everything nice" aesthetic when they alienated the everloving fuck out of _why. Now it's a post-apocalyptic wasteland where, if you want nice things, you had better be prepared to become a right expert, because you will have no one to turn to when it breaks. I don't mind - I don't make money coding anymore, and the challenge makes me feel alive again.

If I ever get to the point where I gotta learn the C API, I'll do it through mruby. But interop with Rust will be a much easier path into systems work.


Soon uv will deliver results without you even thinking about them beforehand!

I agree on the first part. This is valid for all programming languages though.

I disagree that you get the same safety as in C# anywhere, though. But even more importantly: I don't think people should write C#-like code in Ruby. It does not really work that well. It is better to write Ruby like Ruby - and even within Ruby there are many different styles. See zverok, who uses functional programming a lot. I stick to old-school boring OOP; my code is very boring, but usually well documented. I want to have to think as little as possible, because my brain is very inefficient and lazy; zverok has a good brain, so he can write more complex code. It is very alien code to me, but he documents his code a lot too. You can find some of his code here - it is a quite interesting Ruby style, although alien to me: https://zverok.space/projects/

(Ruby also adopted some habits from Perl, and I don't think writing Perl-like Ruby makes a lot of sense either. See the old global variables inspired by Perl; I cannot recall most of them off-hand, so I avoid using them. My brain really needs structure - it is such a poor thinking machine, really. And slow. My fingers are much faster than my brain.)
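For reference, a couple of those Perl-inspired globals, together with the readable aliases from the stdlib "English" library (a minimal sketch; only the standard library is assumed):

    require 'English'

    # $0 is the running script's name, Perl-style; $PROGRAM_NAME is the alias.
    puts $0
    puts $PROGRAM_NAME

    begin
      raise "boom"
    rescue
      puts $!.message          # the last exception, Perl-style
      puts $ERROR_INFO.message # the same object via the English alias
    end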


I agree somewhat, but I'd rather call it their brains adjusting than a religion.

I think about 99% of the people who suggest slapping types onto dynamic languages have already been using types for decades, or at least many years, in another language. Now they switch to a new language and want types because that is what their brain is used to.


Nah. 99.9% of the people who wanted the addition of DryStructs to a codebase I worked on wanted it because they'd been bitten, repeatedly, by someone sending a function a different kind of object than what it accepted, and it just not getting caught.

A robust type system allows you to make "compiler errors" out of runtime errors. One of these takes *way more tests to catch* than the other. I'll let you guess which.
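To make that concrete, here is a minimal sketch of the pattern being described with the dry-struct gem (the Order class and its attributes are invented for illustration):

    require 'dry-struct'

    module Types
      include Dry.Types()
    end

    class Order < Dry::Struct
      attribute :id,    Types::Strict::Integer
      attribute :total, Types::Strict::Float
    end

    Order.new(id: 1, total: 9.99)   # fine
    Order.new(id: "1", total: 9.99) # raises Dry::Struct::Error right here,
                                    # instead of failing three calls deeper

The struct turns a would-be NoMethodError somewhere downstream into an immediate, attributable error at the boundary.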


Nah, that's just a lack of understanding of the role of unit tests in dynamically typed languages.

Elsewhere in this thread, dynamic typing advocates malign the hassle of maintaining types, and it is always coupled with strong advocacy for an entire class of unit tests I don't have to write in statically typed languages.

And that's the problem: if you want your code to actually work, you do need to write those unit tests. A program not crashing doesn't mean it does the right thing.

With experience you will learn to either write unit tests or spend the same amount of time doing manual testing.

Once you start doing that, the unit tests just replace the static typing and you start shipping better code to your customers.
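As an illustration of the kind of test meant here, a minimal Minitest sketch (the parse_age helper is hypothetical, invented for the example):

    require 'minitest/autorun'

    # Hypothetical helper: coerce a params value into an Integer age.
    def parse_age(value)
      Integer(value)
    end

    class ParseAgeTest < Minitest::Test
      def test_accepts_numeric_strings
        assert_equal 42, parse_age("42")
      end

      def test_rejects_non_numeric_input
        # In a statically typed language much of this is a compile error;
        # here it has to be pinned down by a test.
        assert_raises(ArgumentError) { parse_age("forty-two") }
      end
    end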


> the best point for developer productivity IMO.

That is a fair opinion. My opinion is different, but that's totally fine - we have different views here.

What I completely disagree with, though, is this statement:

> Without any types in a dynamic language, you often end up with code that can be quite difficult to understand what kinds of objects are represented by a given variable.

I have been writing Ruby code for (almost) 22 years now. I never needed types as such. My code does not depend on types or assumptions about variables per se, although I do, of course, use .is_a? and .respond_to? quite a lot to determine sanitizing or logic steps (e.g. if an Array is given to a method, I may iterate over that Array and pass each element recursively back into the method).
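A minimal sketch of that pattern (the process method is made up just for illustration):

    # Hypothetical method: accepts a single item or an Array of items.
    def process(input)
      if input.is_a?(Array)
        input.each { |element| process(element) } # recurse into the Array
      elsif input.respond_to?(:strip)
        puts "processing #{input.strip}"          # anything string-like
      else
        puts "skipping #{input.inspect}"
      end
    end

    process("  one item  ")
    process(["several", "items", :mixed, 42])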

Your argument seems to be more about naming variables. People could name a variable in a certain way if they need this, e.g. array_all_people = []. This may not be super elegant, and it does not have as strong support as types would, but it invalidates the argument that people can't tell what variables are or do in complex programs. I simply don't think you need types to manage this part at all.

> Especially in older poorly factored codebases where there are often many variations of classes with similar names and often closely related functions it can feel almost impossible until you're really familiar with the codebase.

Note that this is intrinsic complexity that applies to ANY codebase. I highly doubt that, just by using types, people automatically understand 50,000 lines of code written by other people. That just doesn't make sense to me.

> With an actual fully typed language you're much more constrained in terms of what idioms you can use

I already don't want the type restrictions.

> A gradual type system on top of a dynamic language gets you some of the best of both worlds.

I reason that it combines the worst of both worlds: rather than committing to one, people add more complexity into the system.


The number of times I've been bitten by type safety issues is far smaller than the hassle of maintaining types. Seriously, it is a much smaller issue than people make it out to be. I will say that I do get bitten by the occasional `NoMethodError` on `nil`, but it really doesn't happen often. Since Ruby is very dynamic, it is hard to say how many of those errors would be caught even with type annotations. I also don't find myself needing to write specs to cover the different cases of type checking. For me it is a tradeoff with productivity.

That said, I do like it when an LSP can show some nice method signature info, and types are helpful in that way. I think it depends. At the surface level, I like some of the niceties that type annotations can bring, but I've seen how tricky defining more complex objects can get. Occasionally I would spend way too much time fighting types in Elixir with Dialyzer, and I've often not enjoyed TypeScript's verbosity. So I understand the cost of defining types. To me, the cost often outweighs the benefit of type annotations.


I fully agree with this. I'm building a site in OCaml, and just this week I spent 90 minutes debugging some weird error I didn't understand, because an implicit type was being pulled through in a global context. It was pretty irritating.

Maybe this isn't a fair comparison, since I'm pretty new to OCaml and I'm sure an experienced developer would have seen what was happening much quicker than I did. But I'm not sure I've spent 90 minutes TOTAL on type errors doing Python web dev.

Maybe I'm exaggerating - I probably just don't remember the first times I hit type errors - but my experience with them was that I would very occasionally hit one, and then I would just fix it. Pretty easy.


I would strongly oppose mandatory typing for these reasons, but I'm very happy to have stable low level libraries add type annotations.

When I'm writing code that will be distributed to other devs, I feel type annotations make more sense, because they help document the library and there is less ambiguity about what a method will take. As with everything: "it depends".

That's true, but it can also add unnecessary constraints if done thoughtlessly.

E.g. if you require an input to be a StringIO, instead of requiring an object that responds to "read".

Too often I see people add typing (be it with a project like this, or with is_a? or respond_to?) that makes assumptions about how the caller will want to use it, rather than stating actual requirements.
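A minimal sketch of the difference (both read_all variants are made up for illustration):

    require 'stringio'

    # Over-constrained: rejects File, Tempfile, sockets, ...
    def read_all_strict(io)
      raise TypeError, "StringIO required" unless io.is_a?(StringIO)
      io.read
    end

    # States the actual requirement: anything readable works.
    def read_all(io)
      raise TypeError, "needs #read" unless io.respond_to?(:read)
      io.read
    end

    read_all(StringIO.new("from a string")) # works
    read_all(File.open(__FILE__))           # also works; the strict variant refuses it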

That is why I prefer projects to be very deliberate and cautious about how they use types, and keep it to a minimum.


Well said. There are many problems you have to deal with when writing code, and type annotations only solve one particular kind. Even type annotations can be wrong: when you're dealing with data from external sources, dynamic languages like Python, JavaScript and Ruby will happily parse any valid JSON into a native data structure, even if it is not what you specified in your type hints. Worse yet, you may not even notice, unless you also have runtime type checks.
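A minimal Ruby sketch of that failure mode (the payload is invented; only the stdlib json is assumed):

    require 'json'

    # The surrounding code may assume an Integer age, but the parser
    # returns whatever the JSON actually contains.
    data = JSON.parse('{"age": "42"}')
    data["age"] + 1
    # => TypeError: no implicit conversion of Integer into String

Nothing complains until the value is actually used, which is exactly the "you may not even notice" problem.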

The kind of messy code base that results from (large) numbers of (mediocre) developers hastily implementing hacky bug fixes and (incomplete) specifications under time pressure isn't necessarily fixed by any technical solution such as type hints.

