larkost's comments | Hacker News

The discharge petition to allow a vote on the bill that forced this release was going nowhere until President Trump declared that he was onboard; only then did it happen.

My guess is that someone suggested to Trump that they could redact most of the bad bits and plausibly deny that they were doing that, and he decided that this was the path of least resistance.

So I don't think there is any chance that he will easily allow any more votes that would put on more pressure, unless the pressure gets so bad that he has no choice (read: Newsmax and Fox News both start pressure campaigns).


This is not correct at all.

The GOP are masters of using parliamentary procedure to avoid votes that would pass but that they don't want to pass: nominations and bills that they can't defend voting against.

This was a big issue in the Obama era, when Mitch McConnell was determined to make Obama a one-term president and decided to "obstruct, obstruct, obstruct" on things that historically had never been obstructed, or at least not to the degree they were under Obama. For example, judicial appointments would get stuck in committee and never come up for a vote, because the vote would have passed. The most famous example of this was the Merrick Garland Supreme Court nomination, which was never given a vote for 11 months, something completely unprecedented.

The GOP has a narrow working majority in the House. The House, unlike the Senate, has the discharge petition process: if a majority of House members sign a petition, it forces a vote. All the Democratic reps signed on, so it only took a handful of GOP reps (in the end, 4) to reach the 218-signature threshold.

The lengths Mike Johnson went to in order to avoid this were unprecedented. 3 Democratic reps have died in office this Congressional session, and Texas has consistently delayed a special election to avoid seating a replacement. Arizona did hold a special election; a Democrat won, and Johnson avoided swearing her in for 7 weeks because she would provide the 218th and final signature on the discharge petition.

4 GOP reps signed on, and the White House and the Speaker both put incredible pressure on them to change their minds. It was a big part of why Trump fell out with Marjorie Taylor Greene (she was one of the 4).

Why go to all this effort? Because Epstein was core foundational mythology for MAGA, reps couldn't defend voting against it and everybody knew it.

Johnson then tried to pass it using a procedure called unanimous consent. Basically, rather than go through a roll call of up to 435 members, the House is given the option to object; if anyone does, it forces a vote. Why would he do this? Because there's no voting record for unanimous consent. It gives members cover to say they did or didn't vote for something, whereas a roll call is an official record. Democrats objected, and thus we got an official vote with only 1 "no" vote (Rep. Clay Higgins).

The Senate passed it with unanimous consent.

This was a veto-proof majority. So if it was so popular, why not just schedule a vote to begin with?

And the obstruction continues. Johnson again put the House in recess 1 day before the 30-day deadline. Coincidence? I think not.

And now we're getting illegal redactions, a missed 30-day deadline, and a drip feed of document releases, because (IMHO) they can't find enough ethically challenged lackeys to do the document review and redact the names and images of Trump and other powerful people, many of whom are likely donors.

Johnson may well lose his position over this. The Attorney General has a non-zero chance of being impeached and removed over it.

There is no putting this genie back in the bottle. It's not going away, and at no point was the Trump circle confident they could redact their way out of it. They are in full-on panic mode right now.


An actual meltdown at sea would have the now-molten uranium come into contact with seawater, which would instantly flash into high-pressure steam, throwing the uranium up into a cancer-causing cloud the likes of which the world has never seen.

This is an absolutely terrible idea for how to deal with a meltdown.


Doing the math, it looks like the amount of uranium in pre-disaster Chernobyl was about 200 metric tons. Apparently, that can bring 333 ML (about 133 Olympic-sized swimming pools) of room-temperature water to a boil.
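
I can't verify the uranium-side heat release, but the water side of that figure checks out as a back-of-the-envelope sketch (my assumptions: water heated from 20 °C to 100 °C, ignoring vaporization):

    WATER_KG   = 333e6   # 333 ML of water, roughly 133 Olympic pools at 2.5 ML each
    C_WATER    = 4186    # J/(kg*K), specific heat of liquid water
    DELTA_T    = 80      # 20 C -> 100 C
    URANIUM_KG = 200e3   # ~200 metric tons of fuel

    energy_j = WATER_KG * C_WATER * DELTA_T
    print(f"{energy_j:.2e} J to bring the water to a boil")                   # ~1.1e14 J
    print(f"{energy_j / URANIUM_KG / 1e6:.0f} MJ implied per kg of uranium")  # ~560 MJ/kg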

I did find it odd that there was no discussion of whether those other media still show the exact colors they had when they were originally created. I know from experience that colors fade, but the argument seems to ignore that.

I also know that most of the old paintings we have today have been through multiple rounds of "refreshment" to counter both the fading and the dirt/soot they were exposed to over the years (remember: most of these were displayed by torchlight/lamplight/candlelight for centuries). Nowadays there is a real emphasis on trying to reproduce the original aesthetic, but that has not always been the case.

So I would want a better discussion of how accurate those "standard candles" are.


GitHub still manages the orchestration and monitoring of runs that you execute on your own (or other cloud) hardware. They have just decided that they are no longer going to do this for free.

So the question becomes: is $0.002/minute a good price for this? I have never run GitHub Actions, so I am going to assume that experience on other, similar systems applies.

So if your job takes an hour to build and run through all tests (a bit on the long side, but I have some tests that run for days), then you are going to pay GitHub $0.12 for that run. You are probably going to pay significantly more for the compute to run it (especially if you are running on multiple testers simultaneously). So this does not seem too bad.

This is probably going to push a lot of people to invest more in parallelizing their workloads (to cut wall-clock time) and/or putting them on faster machines in order to reduce the number of minutes they are billed for.
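
To make the arithmetic concrete, a small sketch (assuming the fee works like GitHub's hosted-runner billing, i.e. every job-minute is billed and minutes on concurrent runners add up; that detail is my assumption, not something stated in the announcement):

    RATE = 0.002  # USD per billed runner-minute (the announced self-hosted fee)

    def run_cost(minutes_per_runner, runners=1):
        # Billed minutes are assumed to be the sum over all runners in the run.
        return minutes_per_runner * runners * RATE

    print(run_cost(60))       # one runner for an hour           -> $0.12
    print(run_cost(15, 4))    # same work split across 4 runners -> $0.12 (wall clock drops, bill doesn't)
    print(run_cost(30))       # a machine twice as fast           -> $0.06
    print(run_cost(60) * 30)  # the hour-long job, daily, monthly -> $3.60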

I should note that if you are doing something similar in AWS using SSM (Systems Manager), I found that if you are running small jobs on lots of systems, the AWS charges can add up very quickly. I had to abandon a monitoring-system idea I had for our fleet (~800 systems) because the cost of each monitoring pass was $1.84 (I needed a small amount of data from an on-worker process). Running that every 10 minutes was going to be more than $250/day. Writing/running my own monitoring system was much cheaper.
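
For comparison, the fleet-monitoring math above works out roughly as follows (a sketch; I am reading the $1.84 "per-hit" figure as the cost of one monitoring pass across the whole ~800-system fleet):

    COST_PER_PASS  = 1.84            # USD per monitoring pass across the fleet
    PASSES_PER_DAY = 24 * 60 // 10   # one pass every 10 minutes -> 144/day

    print(PASSES_PER_DAY * COST_PER_PASS)   # ~$265/day, i.e. "more than $250/day"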


As a solo founder who recently invested in self-hosted build infrastructure because my company runs ~70,000 minutes/month, this change is going to add an extra $140/month for hardware I own. And that's just today; this number will only go up over time.

I am not open to GitHub extracting usage-based rent from me for using my own hardware.

This is the first time in my 15+ years of using GitHub that I'm seriously evaluating alternative products to move my company to.


But it is not for hardware you own. It is for the use of GitHub's coordinators, which they have been providing to you for free up to now. They have now decided that that service is something they are going to charge for. Your objection to GitHub "extracting usage-based rent from me" seems to ignore that you have been getting the use of their hardware for free.

So, like I said, the question for you is whether that $140/month of service is worth the money to you, whether you can find a better-priced alternative, or whether you can build something cheaper yourself.

My guess is that once you think about this some more you will decide it is worth it, and probably spend some time trying to drive down your minutes/month a bit. But at $140 a month, how much time is that worth investing?


No. It is not worth a time-scaled cost each month for them to start a job on my machines and store a few megabytes of log files.

I'd happily pay a fixed monthly fee for this service, as I already do for GitHub.

The problem here is that this is like a grocery store charging me money for every bag I bring to bag my own groceries.

> But at $140 a month, how much time is that worth investing?

It's not $140/month. It's $140/month today, when my company is still relatively small and it's just me. This cost will scale as my company scales, in a way that is completely bonkers.


> The problem here is that this is like a grocery store charging me money for every bag I bring to bag my own groceries.

Maybe they can market it as the GitHub Actions corkage fee.


> It is not worth a time-scaled cost each month for them to start a job on my machines and store a few megabytes of log files

If it is so easy, why don't you write your own orchestrator to run jobs on the hardware you own?


> The problem here is that this is like a grocery store charging me money for every bag I bring to bag my own groceries.

This is an odd take because you're completely discounting the value of the orchestration. In your grocery store analogy, who's the orchestrator? It isn't you.


Do you feel that orchestration runs on a per-minute basis?

As long as they're reserving resources for your job during the period of execution, it does.

Charging people to maintain a row in a database by the minute is top-tier, I agree.

If you really think that's all it is, I would encourage you to write your own.

It would be silly to write a new one today. There are plenty of open source + indie options to invest in instead.

For scheduled work, cron + a log sink is fine, and for pull-request CI there are plenty of alternatives that don't charge by the minute to use your own hardware. The irony here, unfortunately, is that the latter requires I move entirely off of GitHub now.


So they are selling maybe a cent's worth of their CPU time at a full minute's price.

> My guess is that once you think about this some more you will decide it is worth it, and probably spend some time trying to drive down your minutes/month a bit. But at $140 a month, how much time is that worth investing?

It's $140 right now. And if they want to squeeze you for cents' worth of CPU time (you're already paying separately for artifact storage), they *will* squeeze harder.

And more importantly, *RIGHT NOW* it costs more per minute than running a decent-sized runner!


I get the frustration. And I'm no GitHub apologist either. But you're not being charged for hardware you own. You're being charged for the services surrounding it (the action runner/executor binary you didn't build, the orchestrator you configure in their DSL, the artefact and log retention you're getting, the plug-and-play with your repo, etc.). Whether or not you think that is a fair price is beside the point.

That value to you is apparently less than $140/mo. Find the number you’re comfortable with and then move away from GH Actions if it’s less than $140.

I spent more than 10 years running my own CI infra with Jenkins on top. In 2023 I gave up Jenkins and paid for BuildKite. It's still my hardware; BuildKite just provides the "services" I described earlier. Yet I paid them a lot of money to provide their services for me on my own hardware. GH Actions, even while free, was never an option for me. I don't like how it feels.

This is probably bad for GitHub but framing it as “charging me for my hardware” misses the point entirely.


Feels like a new generation is learning what life is like when Microsoft has a lot of power. (tl;dr: they try to use it.)

I was born in 1993. I kind of heard lots of rumbling about Microsoft being evil as I grew up, but I never fully understood the antitrust thing.

It used to surprise me that people would see cool tech from Microsoft (like VS Code) and complain about it.

I now see the first innings of a very silly game Microsoft are going to start playing over the next few years. Sure, they are going to make lots of money, but a whole generation of developers are learning to avoid them.

Thanks for trying to warn us old heads!


ABuse it.

Feels like listening to the Halo generation being surprised that MS fucks them over, because they thought MS were the Good Guys, since they Made The Thing They Like.

Yeah, I'm no GitHub apologist, but I'll be one in this context. This is actually a not-unreasonable thing to charge for. And a price point that's not-unreasonable.

It makes sense to do usage-based pricing with a generously-sized free tier, which seems to be what they're doing? Offering the entire service for free at any scale would imply that you're "paying" for/subsidizing this orchestration elsewhere in your transactions with GitHub. This is more-transparent pricing.

Although, this puts downward pressure on orgs' willingness to pay such a large price for GH enterprise licenses, as this service was hitherto "implicitly" baked into that fee. I don't think the license fees are going to go down any time soon, though :P


I run about 1 action a day, taking 18h, on 2 runners: one a self-hosted 24 GB RAM, 8-core ARM VPS, and one a 64 GB 13900K x86 dedicated server.

Now the GitHub pricing change definitely costs more per month than both servers combined... (they cost about $60 together).

The 3-step GitHub Action builds around 1200 Nix packages and derivations, but produces only around 50 lines of logs total if successful, and maybe 200 lines when a failure occurs. And I'm supposed to pay $4 a day for that? I wonder what actual costs are involved on their side in waiting for a runner to complete and storing 50 lines of log.


It sounds like you'd be better off self-hosting Jenkins. The other issue with GHA is that they cap jobs on hosted runners at 6 hours.

Despite what people say about "maintaining" Jenkins (whatever that means to them personally), you can set it up in an IaC way, including the jobs. You can migrate/create jobs en masse via its API (I did this about 10 years ago for a large US company converting from what was then called TFS).


What problem does Jenkins solve? When we got Jenkins working how we wanted, it was a giant Groovy script that handled checkout manually.

I'll likely check out Buildbot or just switch to GitLab.

Somewhere around $0.00004, probably.

Nice profit margin…


You know, one might ask what the base fee of $4k/mo (in my org's case) is covering, if not the control plane?

Unless you're on the free org plan, they're hardly doing it "for free" today…


Exactly this. It’s not like they don’t have plenty of other fees and charges. What’s next, charging mil rates for webhook deliveries?

> They have just decided that they are no longer going to do this for free.

Right; instead, they now charge the full cost of orchestration plus a runner for just the orchestration part, effectively making the basic runner free.

(Considering that compute for "self-hosted" runners is often also rented from some party that isn't Microsoft, this is arguably leveraging the market power in CI orchestration that is itself derived from their market power in code hosting to create/extend market power in compute for runners, which sounds like a potential violation of both the Sherman Act and the Clayton Act.)


Sure, but that shouldn't be a time-dependent charge. If my build takes an hour to build on GH's hardware, sure thing, charge me for that time. But if my build takes an hour to build on _my_ hardware, then why am I paying GH for that hour?

I get being charged per run, to recoup the infra cost, but how does my total runtime on my machine affect what GH needs to spend to trigger my build?


> is $0.002/minute a good price for this

It was free, so anything other than free isn't really a good price. It's hard to estimate the cost on GitHub's side when the hardware is mine, and therefore hard to accept this easily.

(GitHub is already polling my agent to know its status, so whether it is "idle" or "running an action" shouldn't really change a lot on their side.)

...And we already pay a monthly subscription for team members and Copilot.

I have a self-hosted runner because I must have many tools installed for my builds, and I find it kind of counterproductive to reinstall those tools for each build, as that takes a long time. (Yeah, I know, "reproducible builds" and so on, but I only have 24h in most of my days.)

Even at a few hundred minutes a month, we're still under a few dollars, so it's not worth spending two days to improve anything... yet.


Is it polling the runner, or is the runner sending it progress?

The runner sends progress info, polls for jobs, and so on. The runners don't have to be accessible from GitHub; they just need general internet access (e.g., from behind a NAT device).

> $0.002/minute a good price for this.

It is not only not good; it is outrageous. The amount of compute required for orchestration is small (async operations), and they already charge you for artifact storage. You need to understand that the orchestration just receives details (inbound) from the runner. It needs very few resources.


> is $0.002/minute a good price for this

Absolutely not, since it's the same price as their cheapest hosted option. If all they're doing is orchestration, why the hell are they charging per minute instead of per action or some other measure that recognizes the difference in their cost between self-hosted and GitHub-hosted?


> is $0.002/minute a good price for this

I think a useful framing of this question is: would you run a c7gn.large instance just to do this orchestration?


Additionally, they could just self-host their code, since the code is the data, and data is a moat.

> GitHub has still been managing the orchestration and monitoring of runs that you run on your own (or other cloud) hardware. They have just decided that they are no longer going to do this for free.

This argument is disingenuous. Companies pay GitHub per seat for access to PR functionality, etc. What's next, charging per repository, because of a decision to no longer provide the repositories "for free"? It's not free; you're paying already; it's included in the per-seat pricing. If you charge per seat, then some users hardly use the service and some use it a lot. The per-seat pricing model is supposed to make the service profitable overall regardless of the usage levels of individual users.


No, the original Microsoft business model was to get the incumbent (IBM) to bundle your product (DOS, bought from someone else) with their product so that you had a near-monopoly, then use that to sell your other software on top of it, occasionally making technical changes to make things difficult for your competitors.


China uses Capitalism as a tool where the Party feels it would be beneficial (for the Party), and crushes it mercilessly when it gets in the way (other than this real estate problem they have right now).

In the U.S. we have mistaken Capitalism for a religion, and so it wags the dog, so to speak. Since our founding we have made some attempts at finding a balance between our use of the tools of Capitalism and socialism (more in the Democratic Socialism style than the Communism style), and we had a good run in the decades after WWII. But starting with McCarthyism, and really picking up under Reagan, we have prided ourselves on adopting Capitalism as a religion, and it really shows up in both income inequality and the increasing role of (and corrupting influence of) money in our politics/government.


Yes, the features in YAML absolutely get in the way. The "Norway problem", for example ("no" translates to False in versions prior to YAML 1.2), is just the one most people have run into. Here is a nice overview of some of the problems:

https://ruudvanasseldonk.com/2023/01/11/the-yaml-document-fr...

I would argue that most, if not all, of those problems stem from too many features.
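
A concrete illustration, using PyYAML (which implements the YAML 1.1 rules where the Norway problem lives):

    import yaml  # pip install pyyaml

    print(yaml.safe_load("country: no"))     # {'country': False}  <- the Norway problem
    print(yaml.safe_load("version: 1.20"))   # {'version': 1.2}    <- trailing zero silently dropped
    print(yaml.safe_load("country: 'no'"))   # {'country': 'no'}   <- quoting fixes it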


You could go the other way and only have strings, a la CONL [0], which feels reasonable once I got over my initial shock. Your program is already going to enforce data types; why force that into the config language?

[0] https://cirw.in/blog/conl
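
For example, a minimal sketch of the strings-only idea (the keys and values here are made up, not from CONL's docs): everything arrives as a string, and the program applies the types it wanted anyway.

    raw = {"port": "8080", "debug": "no", "timeout_s": "1.5"}   # as parsed: all strings

    config = {
        "port": int(raw["port"]),
        "debug": raw["debug"] in ("yes", "true", "1"),   # explicit rule, no Norway problem
        "timeout_s": float(raw["timeout_s"]),
    }
    print(config)   # {'port': 8080, 'debug': False, 'timeout_s': 1.5}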


Many (possibly all) of the boats in question were not capable of making it to the U.S. from where they were hit without refueling multiple times. It is not possible that they were headed directly to the U.S.


>> Many (possibly all) of the boats in question were not capable of making it to the U.S.

Now none of them are.


I haven't used Jenkins in a few years, so some of this might have changed, but in working with it I saw that Jenkins has a few fundamental flaws that I don't see the project working to change:

1. There is no central database to coordinate things. Rather, it tries to manage serialization of important bits to and from XML files, for a lot of things and a lot of concurrent processes. If you ever think you can manage concurrency better than MySQL/Postgres, you should examine your assumptions. (A tiny sketch of the lost-update problem this invites is at the end of this comment.)

2. In part because of the dance-of-the-XMLs, when a lot of things are running at the same time Jenkins slows to a crawl, so you are limited in the number of worker nodes. At my last company that used Jenkins, they instituted rules to keep below 100 worker nodes (and usually fewer than that) per Jenkins. This led to fleets of Jenkins servers (and even a Jenkins server to build Jenkins servers as a service), and lots of wasted time for worker nodes.

3. "Everything is a plugin" sounds great, but it winds up with lots of plugins that don't necessarily work with each other, often in subtle ways. In the community this wound up with blessed sets of plugins that most people used, and then you gambled with a few others you felt you needed. Part of this problem is the choice of XMLs-as-database, but it goes farther than that.

4. The way the server/client protocol works is to ship serialized Java processes to the client, which runs them and reserializes the process to ship back at the end, rather than using something like RPC. This winds up being very fragile (e.g., communication breaks were a constant problem), makes troubleshooting a pain, and prevents you from doing things like restarting the node in the middle of a job (so you usually have Jenkins work on a launchpad machine and keep a separate device-under-test).

Some of these could be worked on, but there seemed to be no desire in the community to make the large changes that would be required. In fact there seemed to be pride in all of these decisions, as if they were bold ideas that somehow made things better.
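
Here is the tiny sketch promised in point 1 (hypothetical code, nothing to do with Jenkins internals beyond the file-versus-database idea): four threads increment a counter 500 times each, once against a flat file and once against SQLite.

    import os, sqlite3, tempfile, threading

    workdir = tempfile.mkdtemp()

    # 1) Flat-file state: read, modify in memory, write back. Between one thread's
    #    read and its write, another thread may do the same, so increments get lost.
    state_file = os.path.join(workdir, "counter.txt")
    with open(state_file, "w") as f:
        f.write("0")

    def bump_file(n):
        for _ in range(n):
            with open(state_file) as f:
                value = int(f.read() or 0)      # "or 0" guards a mid-truncate read
            with open(state_file, "w") as f:
                f.write(str(value + 1))

    # 2) Database state: the same increment as a single statement; the engine
    #    serializes writers, so nothing is lost.
    db_file = os.path.join(workdir, "state.db")
    with sqlite3.connect(db_file) as db:
        db.execute("CREATE TABLE counter (value INTEGER)")
        db.execute("INSERT INTO counter VALUES (0)")

    def bump_db(n):
        conn = sqlite3.connect(db_file, timeout=30)  # one connection per thread
        for _ in range(n):
            with conn:                               # commit each increment
                conn.execute("UPDATE counter SET value = value + 1")

    for target in (bump_file, bump_db):
        threads = [threading.Thread(target=target, args=(500,)) for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()

    print("file counter:", open(state_file).read(), "(2000 expected; usually fewer)")
    with sqlite3.connect(db_file) as db:
        print("db counter:  ", db.execute("SELECT value FROM counter").fetchone()[0])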


I am not sure if it actually stops the program, but it does at least stop it from printing, so for anything that gives feedback on stderr/stdout you are at least pausing the main thread. I have a mostly non-threaded program that this happens to, and it does not continue to send messages to other systems until I un-pause it.


Ctrl-Z suspends the program in most UNIX shells. ("fg" to resume)

Ctrl-S may or may not end up stopping the program, depending on how much it's printing, and how much output buffering there is before it blocks on writing more.
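
A quick way to see the difference for yourself (a minimal sketch): run this in a terminal and try Ctrl-S / Ctrl-Q versus Ctrl-Z / fg.

    import itertools, sys, time

    for i in itertools.count():
        # Ctrl-Z sends SIGTSTP and suspends the whole process immediately; "fg" resumes it.
        # Ctrl-S (XOFF) only tells the terminal to stop accepting output; the loop keeps
        # running until this write fills the tty/stdio buffers and blocks. Ctrl-Q (XON)
        # lets it drain again.
        sys.stdout.write(f"tick {i}\n")
        sys.stdout.flush()
        time.sleep(0.1)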

