I like programming so I'd probably just keep doing that.
If I didn't have to work in order to live I'd probably spend more time sailing, playing music, and being with my family.
But I'd still be programming, though the kind of programming I'd do would be focused on my interests rather than the interests of the businesses and shareholders that employ me.
Hamming was also writing from a highly privileged position. He was able to work at Bell Labs for the majority of his career. That just doesn’t exist today.
The Art of Doing Science and Engineering is a great book but it needs context. The last edition was released in 1994. Programmers had a lot of labour power back then.
Today though? The median house costs more than a third of the median income. Inflation has raised costs of living to unsustainable levels. And for programmers there have been hundreds of thousands of layoffs since 2023 and a low number of job openings.
I don’t think it’s unreasonable to take what job you can get or stay in a job you don’t care for until the trade winds return.
This is a framing issue. You can't control the times, but good advice is applicable in good and bad times. If Hamming was operating from a "good time", it should follow that his policies are also applicable in the "bad times".
His advice to "work on the world's hardest problems" was spoken to people who had worked their way past the initial difficulties. The general advice to "move towards important problems", which is precisely the same thing, applies in good and bad times, and is very likely to produce in you a valuable expertise.
Personally, I find the advice useful. Most who provide a framing for causes of success either do not place it in relation to anything, or relate it primarily to their own situation, and their argument becomes susceptible to interpretation as survivorship bias. Some try to extend their argument to cover more cases, but can be seen as overconfident based on limited experience. It's hard for one writer to "prove" what general success rules are.
It's not about good or bad times. All compasses are broken so any particular direction you believe you are walking deliberately might as well be uniformly random.
"Direction" and "design" are probably the wrong metaphors for careers.
I think it is better for your mental health to see yourself as having some agency. You certainly have some, though how much we can debate. But saying something like "all compasses are broken" sounds so defeatist that I worry you are experiencing depression.
It seems odd to me that you place so much value on career agency that you would infer someone's broader mental health from it. I'm not saying this isn't a norm in many cultures, but I'd like to hear your argument for supporting it.
> Hamming was also writing from a highly privileged position.
Hamming had a lot of career capital. He was the only person in the world with his track record. If you needed his kind of research/output/teaching/etc, he was the person you needed.
Cal Newport talks a lot about this. Great books.
Have something [unique/valuable] to offer and you'll be surprised how many doors it opens. Yes it takes time to stairstep your way there. A fresh grad has less career capital than a seasoned engineer with a track record of building billion dollar companies.
>I don’t think it’s unreasonable to take what job you can get or stay in a job you don’t care for until the trade winds return.
Having a goal does not seem at all at odds with weathering a storm. Your choices can then be what you learn in your free time, or what horizontal moves you make at that job, or which people you get closer to, for example.
When I got into the industry over 20 years ago, it was unheard of for an IC software engineer to retire early. Unless you got extremely lucky like working at Microsoft or Apple pre-IPO, you just weren't making the kind of money big law or doctors made.
That is not the case now. Yes the competition is incredibly fierce but the pay has skyrocketed.
There were a lot of factors that went into making salaries skyrocket. One of those was leverage: there was more demand for skilled programmers than there were programmers available. You also had the ZIRP era from 2008-2021ish. If you could write fizz buzz and breathe, you could get a good-paying job.
In the 90s inflation-adjusted salaries were still rather high. A 75k USD salary in 1995 is roughly 150k USD today. And the median house was less than a third of your income. And in the 90s there was even more demand for programmers.
The early 2000s were a bit rough unless you were insulated inside Google and big tech.
But 2025 is a very different landscape. I’ve talked to lots of highly talented developers who’ve been consistently employed since the early 2000s who have been on the job search for 9 months, a year.
It’s one thing to have a goal for one’s career but it’s not like you can wait around to find that perfect opportunity forever, right?
Sometimes you have to find something and work. It might not fit into your plans for your career, but it might provide you with the income you need to keep your family afloat and maybe let you indulge in a hobby.
> … but it’s not like you can wait around to find that perfect opportunity forever, right?
> Sometimes you have to find something and work …
Rather than waiting for a perfect choice, I read Hamming as reminding us that we are making choices all the time and cannot avoid doing so. Even not choosing, e.g., staying in a less-than-ideal role, is a choice. Given that we have no choice but to choose, Hamming suggests knowing up front where we want to go in the long term and biasing choices in that general direction.
Swizec mentioned Cal Newport elsewhere[0], and Newport’s recommendations around lifestyle-centric career planning provide an interesting bridge between your comments about occasionally needing to weather a storm and Hamming.
Some view titles, particular projects, or certain roles to be worthwhile goals in themselves. “I just graduated law school, so I want to make partner at a big NYC law firm” is a goal that a motivated new attorney might set. Does that career goal serve her if she despises traffic, subway travel, and apartment living? Newport advocates beginning with a vision of an ideal lifestyle and working backward from there by setting career goals to achieve the desired lifestyle.
Where he may be in a conflict with Hamming is warning people about what he calls the grand goal theory, of which the fresh law school grad aiming at partner shows the pitfalls. Hamming’s advice will help you go far. Newport warns that if you’re going to go far, be sure it’s in the direction you want to go.
In the case you mentioned of someone who is long-term unemployed, having a job that produces income is certainly nearer to any Hamming career goal and any Newport ideal lifestyle than that person’s present circumstance of draining savings or, worse, accumulating debt for basic living expenses.
>In the 90s inflation-adjusted salaries were still rather high. A 75k USD salary in 1995 is roughly 150k USD today. And the median house was less than a third of your income. And in the 90s there was even more demand for programmers.
Right, but how many programmers were making even 150k in today's dollars back then, even if they were 10x geniuses? I don't think even high-level ICs at IBM or Microsoft were making that much. Even inflation-adjusted, that's lower than the median at FAANG these days.
Look man, life was unironically just better for everyone back in the 90s. Minimum wage, median wage, highest earners, if you do the math everything was cheaper and wages were comparatively higher. Gas, food, electricity, housing, it was all cheaper. There were fewer regulations and less bureaucracy.
The buying power of a programmer in the 90s was much, much higher than an average programmer today.
>Look man, life was unironically just better for everyone back in the 90s.
It was not. Programmers were not buying Porsches and living in luxury neighborhoods or retiring early.
Watch Office Space. Being a programmer was a low status, averagely paying job.
Was life better back in the 90s for the average programmer? Maybe? Housing was certainly cheaper, I'll give you that. But for exceptional engineers was it better?
Did programmers show up to work to have a barista make them a gourmet coffee, have catered lunches and free massages, all the while getting paid hundreds of thousands of dollars extra per year in RSUs? I don't think so.
There's no way an exceptional engineer had a better quality of life in the 90s than they would today. There was no FAANG, no decacorns, no big tech giving anywhere near as many perks or as much comp. It just wasn't comparable.
I think the parent sufficiently qualified their take to mean how much an average person could realistically expect to make in inflation-adjusted dollars. Whether "exceptional" engineers were pulling numerically similar salaries or not seems like a bit of a strawman.

Thankfully, the day-to-day conditions of cube farms in grey-space aren't as common today, but it's not wildly different for a majority of people. Trade the cube for a standing desk, and it's often still the same grey office in a tower somewhere, working on something boring.

After inflation and accounting for the cost of housing, the numerically higher salary doesn't mean a whole lot, especially since it's often theoretical money and tax brackets haven't vastly changed. Our needs as people haven't changed; we don't suddenly need a Porsche that we can afford instead of a house that we can't. Some things have become much cheaper in inflation-adjusted dollars, which is great, but if they hadn't, we simply wouldn't have the money for them.
My point is that 20+ years ago, there was frustration here (and elsewhere) that even if we weren't 10x engineers, being 3x engineers could not get us 3x the compensation.
That changed in the last 20 years for the better. People who had the work ethic and aptitude to become medical doctors or lawyers or management consultants no longer had to sacrifice compensation if they loved tech.
This is notable and worth calling out, and pointing out it wasn't always like this.
Ya, I suppose that is a fair point, albeit a tangential, somewhat luck-based one. Additionally, that ceiling has likely been raised across technical professions for non-men as well, for those who have the potential and drive to be at the top of whatever ladder. I say tangential because while the ceiling was raised significantly, the parent's argument was that it's no longer nearly as necessary to be the 1% fortunate genius landing a dream gig, which is still true.
Okay sure, life was unironically just better for everyone back in the 90s except for the highest paid exceptional programmers.
Do you feel better now? Will you admit the economy is bad, and has been getting worse for 50 years straight for absolutely everyone (except the most exceptional engineers)?
Comparatively cheaper? No. Americans could afford a lot of things, but the average American home looked like Malcolm in the Middle, and it wasn't much fancier for the upper class. Meanwhile in 2025, people have immense, well-furnished kitchens (I'm European, so I always notice that in abs-training-bro-youtube-slop; I'm not talking about influencers here) and living rooms, and order food deliveries all the time. Perhaps some Americans could access the number of flights we saw in movies like Die Hard (flying from NYC to LA to see a wife), but that was unimaginable for Europeans. We're seeing wealth levels that were unimaginable, and global poverty has receded so much that the UN overhauled their definition to redirect their efforts towards human rights rather than hunger.
No, the average 30-year-old American owns far less than the average 30-year-old American did in the 50s, 60s, 70s, 80s, and 90s. Owning a home in a safe community is what is most important, and most young men can't seem to get that. Things are getting worse.
Heck, I'll lower the bar! I'm (mid-30s American) not so worried about 'safe'. A certain amount of danger is fine. Desirable, even, if it meant I could live in my home town, not in an unfamiliar city.
I could buy half of a house, right now, cash. I don't, because the moment I do, I'll be forced to sell/move/whatever. Again. Where I am [for work] and where I want to be are forever at odds. Leaders have found it fashionable to bundle us all together. Spin the wheel and see if we hit RTO, let's bid against each other [again].
All to say, I'd give half my salary to never negotiate it or my location, again. Clearly not an option, so what to do? Endure and save. You won't see me buying toys or status symbols, that's for sure. At Will employment, meet At Will spending.
> Programmers had a lot of labour power back then.
Huh? Back then, there was very little glamor to software engineering. Computing just wasn't serious enough. There was relatively little competition, salaries were unremarkable, and sure, you could land a job for life, but that still exists today. If you are an IT guy for a lumber mill, a regional ISP, or a grocery store, it's not going to be as cutthroat as Big Tech. It's just that you're not gonna be making millions.
We're pretending that this type of cozy tech job doesn't exist anymore, but it does. It just doesn't come with IPOs and RSUs.
I suppose it could be interpreted either way, but your interpretation probably makes more sense in context. In reports about housing costs in Canada, they tend to use both the ratio of median servicing cost to median income, like you have, and the home price-to-income ratio, which is often a multiple of annual income, for whatever reason.
Further down in the thread there's mention of the median-housing-cost-to-income ratio for programmers in the 90s, and in that situation it seems like the absolute total cost of a house was a fraction of annual income. So it could go either way, but it would be much tougher now for your annual salary to surpass the cost of a house unless it's deep in the boonies.
It ultimately doesn't change the advice. Strongly deciding I wanted a career change led me to putting in some extra time and tripling my income. It's easier if you can reduce obligations and noise and focus on what matters to optimize for whatever you want. It may not be easy, but you do have some power to alter your trajectory.
I read/skimmed it this year. I don't feel it was worth it. The first few and last few chapters have some nuggets but for the most part it's pretty highly technical stuff that feels not super relevant or interesting for a software engineer today (in my opinion).
There's an anecdote from one of Dijkstra's essays that strikes at the heart of this phenomenon. I'll paraphrase because I can't remember the exact EWD number off the top of my head.
A colleague was working on an important subsystem and would ask Dijkstra for a review when he thought it was ready. Dijkstra would have to stop what he was doing, analyze the code, and would find a grievous error or edge case. He would point it out to the colleague, who would then get back to work. The colleague would submit his code for review again, and this could carry on enough times that Dijkstra got annoyed.
Dijkstra proposed a solution. His colleague would have to submit with his code some form of proof or argument as to why it was correct and ready to merge. That way Dijkstra could save time by only having to review the argument and not all of the code.
There's a way of looking at LLM output as Dijkstra's colleague's code. It puts a lot of burden on the human using this tool to review all of the code. I like Doctorow's mental model of a reverse centaur. The LLM cannot reason and so won't provide you with a sound argument. It can probably tell you what it did and summarize the code changes it made… but it can't decide to merge those changes. It needs a human, the bottom half of the centaur, to do the last bit of work here. Because that's all we're doing when we let these tools do most of the work for us: we're here to take the blame.
And all it takes is an implementation of what we’re trying to build already, every open source library ever, all of SO, a GW of power from a methane power plant, an Olympic pool of water and all of your time reviewing the code it generates.
At the end of the day it's on you to prove why your changes and contributions should be merged. That's a lot of work! But there are no shortcuts. Luckily you can reason while the LLMs struggle with that, so use that advantage when choosing to use such tools.
Princess Bride, small excerpt: "Vizzini: You'd like to think that, wouldn't you? You've beaten my giant, which means you're exceptionally strong, so you could've put the poison in your own goblet, trusting on your strength to save you, so I can clearly not choose the wine in front of you." And the dialog goes on, with Vizzini dominating it and arguing with himself. In the end it came down to a coin toss: he picked up a goblet, drank, and died.
Anyone who allows a 10K-LOC LLM-generated PR to be merged without reviewing every single line is doing the same thing: a coin toss.
It's worse than it was for Vizzini: the LLM doesn't even care about the truth. It was trained on all of the sloppy code we could find. Even if he reads every line of code, he could miss the non-obvious bugs and expire anyway when management gets wind that it was his LLM-generated code that led to the PII breach which cost them 10% of their share value in a week.
At least a liar is trying to deceive you. Vizzini’s entire exercise is moot.
When it comes to programming I find speed is of dubious value.
It comes when you already know what you're doing. And if you're an engineer, according to Hamming, you should know what you're doing.
But then you may not be tackling innovative or interesting problems. Much of software development is research: understanding customers, patterns, systems and so on. You do not know what you are doing, it’s more akin to science.
Then in order to go fast you must sacrifice something. Most people lose the ability to spot details or consider edge cases. They make fast and loose assumptions. And these trade-offs blow up much later when the system comes under pressure.
It’s good to iterate and throw out bad ideas quickly for sure. You just have to know what area you’re in. Are you at the stage where you’re an engineer or are you doing more science related work?
You're not always doing something groundbreaking. Sometimes you're just building a thing that needs to exist. People who build houses don't obsess over this shit, they just build a house and then someone moves into it.
I wage a constant battle of motivating myself because my neurology craves novel sources of dopamine but my job is doing the needful 90% of the time.
Yeah, this is very real, and I think it can inflict paralysis on programmers with a certain level of experience and 'I know better' syndrome. Or even an 'it _might_ be better' type of syndrome.
Sometimes, you might really know better, and it doesn't matter. You build the thing with the wrong tools, with a crummy framework, with a product at the end that will probably not succeed. But that is okay, hopefully you learn something and your team and your org learn something.
And if not, that is okay; sometimes it's just a job and you need a paycheck and a place to be from 9 to 5.
This is why I love the bootstrapping stories here on HN.
Like one anecdote where they were building an "app" for automatic hotel reservations IIRC.
The "app" was a form that fed into a Google Sheet, where the founders themselves took the data and called the hotels.
When they got some income, they automated small bits of the process one by one.
Sometimes it's good to just have _something_ out there and see if there's a market for it before carefully crafting a beautiful piece of software nobody will ever use. It all depends on whether you're doing it to solve a problem for someone or for the process of writing code. Both are perfectly valid reasons.
It's totally fine to prototype, but you need to take care when you try to morph a prototype into a real product.
Very often people just take the shortest path from A to B, every single time. So you start with a reasonably shoddy prototype, but then you add some small feature, repeat 1000 times, and now you still have a shoddy prototype, except it's actually a pretty big project, and it's all completely cursed because at no point did anyone do any actual software engineering. And of course now it's too big to rewrite or fix, so the only way forward is to keep building on this completely broken mess.
At some point you need to scrap the prototype and replace it with something proper, or at least have a solid plan for how you're going to morph the prototype into something proper. This is often challenging and time consuming work, so a lot of developers tend to never really do it. They just execute the shortest path to get each new feature implemented over and over for years while number of bugs keeps increasing and velocity keeps decreasing because nothing makes sense, everything is more difficult than it should be etc.
> People who build houses don't obsess over this shit
Because they have built the same house 20 times already. And this exact house has been built 2 million times before. They know the requirements and how to do it, they know what can go wrong and how, and know how long it will take.
It makes a lot of sense to build the same physical house again and again, but if you are doing the same for software, you are definitely doing it wrong. Thus, typically each software development project is bespoke and has a lot of unknowns.
I've definitely built the same piece of software hundreds of times over, probably thousands. I've even set up CI to automate the build process.
The problem is that the construction equivalent of a software developer is not a tradesman but an architect. Programs are just blueprints that tell the compiler what to build.
There's a thing on YouTube these days where house inspectors basically shame builders. There's one guy who says "I can't tell you who the builder is" while walking past the builder's sign, then proceeds to show how the house is completely fucked. Real sloppy work. Brick wall with no concrete so you can literally push it over with one hand. Tiles with voids underneath. Door and window frames cracked. Shower leaking water. Roof tiles broken. Roof vents loose, sometimes already blown away by the wind. Missing/sloppy insulation. Broken roof trusses.
Maybe the people who build houses should obsess a bit more over this shit.
Not just speed, but cutting costs too. The same issue is popping up globally.
In Finland, (smart) people are buying older houses that were built with good old methods. They are easier to maintain and the failure points are known.
New builds have the weirdest basic issues because of cost cutting: sound carries way too much, the air quality is shit because nobody knows how, or bothers, to design the air circulation properly, etc.
But every time I suggest to a team I'm working on that we should try modelling the problem with TLA+… people are less than enthusiastic.
These are all great tips. Especially starting with the simplest, most abstract model and extending it with refinement if you need to.
That basically means that you write your first spec as abstractly and simply as possible with as little detail about the actual implementation as you can.
If details matter, you use the first spec as a module and write a new spec with the added details. The new spec is then related to the first by implication: if the detailed spec holds, the abstract one holds too.
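To make that concrete, here's a minimal sketch of a refinement (toy modules I made up for illustration, not from any real spec):

    ---- MODULE AbstractTask ----
    \* The most abstract spec: a task is pending, then done.
    VARIABLE state
    Init == state = "pending"
    Done == state = "pending" /\ state' = "done"
    Spec == Init /\ [][Done]_state
    ====

    ---- MODULE DetailedTask ----
    \* A refinement that adds an intermediate "running" phase.
    VARIABLE phase
    Init   == phase = "pending"
    Start  == phase = "pending" /\ phase' = "running"
    Finish == phase = "running" /\ phase' = "done"
    Next   == Start \/ Finish
    Spec   == Init /\ [][Next]_phase

    \* Map the detailed state onto the abstract one. TLC can then check
    \* the implication: if this spec holds, the abstract one holds too.
    Abs == INSTANCE AbstractTask
           WITH state <- IF phase = "done" THEN "done" ELSE "pending"
    THEOREM Refines == Spec => Abs!Spec
    ====

The Start step maps to a stutter of the abstract spec and Finish maps to its Done step, which is the whole refinement argument in miniature.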
Anyway, when I learned TLA+ I was kind of mad that it and systems like it aren't part of the standard curriculum and common practice.
“We built this distributed database that guarantees quorum if you have at least 3 nodes!” someone will excitedly claim. And when you ask how, you get an informal paper and 100k lines of C++ code. Wouldn't it be nicer if you just had 1k lines of plain old pure maths?
And pure maths that can be exhaustively checked by the computer? It’s not as perfect as a proof but it’s great for a lot of practical applications.
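For a taste of what that looks like, the key quorum property fits in a few lines of TLA+ (a hypothetical fragment, not from any real database's spec):

    ---- MODULE Quorums ----
    EXTENDS FiniteSets
    CONSTANT Nodes

    \* Majority quorums: any subset containing more than half the nodes.
    Quorums == {Q \in SUBSET Nodes : 2 * Cardinality(Q) > Cardinality(Nodes)}

    \* The property those 100k lines of C++ silently rely on:
    \* any two quorums share at least one node.
    QuorumsIntersect == \A Q1, Q2 \in Quorums : Q1 \cap Q2 # {}
    ====

TLC can exhaustively verify QuorumsIntersect for any small, concrete set of nodes you give it.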
I agree with everything you've said, and about refinements in particular. TLA+ newcomers often think of refinement as an advanced tool, but it's one of the most pragmatic and effective ways to specify.
But I'd like to add some nuance to this:
> And pure maths that can be exhaustively checked by the computer? It’s not as perfect as a proof but it’s great for a lot of practical applications.
1. A run of a model checker is exactly as perfect as a proof for the checked instance (which, indeed, is a weaker theorem than one that extends to an unbounded set of instances). In fact, it is a proof of the theorem when applied to the instance.
2. A proof is also not perfect for real software. A proof can perfectly convince us of the correctness of an algorithm, which is a mathematical construct, but a software system is not an algorithm; it's ultimately a physical system of silicon and metal elements that carry charge, and a physical system, unlike an algorithm, cannot be proven correct for the simple reason that it's not a mathematical object. Virtually all correctness proofs assume that the hardware behaves correctly, but it only behaves correctly with high probability. My point is only that, unlike for algorithms or abstract theorems, there can be no perfect correctness in physical systems. Increasing the confidence in the algorithm beyond the confidence in the hardware (or human operators) may not actually increase the confidence in the system.
Absolutely, I was waving my hands a lot there. Thanks for adding that.
And I'd also add the small-scope hypothesis from Software Abstractions: if there is an error in your spec in a large instance, then it is likely also going to appear in a small one. Most errors can be caught by small instances of a model.
I only mention it because the common retort, when people find out that the model checker only exhaustively checks a limited instance, is that software is complex and you can't model everything.
It’s true. But we don’t need models that account for the gravitational waves in our local region of space to get value from checking them. And neither do most programmers when they use unit tests or type checkers.
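Concretely, "checking a small instance" just means handing TLC a tiny, fixed set of values and letting it enumerate every reachable state. A hypothetical TLC config for some distributed-database spec (the names Spec, TypeOK, and EventuallyConsistent are made up here) might look like:

    SPECIFICATION Spec
    CONSTANTS
      Nodes  = {n1, n2, n3}
      Values = {v1, v2}
    INVARIANT TypeOK
    PROPERTY  EventuallyConsistent

Three nodes and two values sounds laughably small, but per the small-scope hypothesis that's exactly the size at which most spec bugs first show up.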
And proofs… I think I might disagree but I don’t have the professional experience to debate it fully. I do know from a colleague who does proof engineering on a real system written in C++ that it’s possible however… and likely more difficult than model checking.
I find the same. Even those who are interested in it in theory hit a pretty unforgiving wall when they try to put it in practice. Learning TLA+ is way harder than learning another programming language. I failed repeatedly while trying to "program" via PlusCal. To use TLA+ you have to (re)learn some high-school math and you have to learn to use that math to think abstractly. It takes time and a lot (a lot!) of effort.
Now is a great time to dive in, though. LLMs take a lot of the syntactical pain out of the learning experience. Hallucinations are annoying, but you can formally prove they're wrong with the model checker ^_^
I think it's going to be a learn-these-tools-or-fall-behind thing in the age of AI.
I think the "high school math" slogan is untrue and ultimately scares people away from TLA+, by making it sound like it's their fault for not understanding a tough tool. I don't think you could show an AP calculus student the equation `<>[](ENABLED <<A>>_v) => []<><<A>>_v` and have them immediately go "ah yes, I understand how that's only weak fairness"
Oh, hey -- you're that guy. I learned a lot of what I know about TLA from your writings ^_^
Consider my behavior changed. I thought the "high school math" line was an encouraging way to sell it (i.e., "if you can get past the syntax and new way of thinking, the 'math' is ultimately straightforward"), but I can see your point, and how the perception would be poor when they hit that initial wall.
Fatalism is a part of the language of fascism. Statements like, "it is inevitable," are supposing that we cannot change the future and should submit to what our interlocutor is proposing. It's a rhetorical tool to avoid critique. Someone who says, "programming as a profession is over, AI will inevitably replace developers so learn to use it and get with the program," isn't inviting discussion. But this is not how the future works and TFA is right to point out that these things are rarely ever, "inevitable."
What is inevitable? The heat death of the universe. You probably don't need to worry about it much.
Everything else can change. If someone is proposing that a given technology is, "inevitable," it's a signal that we should think about what that technology does, what it's being used to do to people, and who profits from doing it to them.
This seems to assume that these non-technical people have the expertise to evaluate LLM/agent generated solutions.
The problem with this tooling is that it cannot deploy code on its own. It needs a human to take the fall when it generates errors that lose people money, break laws, cause harm, etc. Humans are supposed to be reviewing all of the code before it goes out, but your assumption is that people without the skills to read code, let alone deploy and run it, are going to do it with agents without a human in the loop.
All those non-technical users have to do is approve that app, manage to deploy and run it themselves somehow, and wait for the security breach that loses them their jobs.
I think you're underestimating (1) how bad most B2B is (from a bug and security vulnerability perspective) & (2) how little B2B companies' engineers understand about how their customers are using their products.
The frequency of mind-bogglingly stupid 1+1=3 errors (where 1+1 is a specific well-known problem in a business domain and 3 is the known answer) cuts against your 'professional SaaS can do it better' argument.
And to be clear: I'm talking about 'outsourced dev to lowest-cost resources' B2B SaaS, not 'have a team of shit-hot developers' SaaS.
The former of which, sadly, comprises the bulk of the industry. Especially after PE acquisition of products.
Furthermore, I'm not convinced that coding LLMs + scanning aren't capable of surpassing the average developer in code security. Especially since it's a brute force problem: 'ensure there's no gap by meticulously checking each of 500 things.'
Auto code scanning for security hasn't been a significant area of investment because the benefits are nebulous. If you already must have human developers writing code, then why not have them also review it?
In contrast, scanning being a requirement to enabling fast-path citizen-developer LLM app creation changes the value proposition (and thus incentive to build good, quality products).
It's been mentioned in other threads, but Firebase/Supabase-style 'bolt-on security-critical components' is the short-term solution I'd expect to evolve. There's no reason from-scratch auth / object storage / RBAC needs to be built most of the time.
I’m just imagining the sweat on the poor IT managers’ brow.
They already lock down everything enterprise wide and hate low-code apps and services.
But in this day and age, who knows. The cynical take is that it doesn’t matter and nobody cares. Have your remaining handful of employees generate the software they need from the magic box. If there’s a security breach and they expose customer data again… who cares?
That sweat doesn't spare them from dealing with nightmare fly-by-night vendors for whatever business application a department wants.
Sometimes, the devil you know is preferable -- at least then you control the source.
Folks fail to realize the status quo is often the status quo because it's optimal for a historical set of conditions.
Previously, what would your average business user be able to do productively with an IDE, weighed against the security risks? And so that's where the point got established.
If suddenly that business user can add substantial amounts of value to the org, I'd be very surprised if that point doesn't shift.
Yeah. I used to manage a team that built a kind of low-code SaaS solution for several big enterprise clients. I sat in on several calls with our sales people and the customers' IT departments.
They liked buying SAP or M$ because it was fully integrated and turnkey. Every SaaS vendor they added had to be SOC2-certified, authenticate with SAML, and each integration had to be audited… it was a lot of work for them.
And we were highly trained, certified developers. I had to sign documents and verify our stack with regulatory consultants.
I just don’t see that fear going away with agents and LLM prompts from frontline workers who have no training in IT security, management, etc. There’s a reason why AI tech needs humans in the loop: to take the blame when they thumbs up what it outputs.
Is it still being maintained? There used to be a game jam, Octojam, that was stopped years ago[0]. Seems mostly out of a lack of interest? Would be cool if it revived.
It inspired me to start zig8[1], my own CHIP-8 emulator. It's not ready for prime time yet but it's getting there. When it's ready I hope it will have a visual debugger and feel good: fast, better shaders, better sound, good defaults, etc.
CHIP-8 is a neat system. If you're interested in emulation it's a great place to start in my humble opinion. It's simple enough that you can finish it before deciding whether you like writing emulators.
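To give a sense of scale: the whole machine is sixteen registers and 4K of memory, and every instruction is two bytes, so the fetch-decode loop fits on a screen. A rough sketch in C (only a few opcodes shown, the rest elided):

    #include <stdint.h>

    /* Minimal CHIP-8 machine state (a sketch, not a full emulator). */
    typedef struct {
        uint8_t  mem[4096]; /* 4K RAM; programs load at 0x200 */
        uint8_t  V[16];     /* registers V0..VF */
        uint16_t I;         /* index register */
        uint16_t pc;        /* program counter */
    } Chip8;

    /* Execute one instruction: fetch two bytes, decode by top nibble. */
    void step(Chip8 *c) {
        uint16_t op = (uint16_t)((c->mem[c->pc] << 8) | c->mem[c->pc + 1]);
        c->pc += 2;
        switch (op >> 12) {
        case 0x1: /* 1NNN: jump to address NNN */
            c->pc = op & 0x0FFF;
            break;
        case 0x6: /* 6XNN: set VX = NN */
            c->V[(op >> 8) & 0xF] = op & 0xFF;
            break;
        case 0x7: /* 7XNN: add NN to VX (no carry flag) */
            c->V[(op >> 8) & 0xF] += op & 0xFF;
            break;
        case 0xA: /* ANNN: set I = NNN */
            c->I = op & 0x0FFF;
            break;
        default:  /* ~30 more opcodes in a complete implementation */
            break;
        }
    }

Thirty-odd opcodes like these, a 64x32 monochrome framebuffer, and two timers, and you've got the whole system.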
Hopefully interest in CHIP-8 will pick up again; it's a neat bit of history and a cool little system.
I'm running a jam that starts Sunday called Langjam Gamejam that might interest you, then. You have to make your own language and then use that to make a game. We have >120 people signed up and I'm expecting quite a few of the submissions to be similar to CHIP-8 or PICO-8.
The Peripheral by William Gibson.
Enshittification by Cory Doctorow.