Okay, but… most programs are written in Python or Rust or something, where invoking library functions is a lot safer, more ergonomic, more performant, and more common than spawning a subprocess and executing a program in it. Like, you can't really ignore the human expectations and conventions that are brought to bear when your code is run (the accommodation of which is arguably most of the purpose of programming languages).
When you publish a library, people are going to use it more liberally and in a wider range of contexts (which are therefore harder to predict, including whether a given violation requires human intervention).
My understanding is that scientific research has a dual problem, where the number of students needed to carry out existing professors' research is much larger than the number of junior faculty positions generally available. The result is that most trained PhDs must leave (US) academia because there are no jobs for them. In fact, I've heard scientists complain that universities owe it to students to provide more help finding a job in industry after they graduate.
Given all that, where are professors supposed to find and hire students who don't want to stay in academia themselves? I think a lot of these students wind up being aspiring immigrants, and I'm not surprised that a lot of them would also have a hard time finding a place for themselves after graduating and that many of them would leave. Also, the abstract seems to argue that the US still benefits greatly from this arrangement: "though the US share of global patent citations to graduates' science drops from 70% to 50% after migrating, it remains five times larger than the destination country share."
If the culture shifted such that a much larger proportion of research was conducted by permanent, non-faculty research employees, it would reduce the need for so many students, increase the jobs available to graduates, and create a new employment niche with a different balance of teaching/administration/research. It would basically turn "post doc" into an actual career rather than a stopover.
This would be better for everyone involved, at the admitted cost of being quite a bit more expensive. My guess is that the market would naturally converge on this equilibrium if information on job placement rates were more readily available on a per-program (or even per-lab/advisor) basis.
This isn't really a culture problem, IMO, as much as a funding one.
My group currently employs two people matching the description you give, and it does reduce the need for students (and honestly, increase productivity).
It's also by far the most stressful part of my job. Funding them involves writing multiple grants per year (because the odds of landing any particular grant are low, even with a decent hit rate) and I am constantly worried that I won't be able to keep them employed.
If one of them leaves this year, I'm not likely to replace them, simply because in the current funding environment I can't look someone in the eye and promise them a long-term position. There are so many more ways to fund a student, and those are inherently time-limited, so even if things collapse, there are ways to white-knuckle through it that there aren't for staff scientists.
The funding problem is a cultural problem though. Religious right wing politicians in the US have attacked science and education funding at every opportunity. Science and education produce ideas that are at odds with right wing religious orthodoxy, so those things must not be allowed in society.
It's not just that science contradicts orthodox religious views. It's also that humanities education and exposure to a diversity of people and thought can "deprogram" students away from traditional ways of thinking, which is a threat to traditional power hierarchies.
The funding problem is an economic demand problem. There is not enough market demand for research, particularly some... questionable research. Yes, sometimes seemingly useless research can lead to breakthroughs. No, that doesn't make it economically attractive. You are effectively doing the same thing as gambling on crypto.
A person arrives on an 18-month funded postdoc (believe me, plenty exist). They have just completed a PhD, which means they probably have a couple of papers published and maybe another one or two in the pipeline. So as they spin up their time with you, they are also finishing these papers from their previous job. By six months in they are done with that and fully onboarded to the project. So they spend six months working. But now they only have six months left on their contract. You don't have money to keep them, or perhaps your country requires you to offer a permanent contract if the position is renewed, so you cannot offer to extend their position with you. So they spend the final six months of their postdoc looking for a job. So, for 18 months of salary, you get six to eight months of work. It's unreasonable. Things need to change.
Or let's say you have a mission-critical project that must be done by a postdoc. You offer them a 3-year contract that is grant funded. It is three years because most grant agencies work on three-year cycles. The project requires a year-long commitment to building an apparatus (maybe it's a lab experiment, maybe it's training some foundation model, whatever). After that year, the apparatus can be used for science. Your postdoc comes to you in year 2, month 3 and says, well, I have been offered a faculty position at university X so I am leaving in the fall. So you get 18 months of work out of them and now cannot hire anyone else, because you only have 18 months of funding left but your country requires you to offer a minimum 24-month contract. Things need to change.
It's important to note that academics often keep projects from their former positions going at their new ones. But as soon as someone leaves for industry, this falls apart. Industrial positions expect the person to work on the project the employer specifies; companies rarely hire someone to work as an academic pursuing their own research directions.
I think the solution here is, as others have suggested, to spend more money hiring people for longer terms and at higher salaries. But we shall see if anyone listens to that advice.
Notably, even the role of the professor has drastically changed in the last few decades. The "publish or perish" paradigm has really taken over and changed the type of research being done. Higgs famously said he wouldn't make it as non-tenured faculty in today's academic culture.
Not to mention that the type of research being done has drastically changed too. There are many more projects that require wide collaboration. You're not going to do something like CERN, DESI, LIGO, or many other scientific mega-projects from a single lab, or even a single field of study.
The academic deal has changed. It used to be that by becoming a professor you were granted facilities and time to carry out your research; in return you had to help educate and foster the next generation. It was mutually beneficial. There were definitely abusers of the system, but it is generally not too difficult to tell who in your own department is trying to take advantage of it, while it is incredibly difficult to identify these people from the perspective of a university administration. There's been more centralization in university administration and I'm afraid Goodhart's Law is in full force now.
What I'd like to see is more of a return to a laissez-faire approach. It shouldn't be completely relaxed, but to summarize Mervin Kelly (who ran Bell Labs): "You don't manage a bunch of geniuses; they already know what needs to be worked on. That's what makes them experts in the first place." At the end of the day we can't run academia like a business, and it really shouldn't be one. The profits generated by academia are less direct and more distributed through society. Evaluating universities by focusing on their expenditures and direct profits alone is incredibly naive. We're better able to make less naive evaluations today, but we still typically don't (it is still fairly complex).
Your suggestion would have fewer fresh eyes to look at the problem. If the scientific enterprise were just about churning out widgets, then yes it’s better to have permanent staff.
But having a strong training pipeline for the globe is a huge plus for US prestige, and the top people are still offered jobs in faculty or industry within the country, so it’s still a net gain for the USA. But it’s brutally competitive for the individual scientists.
While I'm more skeptical than you are of the value of a string of new students coming through, as opposed to just keeping the very best students, I'm also not suggesting we mandate or force this change. I'm suggesting that we give people more information to make better-informed decisions. If students decide that they are comfortable with a sub 20% job placement rate, then great, nothing needs to change. If they aren't satisfied with that, and we decide that they were actually performing a valuable service, then it behooves society to pay them enough that they become willing to make that gamble again.
The current information asymmetry is exploitative. One of two things would happen under my proposed system: either nothing changes, because students think they are getting a good deal as is; or students decide the deal isn't worth it, which means the current system only works because the reality of the job market is being hidden from them.
I think a mix of the current system with more permanent researchers makes sense.
There is a lot of work in research that fits the permanent worker better than the fresh 22-year-old. But having that fresh talent is really beneficial to science.
> If students decide that they are comfortable with a sub 20% job placement rate, then great, nothing needs to change.
The problem is in my opinion not this low job placement rate per se (it is very easy to find out that this is the case for basically every prospective researcher). The problem rather is the "politics" involved in filling these positions, and additionally the fact that positions are commonly filled by what is currently "fashionable". If you, for some (often good) reason, did good research in an area that simply did not become "fashionable": good luck finding an academic position.
> Your suggestion would have fewer fresh eyes to look at the problem
Why? That paradigm doesn't change the influx of new students.
But the current system has a problem of training people for a job and then sending them to do something else. Even a professorship is a very different job than a graduate researcher or postdoc. Most professors do little research themselves these days, instead managing research. Don't you think that's a little odd, not to mention wasteful? We definitely should have managers, and managers with research backgrounds themselves, but why not let people continue honing their research skills?
> it’s brutally competitive for the individual scientists
It is. But this is also a social choice dictated by how much we as a country want to fund research.
That's interesting; I don't know if I have ever seen this kind of labor-market logic applied to science before. Is this an agreed-upon idea? In my mind, science and the kind of focused research it entails is kind of definitionally distinct from something like "innovation." Like, frankly, yes, I want a stream of widgets, if that means consistent units of research done to contribute to an important area/problem, which are reviewed and judged by peers.
Like what's even the alternative? We want a Steve Jobs of science? That's really what we are going for?
Are you suggesting science and innovation are distinct?
Scientific progress is largely driven by the “Steve Jobs” of science.
Only a tiny fraction of papers remain relevant. So that means the quality of the average paper doesn’t matter as much as the quality of the best paper.
There is actually a lot of debate as to whether scientific discovery is driven by "heroes and geniuses" (as you argue) or by multiple people simultaneously and independently coming up with the same idea [1], often called "multiple discovery". Certainly both have occurred many times over.
That said, multiple discovery seems to be more common nowadays due to the rapid diffusion of information, which means that most people are operating in roughly the same information environment (initial conditions) when they start their research. It is interesting how often multiple discovery happens when you start to look closely at this.
What you’re describing sounds a lot like the Department of Energy national labs. They have (or had) many permanent-track research roles without teaching obligations, where scientists can have long stable research careers.
The problem, as always, is funding. In the US, the federal govt is essentially the only “customer” of basic research. There’s some private funding, often from kooky millionaires who want someone to invent a time machine, but it’s the exception that proves the rule. Universities sometimes have pure research roles, but they’re generally dependent on the employee paying themselves with a constant stream of grants. It’s a stressful and precarious position.
What all is tuition paying for anyway? It's not paying for the professors, since they have to fund themselves with grants. It's not paying for research overhead, because that also gets claimed from grants. It's not paying for extracurriculars, since those get funded by donations, student contributions, and revenue. It's not paying for new facilities, since those all get named after donors.
It certainly doesn't seem to be paying much for postdocs, grad students (who are either contributing their own tuition or having it contributed by someone else anyway), adjuncts, or other non-professor faculty, since they famously make starvation wages.
I'm being a bit facetious, since tuition has "transparent" line items stating how much goes to what, but university revenue streams are a bit baffling. Mountains of money go in, and mountains of money go out, but the two seem to have a very indirect relationship at times.
And I know, the common answer is that it goes to some nebulous "administration", but the executive administrative staff, while reasonably well compensated, make up a pretty small portion of the overall budget, and the rest of the admin seems like it could more reasonably be split into the actual services and departments they're administering, which, again, seem adequately funded between grants, donors, and tuition. So I'm not clear on what all this ambiguous "administration" is that's not executive staff and not tied directly to, say, health insurance (which gets paid for as part of tuition) or research (grants). What are they administering??
To a large extent, I think this could be solved by labs having more long-term permanent research staff (technicians, data analysts, scientists) and reducing the number of PhD students. Many students would gladly stay on in such a position instead of leaving, so it increases job opportunities. It would also improve the quality of the science, because permanent staff would carry more historical knowledge, in contrast to the current situation where students constantly rotate in and out with somewhat messy hand-offs. The students could then also focus more on scholarly work, planning and overseeing research execution with the team. The problem is that the incentives are aligned to allocate all lab tasks to students, not long-term staff. I think we could change this through the requirements and structure of science funding mechanisms, however, since ultimately that's the source of the incentives.
> much larger than the number of junior faculty positions generally available
Expanding on this a bit, insight credited to bonoboTP: in a steady state, junior faculty positions only open up at the rate at which current faculty retire. But each faculty member is expected to train dozens of students who are all in principle qualified for such jobs. Therefore the vast majority, let's say 95%, of PhD graduates have to take industry jobs; there is no way around it. But this does not seem to be the goal of the 95%, hence the incredibly tight job market. Returning to their home country for a faculty job acts as another release valve, but sooner or later those positions will be filled as well, except in countries whose university systems are still rapidly expanding.
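To make the steady-state arithmetic concrete, here's a toy calculation; the career length and graduation rate below are made-up illustrative numbers, not data:

```python
# Toy steady-state model of the faculty pipeline (illustrative numbers only).
career_years = 30       # assumed length of one faculty career
grads_per_year = 1.0    # assumed PhD graduations per professor per year

grads_per_career = career_years * grads_per_year  # ~30 graduates trained
openings_per_career = 1                           # one retirement frees one slot

placement_rate = openings_per_career / grads_per_career
print(f"steady-state faculty placement rate: {placement_rate:.1%}")  # ~3.3%
```

Even with much more generous assumptions the rate stays in the single digits, which is roughly where the "95% must leave" figure comes from.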
The tenure system is incredibly broken as a result. Ideally, I think there needs to be more non-faculty careers available for PhD graduates either outside or inside academia. After all, there is clearly some value in the work a PhD student does, otherwise they would not be paid. Perhaps we can have public or semi-public research institutions that hire these scientists for actual development. Most likely this will require an upstream incentive change so that grants are awarded to these newly minted organizations.
Universities charge a large overhead in part to cover the "tuition" for the PhD students, which is really a meaningless number since it's taken out of the same check they pay the rest of your funding from. If we just stripped out this part and gave most of it to the scientist, it should be economically viable as a salary.
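As a rough illustration of that last point, here's a hypothetical budget; every figure is a made-up placeholder, not any real university's numbers:

```python
# Hypothetical grant line items for one PhD student (illustrative only).
stipend = 35_000   # assumed annual stipend actually paid to the student
tuition = 45_000   # assumed "tuition" charged back to the same grant

# If the tuition line went to the person instead of the university's
# internal ledger, total compensation would already look like a salary:
print(f"effective salary: ${stipend + tuition:,}")  # effective salary: $80,000
```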
When I was a physics grad student ~35 years ago, this was called "the birth control problem." I had every intention of going into industry. I described it to my dad, who got his PhD in the 1950s, and he said it was the same back then. But there's a perennial "this time it will be different."
It wasn't the same in the 1950s. It became really clear to me how dire the long-term job situation was when I was getting my PhD in the 1990s: I started combing through issues of Physics Today and noticed that the field, and academia as a whole, had been explosively expanding from 1920 to 1968 or so, with a sudden crisis in the late 1960s, an echo in the late 1970s, and another when I was in it in the late 1990s. (Physics Today said I had 2% odds of getting a permanent job, even coming from a top school.)
One day I posted a Java applet to the web that got 100,000 impressions; getting so much attention for that, and so little for papers that took me a year to write, made me resolve to tell my thesis advisor that I was going to quit. Before I could tell him, he told me he had just a year of funding for me, and I thought... I could tough it out for a year. People were shocked when I did a postdoc when most of my cohort were going straight to finance.
My mental health went downhill in Germany and I stomped away. In retrospect I was the only native English speaker at the institute, and I could have found a place for myself for some time had I taken on the task of proofreading papers; I can easily imagine I could have made it in academia. But heck, life on a horse farm doing many sorts of software development has been a blast.
One big disruption in the job market was that mandatory age-based retirement was outlawed. This created a span of several years when there were virtually no retirements.
I should have mentioned that my dad's degree was in chemistry, and it might have been a different vibe. But the production of PhDs at a rate faster than they could be absorbed by academic hiring was a thing. My dad (and mom, she got her master's in chemistry) went into industry too, so maybe I was lucky to have good role models.
"Ideally, I think there needs to be more non-faculty careers available for PhD graduates either outside or inside academia."
For a while, I loved that my field had lots of opportunities outside academia for PhD students, and that they were held in pretty equal regard, prestige-wise, with academic positions.
Then the current administration gutted the entire field.
Academia is a pyramid, like most organizations; eventually most PhDs cannot get a full-time position.
The fact that many PhDs leave is... normal. If you get a few high-impact publications you can find full-time positions outside the US, even as an associate professor and not just a researcher.
And the reason many go to universities around the world for PhDs is not necessarily that they want to stay in that place, but that you're more likely to get your PhD research funded and land a high-impact publication.
There's that and the fact that a lot of people who attain graduate degrees are immigrants who do so for the sake of immigration.
The whole system essentially self selects for cheap labor and exploitation.
If the feds put a high salary requirement on it like the E or O series visas, perhaps the system might change.
The scientific minds of India, China, and Russia don't come to the US and slave away in the lab purely out of passion for advancing science, they do so because it's a path towards the green card. The PIs and laboratory heads all know damn well how the system works, they are no better than those bosses of H1B sweatshops, except perhaps they do their exploitation from ivy filled ivory towers rather than in Patagonia vests.
> The PIs and laboratory heads all know damn well how the system works, they are no better than those bosses of H1B sweatshops, except perhaps they do their exploitation from ivy filled ivory towers rather than in Patagonia vests.
In my observation there are quite a few PIs and laboratory heads who are genuinely idealistic about research, but who have no option other than to play this rigged game of academia.
I was accepted into a PhD CS program despite applying for a masters. The advisor had something on his door about the limited number of slots open for people who graduate from grad school. He tried to discourage me from the program.
> where the number of students needed to carry out existing professors' research is much larger than the number of junior faculty positions generally available.
This is definitely true, there are more physics PhDs graduating from the top 2 schools than there are total faculty positions listed each year.
BUT you are missing that there is still demand for these people in industry as well as government labs. There's a decline in that right now, though, as we go through a period of encouraging more engineering and less research.
In reality there's a pipeline of research. If you haven't been introduced to it, I like to point to NASA's TRL (Technology Readiness Level) chart[0]. The pipeline runs from very basic research to proven systems. Traditionally, academia and government labs do the majority of the low-TRL work, while industry research handles the mid-level (stuff that isn't quite ready for production). The reason is the higher rate of failure in low-level research, so this arrangement shifts risk away from industry. Not to mention that industry has different incentives and is going to be more narrowly focused. Academia and gov labs can pursue longer-term projects that may produce large returns but take decades to realize them. I mean, how much do we get from the invention of calculus? Or the creation of the WWW? We'd also get far less growth and profit were these not so widely distributed.
So while yes, getting a professorship is a challenge and highly competitive, it is far from the only path for these graduates. We can also do a lot to increase (or decrease) their options by increasing (or decreasing) funding for science. There's a lot of science that happens outside academic labs, and it still depends on PhD graduates to do most of that work. If you want these people to have jobs, fund more low-level research[1]
> I've heard scientists complain that universities owe it to students to provide more help finding a job in industry after they graduate.
A big reason for this is that networking is still a big issue. I can tell you as someone who does not have a good relationship with my former advisor that this has made job hunting a much harder experience compared to my peers. While my credentials are better than some of theirs, they come in through a side door (often skipping things like LeetCode challenges) while I have to go through the standard applicant pool. I'm not saying they don't deserve those jobs (most of them do), just pointing out that networking is still a critical part of hiring. One simple part: when applying, you might not even know what a group is doing and whether that's what you want to do, since solicitations are often vague. Even if networking gave no advantage in the hiring process, it would still provide a huge advantage in the filtering process.
I mean, even putting the personal experience aside, don't we want to make the most of the resources we have? Don't we want to get graduates connected to the labs and workplaces where they will be most effective? This is a surprisingly complex problem to solve; even limited to PhDs (where there's far less noise than in general hiring), it is still complicated.
[1] But I'd also say that we might be encouraging too many people to do PhDs. Doing a PhD "for a job" is a bit odd; a master's is better suited for that. A PhD is directed more toward doing research work. That said, in the worst case a PhD says "this person can work on ill-defined tasks and has the diligence to see them through." Regardless of the industry, that is a pretty useful skill.
> That said, in the worst case a PhD says "this person can work on ill-defined tasks and has the diligence to see them through." Regardless of the industry, that is a pretty useful skill.
Very few companies and industries want employees who
- are very conscientious ("has the diligence to see [the tasks] through"), and
- are much more effective working on their own, i.e. are not "team players" because they don't really need a team ("this person can work on ill-defined tasks").
All good points, particularly the control-group piece. Scrutinizing control groups makes it easy to invalidate most studies about treatments in this space, because control groups are so difficult to assemble. You should see the variance across the autism spectrum.
It still worked for my son and my friend’s two children.
I have no affiliation with the program at all. I talk about it because it worked for us.
I latched onto it because I know the type of things that I have struggled with my entire life, but just learned a lot of coping mechanisms. I’m also very self aware. I pay a lot of attention to how my own brain works because of the need to develop those coping mechanisms. When I saw the full program, everything made perfect sense to me and I absolutely believe that it would have helped me when I was younger.
Had I been able to tolerate working half days for 7 weeks, I would have participated in the program myself.
That is the program, yes. I’m not trying to sell you on it, just sharing our experience.
I found out about it from one of my neighbors, whose two children with dysgraphia each did the full-time program for 3 years. He tells everybody about it.
I toured that location when my son was going into 3rd grade, and we ended up doing just the summer program after 7th grade. What I saw on the tour would have helped me when I was a kid, and my son's brain seems to work just like mine.
If you threatened me with 3+ hours a day of speed reading clocks instead of a normal summer I'd probably double down on effort too. And probably not in a way that's healthy long term.
Well, it wasn't a threat. He knew exactly what he'd been struggling with from 1st grade on (officially minor ADHD) and we were trying really hard to keep him off of medication. Since the program finished, he's asked to do it again several times (but we haven't because it's expensive). I've thought about teaching him programming by having him build his own clock trainer.
It's hard to explain to random people on the internet but here's the difference we saw.
- Went from doing homework every day after school until 10pm to always being done by 6pm at the latest.
- Went from frequently forgetting to turn in that same homework, and sometimes major assignments, to rarely forgetting. 7th grade year he had over 20 zeros for assignments that he did and simply kept forgetting to turn in. 8th grade year he forgot two homeworks all year.
- Went from years of extreme disorganization to...still disorganized but a significant improvement.
- Went from uncertainty about whether he was going to be able to keep up with the workload in high school to, for lack of a better way of saying it, a star student. Teacher reports changed. GPA is a 3.7 (he's in 11th grade now). Juggling seasonal sports, Scouts, school, clubs, social life, honors/AP classes with no assistance from us at all.
It's hard for people to understand when you watch the same patterns and struggles for 6 or 7 years and then they just stop being a struggle. That 7th grade year, all that my wife and I did after we got home from work was try to make sure he would get his work done. It consumed our life to the point that, after my trying to convince my wife that this could help (because she was very skeptical too), things were bad enough that she finally agreed it was worth a shot.
He and I were actually going to fly across the country to stay in Seattle for 7 weeks to have him do the program in person because I didn't think he would be able to pay attention to the virtual. The hotel that we had booked a couple of blocks from the school cancelled our reservation due to renovations and we ended up pivoting to the virtual program at the last minute. He did surprisingly well in the remote class format. The hotel was also close to Microsoft's campus and I got the impression that Microsoft had paid them to renovate to prepare for a lot of people they were going to have in town.
Well, that is interesting, and if you saw results then that's all that matters for your family, of course.
But sorry, to clarify: I'm still hung up on the "8 handed clock" thing - what does that mean? What information is displayed on the clocks other than hours, minutes, and seconds?
I didn’t sit in on it so I can’t say for sure. My son got up to the 2nd version of the 6-handed clock. You have to have perfect accuracy within a certain amount of time to advance to the next tier.
Even with the 6-handed clock, I don’t remember exactly what each hand was, though. I asked Grok and this is what it said.
> In the Arrowsmith Program’s Cognitive Intensive Program (CIP), the primary exercise is the Symbol Relations exercise, commonly known as “Clocks.” This involves reading analog clock faces that progress from 2 hands to up to 8 (or sometimes more) hands.
> Each hand on the clock represents a separate time (an independent position pointing to a specific hour/minute on the clock face). Participants must interpret the positions of all hands simultaneously, understand the relationships between them (e.g., angles, relative positions, and sequences), and record the times accurately under time pressure.
> The multiple hands do not represent different concepts symbolically (like hours, minutes, seconds); instead, they increase cognitive load to train the brain’s ability to process and relate multiple pieces of information at once. This strengthens the Symbol Relations cognitive function, which supports logical reasoning, comprehension, seeing connections between ideas, cause-and-effect understanding, and abstract thinking.
> Progression adds more hands as mastery is achieved, making the task more complex to build capacity in handling interrelated symbols and concepts. The CIP focuses intensively on this exercise to accelerate improvements in reasoning, processing speed, and related skills.
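Since building a toy clock trainer came up upthread, here's a minimal sketch of such a multi-hand drill. It's purely hypothetical (the real program presumably renders actual clock faces; this prints hand angles instead):

```python
import random
import time

# Toy multi-hand clock drill, loosely inspired by the exercise described
# above. Hypothetical sketch only, not the actual Arrowsmith software.

def run_drill(num_hands: int, time_limit_s: float = 30.0) -> bool:
    # Each hand points at a random hour; we show only its angle in degrees
    # (12 o'clock = 0, clockwise) and the player must name the hours.
    hours = [random.randint(1, 12) for _ in range(num_hands)]
    angles = [(h % 12) * 30 for h in hours]
    print(f"hand angles: {angles}")
    start = time.time()
    reply = input("hours, in order, separated by spaces: ")
    elapsed = time.time() - start
    try:
        guess = [int(x) for x in reply.split()]
    except ValueError:
        guess = []
    ok = guess == hours and elapsed <= time_limit_s
    print("pass" if ok else "fail", f"({elapsed:.1f}s)")
    return ok

# Progression: perfect accuracy under the time limit unlocks another hand,
# mirroring the tier system described above.
num_hands = 2
while num_hands <= 8:
    if run_drill(num_hands):
        num_hands += 1
```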
Not to mention the partner whom he made move to another country and then still wouldn't tell anyone about. The more I think about this post, the more insanely controlling the guy seems!
In support of your point, after Hashicorp relicensed Terraform (thereby killing, or at least squeezing, a lot of Terraform consultancies who had collectively contributed a massive amount to Terraform and prompting the creation of OpenTofu), Oxide and Friends spent several episodes discussing what was reasonable for open source maintainers to do.
They had a great discussion with Kelsey Hightower about it[^1], and his answer (which I liked a lot) was basically just that maintainers’ only real obligation was to be transparent about governance. If you want to be a dictator and ignore bugs and contributions that aren’t personally compelling to you, that’s fine—but please just put that in the repo. That way, people who are trying to build a business on open source work and want customers, or to build trust with users for any other reason, can distinguish themselves from maintainers that don’t care (as is their right). Otherwise the reputation of Open Source as a whole suffers.
Genie delivers on-the-fly generated video that responds to user inputs in real time.
Marble generates a static Gaussian splat asset (like a 3D game-engine asset) that you then render in a game engine.
Marble seems useful for lots of use cases - 3D design, online games, etc. You pay the GPU cost to render once, then you can reuse it.
Genie seems revolutionary but expensive af to render and deliver to end users. You never stop paying boatloads in H100 costs (probably several H100s or TPU equivalents per user session) for every second of play; rough numbers are sketched below.
You could make a VRChat type game with Marble.
You could make a VRChat game with Genie, but only the billionaires could afford to play it.
To be clear, Genie does some remarkably cool things. You can prompt it, "T-Rex tap dancing by" and it'll appear animated in the world. I don't think any other system can do this. But the cost is enormous and it's why we don't have a playable demo.
When the cost of GPU compute comes down, I'm sure we'll all be streaming a Google Stadia-like experience of "games" rendered on the fly. Multiplayer, with Hollywood-grade visuals. Like playing real-time Lord of the Rings or something wild.
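For a sense of scale, here's a toy cost model; every number below is a made-up assumption, not a measured figure:

```python
# Toy streaming-cost model for generated-on-the-fly video (illustrative only).
h100_usd_per_hour = 3.00   # assumed cloud rental price for one H100
gpus_per_session = 4       # assumed GPUs pinned to one user session
session_minutes = 30

cost = h100_usd_per_hour * gpus_per_session * (session_minutes / 60)
print(f"~${cost:.2f} per {session_minutes}-minute session")  # ~$6.00
```

A splat asset, by contrast, is a one-time generation cost followed by cheap local rendering.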
Interestingly, there is a model like Google Genie that is open source and available to run on your local Nvidia desktop GPU. It's called DiamondWM [1], a world model trained on FPS gameplay footage. It generates a 160x160 image stream at 10 fps that you can play through. Maybe we'll develop better models and faster techniques, and the dream of local world models can one day be realized.
Graphics long ago hit diminishing returns for gameplay; the people who aren't playing VRChat today aren't going to start tomorrow, for the same reasons.
AI can speed up asset development, but that is not a major bottleneck for video games. What matters is the creative game design and the backend systems, which, resting as they do on the interaction between players and systems, are just about as hard as any management role, if not harder.
From what I can tell, you can actually export a mesh in (paid) Marble, whereas I haven't seen mesh exports offered in Genie 3 yet (could be wrong though).
I love the author's argument, but this conclusion feels obvious to me—rationalizing emotional decisions is like the oldest human activity there is. Try asking somebody why they're whatever-religion or whatever-political-party that their parents, friends, or partner is/are. I further claim that much of the purpose of managers is to do this for individual contributors: give people a story that scaffolds a decision you hope they'll make: staying at the company and going along with whatever decision has just been announced.
I also don't think it's necessarily bad that people do this. The input to any decision a person makes includes their entire life experience up to that point[1]. How could an executive encode all that in some kind of pat logical explanation, and how could the also-human engineers at the company possibly digest such an explanation, and what could make it more compelling to them than their own life experiences? People need to get through life, though, so they need to make decisions. They can't fully rationalize every single one, but they want to feel at least OK about the decisions they're making, so they tell themselves and each other these incomplete little stories and get on with it. That managers scaffold this process with their own stories is a little manipulative, but how else could people cooperate enough to have companies? The whole process just seems intrinsically human to me.
The most important part of being an executive is understanding all of this and choosing to hire people who will ultimately make good strategic decisions for you. Don't hire a well-known Perl contributor as your CTO unless you like the idea of rewriting your product in Perl. If your company is dying because this has happened, my condolences but at least you're not alone.
Edit: I hadn't read this far when I wrote the comment, but the author also literally says, "The moment you hire a Rust developer to evaluate languages, you’ve already chosen Rust." I guess I just disagree that it could work differently. Each of us possesses bounded knowledge and bounded rationality, and "which language is best" is probably too complicated for an individual to figure out, especially when you don't even know what the roadmap will be in a year: you'd have to build the company several times in several languages and compare the results (and the best engineers I've met do write code multiple times, but rarely more than twice IME). Each of us can only really know how we would solve the company's problems. Executives' job is to try and guess, and make decisions that are ideally optimal but at least internally consistent.
[1] My favorite example of this, actually: even in the highly-rational field of scientific research, scientists have to decide whether a given body of evidence is dispositive of a particular theory, and the standards they apply likewise depend on who they are and what their life experience is. So, as Max Planck put it, science advances one funeral at a time.
Given all the money spent on trying different educational models to achieve better outcomes, it's really gratifying to see a result suggesting that improvement is actually possible. I have a lot of teachers in my family and they tend to take the perspective that education is an engineering problem rather than a research problem. That is, any apparent progress is due to extra funding or filtering students or the like.
> "Those costs do not include anticipated savings from improved teacher morale and retention, a dynamic demonstrated in other data."
That seems like some kind of supportive evidence as well. Teachers should logically be happier when working inside a system optimized for teaching efficacy!
Personally: we put our child in a Montessori preschool because we liked its emphasis on self-directed learning (I kind of think all learning has to be self-directed on some level. Even a lecture requires you to listen to and think about the lecture, instead of something else). We later moved him to a Reggio Emilia program for non-pedagogical reasons (there were problems with the building that the Montessori school was in). They're definitely different—in Montessori, he mostly played on his own, and in Reggio we now see him in pairs and groups all the time. I have no idea which is better, but his teachers at the Reggio school seem to like it.
Seems like the more boring but more real story here is that this mom is really struggling to hold her career together and give her kids the care she clearly wishes she could because her husband is being lazy. To the haters in the thread: I think this article can be read as "avoiding UPFs is completely unrealistic for authors trying to establish themselves while functionally raising a baby and a toddler by themselves". Which, even as a perpetual proponent of the anti-UPF book "Ultra-Processed People," I kind of understand.
I get a lot of pride and satisfaction from being an involved dad. I do almost all of the cleaning, a fair amount of cooking, and probably 2/3 of the missing-work-because-no-childcare (and I try to put in a good amount of solo weekend time, to let my spouse catch up on work). A valuable life lesson I learned in Boy Scouts: if you're not doing about twice as much work as you think is fair, you're probably not doing enough.
(I have my own theory, which is that a large brain increases the risk of ADHD rather than autism—a larger flow of thoughts and ideas requires more executive function to manage, and therefore more executive function is required to achieve the same attention span—but that ADHD is a kind of multiplier for autism, because social situations are more challenging to navigate if you can’t reliably stay focused on the social interaction you’re having.)