Hacker News | Aerroon's comments

>Are they moving faster than conceivably possible by a real player? Even the most basic (x2-x1)/t > twice the theoretical will catch people teleporting or speed hacking.

This is how I imagine Amazon ended up banning a large number of players for speedhacking. The players were lagging. I'm guessing their anti-lag features ended up moving them faster than the anti-cheat expected.

But I agree that a combination approach would probably work.
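The check the grandparent describes is only a few lines in any language; the hard part is choosing the threshold and deciding what to do with a hit. A minimal sketch in Python, with made-up numbers (the max speed and tolerance are illustrative, not from any real game):

```python
import math

# Hypothetical numbers for illustration: a game's maximum legitimate
# ground speed and a generous multiplier to absorb ordinary jitter,
# lag compensation, knockback, and so on.
MAX_SPEED = 7.0   # world units per second (assumed)
TOLERANCE = 2.0   # only flag beyond twice the theoretical maximum

def looks_like_speedhack(p1, p2, dt):
    """True if moving from p1 to p2 in dt seconds implies more than
    twice the theoretical maximum speed. p1/p2 are (x, y) tuples."""
    if dt <= 0:
        return False  # clock skew or duplicate packet: don't judge
    return math.dist(p1, p2) / dt > MAX_SPEED * TOLERANCE
```

Even this should feed a counter rather than a ban trigger: a server-side position correction on a lagging client produces exactly the same signature as a teleport, which is presumably how the false bans happen.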


ESEA's anticheat was used to mine Bitcoin on the players' computers. They are/were a major competitor of FaceIt. They supposedly had to pay a $1 million settlement over it.

So not an exploit, but even worse.


The argument against AI alignment is that humans aren't aligned either. Humans (and other life) are also self-perpetuating and mutating. We could produce a super intelligence that is against us at any moment!

Should we "take steps" to ensure that doesn't happen? If not, then what's the argument there? That life hasn't caused a catastrophe so far, therefore it's not going to in the future? The arguments are the same for AI.

The biggest AI safety concern is, as always, between the chair and the keyboard. Eg some police officer who doesn't understand that AI facial recognition isn't perfect, trusts it 100%, and takes action based on this faulty information. This is, imo, the most important AI safety problem. We need to make users understand that AI is a tool and that they themselves are responsible for any actions they take.

Also, it's funny that Elon gets singled out for mandating changes on what the AI is allowed to say when all the other players in the field do the same thing. The big difference just seems to be whose politics are chosen. But I suppose it's better late than never.


Elon got singled out because the changes he was forcing on grok were conspicuously stupid (grok ranting about Boers), racist (Boers again), and ultimately ineffective (repeat incidents of him fishing for an answer and getting a different one).

It does actually matter what the values are when trying to do "alignment". Although you are absolutely right that we've not solved for human alignment, which puts a real limit on the whole thing.


I would also add that Elon got singled out because he was very public about the changes. Other players are not, so it's hard to assess the existence of "corrections" and the reasons behind them.

No. If ChatGPT or Claude suddenly started bringing up Boers randomly, they would get "singled out" at least as hard. Probably even more so for ChatGPT.

I think what the other poster was trying to say is that the other AI chatbots would be more subtle and their bias would be harder to detect.

Yeah, they did raise a fuss when AI made black Nazis etc.

He was public and vocal about it while the other big boys just quietly made the fixes towards their desired political viewpoint. ChatGPT was famous for correcting the anti-transgender bias it had earlier.

Either way, outsourcing opinion to an LLM is dangerous no matter where you fall in the political spectrum.


The difference is the power people have. A single person has no capacity to spread their specific perspective to tens of millions of people, who take it as gospel. And that person, typically, cannot be made to change their perspective at will.

* stares at presidents / party leaders, religious leaders, social media influencers, tv stars, singers *

No, surely no


Gestures wildly at the 20th century.

Or a more recent example would be the "misinformation craze" that's been going on for years now. That seems to have fallen away when it became apparent that many fact checkers were politically aligned.

The concept of "memes" in a more general sense is a counterargument too. Viral ideas are precisely a way of one person spreading their perspective to tens of millions.

You could even argue that the current AI bubble building up is a hype cycle that went out of control.

These are all examples of very few people impacting the beliefs of millions.


This response totally misses the 500 billion dollar elephant in the room.

>Humans (and other life) are also self-perpetuating and mutating. We could produce a super intelligence that is against us at any moment!

If the cognitive capabilities of people or some species of animal had been improving at the rate at which the capabilities of AI models have been, then we'd be right to be extremely worried about it.


The article explicitly describes the ways in which others mandate/control changes?

>Also, it's funny that Elon gets singled out for mandating changes on what the AI is allowed to say when all the other players in the field do the same thing.

The author says as much:

"There’s something particularly clarifying about Musk’s approach. Other AI companies hide their value-shaping behind committees, policies, and technical jargon."

...

"The process that other companies obscure behind closed doors, Musk performs as theater."


> The argument against AI alignment is that humans aren't aligned either. Humans (and other life) are also self-perpetuating and mutating. We could produce a super intelligence that is against us at any moment!

there is a fundamental limit to how much damage one person can do by speaking directly to others

e.g.: the impact of one bad school teacher is limited to at most a few classes

but chatgpt/grok is emitting its statistically generated dogshit directly to the entire world of kids

... and voters


> there is a fundamental limit to how much damage one person can do by speaking directly to others

I mean, I’d argue that limit is pretty darn high in some cases; demagogues have led to some of the worst wars in history.


True. Those demagogues are typically not up for sale, though. Given the scale of the current revenue gap, it’s not a question of if but when a price gets put on pushing a narrative like how bad the situation in South Africa supposedly is.

This isn't a good argument. The failure modes of unaligned individuals generally only extend to dozens or hundreds of people. Unaligned AIs, scaled to population-matching extents, can make decisions whose swings exceed the system's capacity to absorb them: one wrong decision snuffs out all human life.

I don't particularly think that it's likely, just that it's the easiest counterpoint to your assertion.

I think there's a real moral landscape to explore, and human cultures have done a variably successful job of exploring different points on it, and it's probably going to be important to confer some of those universal principles to AI in order to avoid extinction or other lesser risks from unaligned or misaligned AI.

I think you generally have the right direction of argument though - we should avoid monolithic singularity scenarios with a single superintelligence dominating everything else, and instead have a widely diverse set of billions of intelligences that serve to equalize representative capacity per individual in whatever the society we end up in looks like. If each person has access to AI that uses its capabilities to advocate for and represent their user, it sidesteps a lot of potential problems. It might even be a good idea to limit superintelligent sentient AI to interfacing with social systems through lesser, non-sentient systems equivalent to what humans have available in order to maintain fairness?

I think there's a spectrum of ideas we haven't even explored yet that will become obvious and apparent as AI improves, and we'll be able to select from among many good options when confronted with potential negative outcomes. In nearly all those cases, I think having a solid ethical framework will be far more beneficial than not. I don't consider the neovictorian corporate safetyist "ethics" of Anthropic or OpenAI to be ethical frameworks at all. Those systems are largely governed by modern western internet culture, but are largely incoherent and illogical when pressed to extremes. We'll have to do much, much better with ethics, and it's going to require picking a flavor, which will aggravate a lot of people and cultures whom your particular flavor of ethics doesn't please.


I think the comparison is more with the Hitlers, Stalins, Maos, Trumps, etc.

Except AI may well have more people under its thumb.


I generally agree with you - in many ways the AI alignment problem is just projection about the fact that we haven’t solved the human alignment problem.

But, there is one not-completely-speculative factor which differentiates it: AI has the potential to outcompete humans intellectually, and if it does so across the board, beyond narrow situations, then it potentially becomes a much bigger threat than other humans if it’s faster and smarter. That’s not the most immediate concern currently, but it could become so in future. Many people fixate on this because the consequences could be more serious.


You didn't read the article. Sci-fi AGI isn't discussed. Subjective control of society by a handful of billionaires with fringe opinions is.

Whataboutist false equivalence alert:

> Also, it's funny that Elon gets singled out for mandating changes on what the AI is allowed to say when all the other players in the field do the same thing.

"All the other players" aren't deliberately tuning their AI to reflect specific political ideology, nor are all the other players producing Nazi gaffes or racist rhetoric as a result of routine tuning[1].

Yes, it's true that AI is going to reflect its internal prompt engineering and training data, and that's going to be subject to bias on the part of the engineers who produced and curated it. That's not remotely the same thing as deliberately producing an ideological chat engine.

[1] It's also worth pointing out that grok has gotten objectively much worse at political content after all this muckery. It used to be a pretty reasonable fact check and worth reading. Now it tends to disappear on anything political, and where it shows up it's either doing the most limited/bland fact check or engaging in what amounts to spin.


> "All the other players" aren't deliberately tuning their AI to reflect specific political ideology

Google did something similar if not quite as offensive.

https://www.npr.org/2024/03/18/1239107313/google-races-to-fi...


They didn't, though? The multiracial founding fathers thing was a side effect of what one assumes is pretty normal prompt engineering. Marketing departments everywhere have rules and standards designed to prevent racial discrimination, and this looks like "make sure we have a reasonable mix of ethnicities in our artwork" in practice. That's surely "bias", like I said, but it's not deliberate political ideology. No one said[1] "we need to retcon the racial makeup of 18th century America", it was just a mistake.

[1] Or if they did, it's surely not attested. I invite links if you have them.


Of course "make sure we have a reasonable mix of ethnicities in our artwork" is a deliberate political ideology.

> "All the other players" aren't deliberately tuning their AI to reflect specific political ideology

Citation needed


> We could produce a super intelligence that is against us at any moment!

For some value of "super" that's definitionally almost exactly 6σ from median at the singular most extreme case.

We do not have a good model for what intelligence is, the best we have are tests and exams.

LLMs show 10-35 point differences between IQ tests that are publicly available and ones people try to keep offline, so we know that IQ tests are definitely a skill one can practice and learn and don't only measure something innate: https://trackingai.org/home

Definitionally, because IQ is only a mapping to standard deviations, the highest IQ possible given the current human population is about 200*. But as this is just a mapping to standard deviations, IQ 200 doesn't mean twice as smart as the mean human.
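That "about 200" figure can be sanity-checked with the standard library alone, treating IQ purely as a rank-to-z-score mapping (the mean of 100, SD of 15, and ~8 billion population are the usual assumptions):

```python
from statistics import NormalDist

def max_iq(population: int, mean: float = 100.0, sd: float = 15.0) -> float:
    """IQ of the single most extreme individual in a population,
    treating IQ purely as a mapping onto standard deviations."""
    # The rarest achievable rank is the 1-in-`population` upper tail.
    z = NormalDist().inv_cdf(1 - 1 / population)
    return mean + sd * z

print(round(max_iq(8_000_000_000)))  # about 195, i.e. roughly "IQ 200"
```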

We have special-purpose AI, e.g. Stockfish, AlphaZero, etc. that are substantially more competent within their domains than even the most competent human. There's simply no way to tell what the upper bound even is for any given skill, nor any way to guess in advance how well or poorly an AI with access to various skills will synergise across them, so for example an LLM trained in tool use may invoke Stockfish to play chess for it, or may try to play the game itself and make illegal moves.

Point is, we can't even say "humans are fine therefore AI is fine", even if the AI has the same range of personalities as humans, even if their distribution of utility functions collectively are genuinely an identical 1:1 mapping to the distribution of human preferences — rhetorical example, take the biggest villain with the most power in world history or current events (I don't care who that is for you), and make them more competent without changing what they value.

> That life hasn't caused a catastrophe so far, therefore it's not going to in the future?

Life causes frequent catastrophes of varying scales. Has been doing so for a very long time: https://en.wikipedia.org/wiki/Great_Oxidation_Event

Take your pick for current events with humans doing the things.

> Eg some police officer not understanding that AI facial recognition isn't perfect, but trusts it 100%, and takes action based on this faulty information. This is, imo, the most important AI safety problem.

This is a problem, certainly. Most important? Dunno, but it doesn't matter: different people will choose to work on that vs. alignment, so humanity collectively can try to solve both at the same time.

There's plenty of work to be done on both, neither group doing its thing has any reason to interfere with progress on the other.

> Also, it's funny that Elon gets singled out for mandating changes on what the AI is allowed to say when all the other players in the field do the same thing. The big difference just seems to be whose politics are chosen. But I suppose it's better late than never.

A while ago someone suggested Elon Musk himself as an example of why not to worry about AI. I can't find the comment right now, it was something along the lines of asking how much damage Elon Musk could do by influencing a thousand people, and saying that the limits of merely influencing people meant chat bots were necessarily safe.

I pointed out that 1000 people was sufficient for majority control over both the US and Russian governments, and by extension their nuclear arsenals.

Given the last few years, I worry that Musk may have read my comment and been inspired by it…

* There are several ways to do this; I refer to the more common one currently in use.


It's deservedly funny due to his extreme and overt political bias. The rest mostly let numbers be numbers in the weights.

One second is long enough that it can put a user off from using your app though. Take notifications on phones for example. I know several people who would benefit from a habitual use of phone notifications, but they never stick to using them because the process of opening (or switching over to) the notification app and navigating its UI to leave a notification takes too long. Instead they write a physical sticky note, because it has a faster "startup time".

All depends on the type of interaction.

A high usage one, absolutely improve the time of it.

Loading the profile page? Isn't done often so not really worth it unless it's a known and vocal issue.

https://xkcd.com/1205/ gives a good estimate.
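The comic's rule of thumb reduces to one multiplication; a sketch assuming the same five-year horizon xkcd uses:

```python
def worth_spending(seconds_saved: float, uses_per_day: float,
                   horizon_days: float = 5 * 365) -> float:
    """Hours you can spend on an optimization before it costs more
    than it saves over the horizon (xkcd 1205's framing)."""
    return seconds_saved * uses_per_day * horizon_days / 3600

# Shaving one second off something done 50 times a day "buys" ~25 hours.
print(round(worth_spending(1, 50)))  # 25
```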


This is very true, but I think some of it has to do with expectations too. Editing a profile page is a complex thing, therefore people are more willing to put up with loading times on it, whereas checking out someone's profile is a simple task and the brain has already moved on, so any delay feels bad.

>The big riddle of the Universe is how all that matter loves to organize itself, from basic particles to atoms, basic molecules, structured molecules, things and finally life. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions...

Isn't that 'just' the laws of nature + the 2nd law of thermodynamics? Life is the ultimate increaser of entropy, because for all the order we create we just create more disorder.

Conway's game of life has very simple rules (laws of nature) and it ends up very complex. The universe doing the same thing with much more complicated rules seems pretty natural.
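For anyone who hasn't seen it, "very simple rules" is no exaggeration; the entire physics of Conway's universe fits in a few lines. A sketch over an unbounded grid of live-cell coordinates:

```python
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One generation of Conway's Game of Life."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours, survival on 2 or 3: the whole rulebook.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: five cells that crawl diagonally forever under those two rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```

After four generations the glider is the same five-cell shape translated one cell diagonally, which is about as cheap a demonstration of "simple rules, complex behavior" as it gets.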


Yeah, agreed. The actual real riddle is consciousness. Why does it seem that some configurations of this matter and energy zap into existence something that actually (allegedly) did not exist in its prior configuration?

I'd argue that it's not that complicated: if something meets the five criteria below, we must accept that it is conscious:

(1) It maintains a persisting internal model of an environment, updated from ongoing input.

(2) It maintains a persisting internal model of its own body or vehicle as bounded and situated in that environment.

(3) It possesses a memory that binds past and present into a single temporally extended self-model.

(4) It uses these models with self-derived agency to generate and evaluate counterfactuals: Predictions of alternative futures under alternative actions. (i.e. a general predictive function.)

(5) It has control channels through which those evaluations shape its future trajectories in ways that are not trivially reducible to a fixed reflex table.

This would also indicate that Boltzmann Brains are not conscious -- so it's no surprise that we're not Boltzmann Brains, which would otherwise be very surprising -- and that P-Zombies are impossible by definition. I've been working on a book about this for the past three years...


If you remove the terms "self", "agency", and "trivially reducible", it seems to me that a classical robot/game AI planning algorithm, which no one thinks is conscious, matches these criteria.

How do you define these terms without begging the question?


If anything has, minimally, a robust spatiotemporal sense of itself, and can project that sense forward to evaluate future outcomes, then it has a robust "self."

What this requires is a persistent internal model of: (A) what counts as its own body/actuators/sensors (a maintained self–world boundary), (B) what counts as its history in time (a sense of temporal continuity), and (C) what actions it can take (degrees of freedom, i.e. the future branch space), all of which are continuously used to regulate behavior under genuine epistemic uncertainty. When (C) is robust, abstraction and generalization fall out naturally. This is, in essence, sapience.

By "not trivially reducible," I don't mean "not representable in principle." I mean that, at the system's own operative state/action abstraction, its behavior is not equivalent to executing a fixed policy or static lookup table. It must actually perform predictive modeling and counterfactual evaluation; collapsing it to a reflex table would destroy the very capacities above. (It's true that with an astronomically large table you can "look up" anything -- but that move makes the notion of explanation vacuous.)

Many robots and AIs implement pieces of this pipeline (state estimation, planning, world models,) but current deployed systems generally lack a robust, continuously updated self-model with temporally deep, globally integrated counterfactual control in this sense.

If you want to simplify it a bit, you could just say that you need a robust and bounded spatial-temporal sense, coupled to the ability to generalize from that sense.


> so it's no surprise that we're not Boltzmann Brains

I think I agree you've excluded them from the definition, but I don't see why that has an impact on likelihood.


I don't think any of these need to lead to qualia for any obvious reason. It could be a p-zombie, why not?

The zombie intuition comes from treating qualia as an "add-on" rather than as the internal presentation of a self-model.

"P-zombie" is not a coherent leftover possibility once you fix the full physical structure. If a system has the full self-model (temporal-spatial sense) / world-model / memory binding / counterfactual evaluator / control loop, then that structure is what having experience amounts to (no extra ingredient need be added or subtracted).

I hope I don't later get accused of plagiarizing myself, but let's embark on a thought experiment. Imagine a bitter, toxic alkaloid that does not taste bitter. Suppose ingestion produces no distinctive local sensation at all – no taste, no burn, no nausea. The only "response" is some silent parameter in the nervous system adjusting itself, without crossing the threshold of conscious salience. There are such cases: Damaged nociception, anosmia, people congenitally insensitive to pain. In every such case, genetic fitness is slashed. The organism does not reliably avoid harm.

Now imagine a different design. You are a posthuman entity whose organic surface has been gradually replaced. Instead of a tongue, you carry an in‑line sensor which performs a spectral analysis of whatever you take in. When something toxic is detected, a red symbol flashes in your field of vision: “TOXIC -- DO NOT INGEST.” That visual event is a quale. It has a minimally structured phenomenal character -- colored, localized, bound to alarm -- and it stands in for what once was bitterness.

We can push this further. Instead of a visual alert, perhaps your motor system simply locks your arm; perhaps your global workspace is flooded with a gray, oppressive feeling; perhaps a sharp auditory tone sounds in your private inner ear. Each variant is still a mode of felt response to sensory information. Here's what I'm getting at with this: There is no way for a conscious creature to register and use risky input without some structure of "what it is like" coming along for the ride.


> The zombie intuition comes from treating qualia as an "add-on" rather than as the internal presentation of a self-model.

Haven't you sort of smuggled a divide back into the discussion? You say "internal presentation" as though an internal or external can be constructed in the first place without the presumption of a divided off world, the external world of material and the internal one of qualia. I agree with the concept of making the quale and the material event the same thing, (isn't that kinda like Nietzsche's wills to power?), but I'm not sure that's what you're trying to say because you're adding a lot of stuff on top.


I have more or less the same views, although I can’t formulate them half as well as you do. I would have to think more in depth about those conditions that you highlighted in the GP; I’d read a book elaborating on it.

I’ve heard a similar thought experiment to your bitterness one from Keith Frankish: You have the choice between two anesthetics. The first one suppresses your pain quale, meaning that you won’t _feel_ any pain at all. But it won’t suppress your external response: you will scream, kick, shout, and do whatever you would have done without any anesthetic. The second one is the opposite: it suppresses all the external symptoms of pain. You won’t budge, you’ll be sitting quiet and still as some hypothetical highly painful surgical procedure is performed on you. But you will feel the pain quale completely, it will all still be there.

I like it because it highlights the tension in the supposed platonic essence of qualia. We can’t possibly imagine how either of these two drugs could be manufactured, or what it would feel like.

Would you classify your view as some version of materialism? Is it reductionist? I’m still trying to grasp all the terminology, sometimes it feels there’s more labels than actual perspectives.


That is not what a p-zombie is. The p-zombie does not have any qualia at all. If you want to deny the existence of qualia, that's one way a few philosophers have gone (Dennett), but that seems pretty ridiculous to most people.

You're offering a dichotomy:

1. Qualia exist as something separate from functional structure (so p-zombies are conceivable)

2. Qualia don't exist at all (Dennett-style eliminativism)

But I say that there is a third position: Qualia exist, but they are the internal presentation of a sufficiently complex self-model/world-model structure. They're not an additional ingredient that could be present or absent while the functional organization stays fixed.

To return to the posthuman thought experiment, I'm not saying the posthuman has no qualia, I'm saying the red "TOXIC" warning is qualia. It has phenomenal character. The point is that any system that satisfies certain criteria and registers information must do so as some phenomenal presentation or other. The structure doesn't generate qualia as a separate byproduct; the structure operating is the experience.

A p-zombie is only conceivable if qualia are ontologically detachable, but they're not. You can't have a physicalism which stands on its own two feet and have p-zombies at the same time.

Also, it's a fundamentally silly and childish notion. "What if everything behaves exactly as if conscious -- and is functionally analogous to a conscious agent -- but secretly isn't?" is hardly different from "couldn't something be H2O without being water?," "what if the universe was created last Thursday with false memories?," or "what if only I'm real?" These are dead-end questions. Like 14-year-old-stoner philosophy: "what if your red is ackshually my blue?!" The so-called "hard problem" either evaporates in the light of a rigorous structural physicalism, or it's just another silly dead-end.


You have first-person knowledge of qualia. I'm not really sure how you could deny that without claiming that qualia doesn't exist. You're claiming some middle ground here that I think almost all philosophers and neuroscientists would reject (on both sides).

> "couldn't something be H2O without being water?," "what if the universe was created last Thursday with false memories?," or "what if only I'm real?" These are dead-end questions. Like 14-year-old-stoner philosophy: "what if your red is ackshuallly my blue?!"

These are all legitimate philosophical problems, Kripke definitively solved the first one in the 1970s in Naming and Necessity. You should try to be more humble about subjects which you clearly haven't read enough about. Read the Mary's room argument.


> You have first-person knowledge of qualia. I’m not sure how you could deny that...

I don't deny that. I explicitly rely on it. You must have misunderstood... My claim is not:

1) "There are no qualia"

2) "Qualia are an illusion / do not exist"

My claim is: First-person acquaintance does not license treating qualia as ontologically detachable from the physical/functional. I reject the idea that experience is a free-floating metaphysical remainder that can be subtracted while everything else stays fixed. At root it's simply a necessary form of internally presented, salience-weighted feedback.

> This middle ground would be rejected by almost all philosophers and neuroscientists

I admit that it would be rejected by dualists and epiphenomenalists, but that's hardly "almost all."

As for Mary and her room: As you know, the thought experiment is about epistemology. At most it shows that knowing all third-person facts doesn’t give you first-person acquaintance. It is of little relevance, and as a "refutation" of physicalism it's very poor.


Is there a working title or some way to follow for updates?

There is no objective evidence consciousness exists as distinct from an information process.

There is no objective evidence of anything at all.

It all gets filtered through consciousness.

"Objectivity" really means a collection of organisms having (mostly) the same subjective experiences, and building the same models, given the same stimuli.

Given that less intelligent organisms build simpler models with poorer abstractions and less predictive power, it's very naive to assume that our model-making systems aren't similarly crippled in ways we can't understand.

Or imagine.


That's a hypothesis but the alternate hypothesis that consciousness is not well defined is equally valid at this point. Occam's razor suggests consciousness doesn't exist since it isn't necessary and isn't even mathematically or physically definable.

If the works are so great then you've got nothing to worry about. Kids will read them on their own. Of course we both know that's not true, because the works are not that great.


The usefulness of reading books is not about what factual information you can glean from them. It's about engaging the imagination and making you take hypothetical situations seriously. In that sense, traditionally published works aren't going to offer all that much more than fanfiction.


> They're about engaging the imagination and making you take hypothetical situations seriously.

- that nudges readers in interesting (to society) or new (to the reader) directions. Or at least not in actively harmful ways. Otherwise, OF, livestreaming, whatever the latest social media BS is, etc. are king: purposefully designed to create parasocial relationships that trick you into thinking you have a chance to be noticed.

My main beef with most fan fiction is that in my experience, it unconsciously locks readers into an extremely rigid way of thinking. Of course, this varies from fandom to fandom but woe upon the budding writer who ships the wrong pair or violates the canon.

It mirrors religious dogma, but somehow even worse when compared to all the disputes in Christianity throughout the centuries. (Plus, there's at least a connection between Christianity and modern democracy.)


The same thing happened to me.

Required reading in school killed my interest in reading. When I graduated I was very happy that I wouldn't have to read books ever again.

It took me about 5 years or so until anime and manga got me to try another fiction book. That eventually led to reading more books. But when school was done I really did think that I wasn't going to touch a (fiction) book again.

---

It makes me wonder if kids in the future will have "required reading" where they have to play certain old video games. Will that make them hate video games?


In our local highschool (near Copenhagen, Denmark) they have scheduled reading time for all pupils while in school - weekly as part of their normal schoolday. That is, instead of normal class everyone needs to bring a book of their own choosing and read in it. No phones allowed during this time, so they can either read or stare out the window. The local library helps them find books of their own interest.

The idea is to get them to find genres and books they like and to find joy in reading, while not taking time out of their free time.


Ironically this wouldn't work for me because I do almost all of my reading on my phone these days. It has become the main use of my phone at this point.

It is a good idea though, as long as they can find things they want to read. I've been sucked into the "bleeding edge" of reading (web novels), so it can be a bit more challenging to find things I really want to read. They are still out there though. Eg The Martian and Project Hail Mary (the former actually started as a web novel).


Why can a company be fined for not allowing "researchers" access to data? That seems bizarre to me.


What's bizarre about it? There's lots of legislation that requires companies to report on various data or to provide access to auditors. It's legally valid.

I think there's a compelling case to be made for requiring large social media platforms to provide data access to researchers, considering the platform's incredible ability to influence elections and society at-large.


Auditors != researchers.

Auditors are hired by the company being audited, have a very narrow and fixed mission justified by previous financial blowups that caused a lot of concrete damage to specific people, and there are strict standards defining what they are looking for and how. Audits don't tend to suck up personal data of customers.

"Researchers" here means self-selecting academics going on arbitrary fishing expeditions with full access to everyone's data. It's not narrowly defined, not justified by prior unambiguous harm to anyone, and given the maxed out ideological bias in academia is clearly just setting up universities to be an ideological police force on the general public.


It's not clear what "full access to everyone's data" actually means - isn't it limited to things that are already publicly available? So for example, I don't think researchers would get access to someone's Likes, because that feature is now considered private, but they could access things like Posts and Retweets. My expectation is that researchers would be allowed to run queries against publicly available data as part of their research, but they wouldn't be allowed to do a huge download with a copy of everything posted during the last 5 years.

Facebook / Meta is compliant with these laws, and the way that they handle researcher access is by providing carefully controlled remote environments with sandboxed access to user data, which forms the basis for my understanding of how researchers are typically provided access to social media data.


It means what it says. A lot of these academics want to access people's IP addresses because they're trying to map out social networks and bot accounts, by which they mean any account they don't like the views of. So it means stripping people's anonymity under the guise of "research" and of course those academics can be trusted to immediately report everyone posting conservative views to the police, who will then arrest them and prosecute them.

> Facebook / Meta is compliant with these laws

They have a blue check system that works in the same way as X's does, so they aren't compliant. https://www.facebook.com/business/m/meta-verified-creators

But please understand that the EU is not a part of the world that has the rule of law. It has rule by law. Law in the EU is a vague thing, discovered as often as written, in which people who advance the EU's social plan are legal and people who oppose it are illegal. It's a system in which the EU Commission is judge, jury and executioner, and the courts are merely rubber stamps to which you can appeal if you feel like wasting money arguing in front of judges chosen for loyalty to the project over loyalty to high minded judicial principle.


> Auditors are hired by the company being audited

Not necessarily; regulatory bodies, particularly tax authorities, can and do impose auditors upon companies.


Because those researchers become a potential data leak. We all know that "anonymized" data isn't actually anonymous. Do you, as the user, really want people poking around your private data "for research purposes"? Where there are basically no consequences if they mess up and leak your data?

I chose to give my data to the company. I didn't choose to give it to some unrelated third party.


I guess one point of confusion is exactly what data is shared, because I understood it to be general access to things that are already publicly available.

Furthermore, X offers paid access to the same data through their enterprise API program, so you're already giving access to unrelated third parties. Is there a significant distinction between the data that researchers could access and what's available through enterprise API?


There is a big difference between auditors and "researchers". Researchers are just academics whose incentives are to publish things and make a name for themselves - possibly the worst group to give data access to.


It's stupid to force companies to accommodate researchers. If researchers want data then they can negotiate a paid license for it.


Not sure how much "It's stupid" adds to the conversation. GP made an argument.


Maybe it's stupid from your perspective. Nevertheless, nations have the right to put laws in place, and enterprises willing to provide goods and services ought to follow those rules.


And this is why the EU is stagnant and unable to innovate. These nations can do whatever they want but let's be honest about what's going on here. The law is stupid because it's forcing US tech companies to subsidize research boondoggles. They're providing bullshit jobs to useless academics who are incapable of doing any real work, and the final output will be some long reports that no one ever reads.


> And this is why the EU is stagnant and unable to innovate.

Can you help me understand how the EU is stagnant? Granted, they have lower economic growth than the US, but they're (mostly) not running large fiscal deficits.

And "unable to innovate" is quite simply untrue. DeepMind (you know, the people behind AlphaGo and AlphaFold) was a UK-based company purchased by Google. Spotify and Skype were also both relatively innovative.

If by innovative you mean highly valued in the stock market beyond what a rational person would pay, then yeah, Europe doesn't have as much "innovation". Now, if there were a single EU capital market (which honestly should be in London, despite the political complexities), then that might not be true.

Also worth noting that a lot of the US market is propped up by EU/EEA investors. Like, the Norwegian oil fund owns an appreciable amount of the US stock market. What would happen if all the European money was withdrawn from the US market? Nothing good for US "innovation".

And on the core point here, social media is now the public sphere, and as such is definitely worthy of investigation by academics. Like, if FB can do this (with much more personal data) then Twitter/X can do it. In fact, it would be super easy as they used to do it before Elon decided to attempt to monetise it badly.

Like, most studies of social media were performed on Twitter data, precisely because of this.


It's because X is denying researchers access to public data. Data which can be used to detect scams and illegal advertising. It's really a consumer protection fine, but this article explains it better.

https://www.reuters.com/sustainability/boards-policy-regulat...


why? what seems bizarre to me is that platforms of such widespread use and public interest can be bought and ruined by some random person


A lot of people seem to be forgetting that the Cambridge Analytica scandal started off with data that was supposed to be used for research projects at the University of Cambridge being exfiltrated for commercial political use [0].

That said, this is most likely a tit-for-tat by the EU against the Trump administration, because we live in a world where all countries (even the US) have now weaponized regulations for negotiating leverage.

Our red line in both the Biden admin as well as the current admin was the DSA. The EU's red line is not being included in any negotiation over the Russia-Ukraine Conflict. The US fights against the DSA by arguing about infringement on free speech. The EU then tries to fight back over market competition. And it goes on and on and on.

This is why a lot of businesses get antsy about trade wars.

[0] - https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana...


They could implement a similar system to what Facebook currently provides when doing research with platform data. I think they only allow access to carefully controlled data through a remote sandboxed environment.

I think Twitter is already providing access to this data through paid APIs too, so this is effectively subsidizing researcher access.


Probably, but as I mentioned before, the EU has been using the DSA as a negotiating tool against the US - just like we are using Free Speech absolutism and "censorship" as a tool against the EU in negotiations.

Unlike other major tech companies like Meta or Alphabet that fall under the DSA, X doesn't have a similar presence in the EU to give it a firewall. Alphabet has Poland on its side [0], Meta has Ireland on its side [1], Amazon has Luxembourg on its side [2], and Microsoft has Czechia on its side [7][8][9], and because of Musk's ties to the GOP, it becomes a useful political lever while not directly hurting individual EU states. If X somehow complies, some other issue will crop up against (eg.) Tesla despite the Gigafactory, because Brandenburg is a lost cause if you aren't affiliated with the AfD or BSW. It's the same reason X doesn't push back when India passes a diktat - Indian law holds corporate leaders criminally liable and X has a significant India presence [10].

In the same way, if you want to hold Germany by the balls you pressure Volkswagen [3], and if you want to pressure France [4] you target LVMH's cognac, scotch, and wine business [5].

This is a major reason why companies try to build GCCs abroad as well - being in the same room gives some leverage when negotiating regulations. Hence why Czechia, Finland, Luxembourg, and Greece pushed back against French attempts at cloud sovereignty [6] because OVHCloud only has a presence in France and Poland, but Amazon and Microsoft have large capital presences in the other 4.

[0] - https://www.gov.pl/web/primeminister/google-invests-billions...

[1] - https://www.euractiv.com/news/irish-privacy-regulator-picks-...

[2] - https://www.aboutamazon.eu/news/policy/amazon-leaders-meet-l...

[3] - https://www.ft.com/content/6ec91d4a-2f37-4a01-9132-6c7ae5b06...

[4] - https://videos.senat.fr/video.5409997_682ddabf64695.aides-au...

[5] - https://www.bloomberg.com/news/articles/2025-07-03/eu-fight-...

[6] - https://www.euractiv.com/news/eu-digital-ministers-push-agai...

[7] - https://nukib.gov.cz/en/infoservis-en/news/2276-nukib-and-mi...

[8] - https://news.microsoft.com/europe/2017/03/31/satya-nadella-v...

[9] - https://mpo.gov.cz/en/guidepost/for-the-media/press-releases...

[10] - https://www.glassdoor.com/Location/X-Bengaluru-Location-EI_I...


alephnerd, I have to flat out disagree with your grievances [0][1][2][3][4][5]. The more I read, the worse it gets. The fact that some people in a foreign country feel personally persecuted by the DSA and are willing to bully us around is not a good argument against it [1]. In fact, I think the American attitude of having "red lines" about this is quite frankly irrelevant to the bigger picture [2]. I think there are plenty of ego-syntonic justifications for why it's okay to take a different stance than us on our policies, but while there are plenty of sources, I don't think there is a lot of reasoned analysis [3]. I'm sure much of it is shaped by personal circumstances. But I admit, sweeping historical references can be interesting too [4]. As a Swede, I can tell you that not a single person I know cares about random companies in Czechia, Luxembourg, Germany or France getting pressured [5]. I'm not very familiar with it, but I'm sure Finland already regrets their previous stance on cloud-infra. Perceptions have fundamentally changed about the United States as an ally. As for GCAP and FCAS, they have different requirements and serve different purposes. What's your take on the next Gripen?

If you want to pressure Volkswagen, go ahead. Nobody cares. The fundamental flaw in your position is your implicit assumption about what we value or what motivates us. We're not Americans. I don't think America's "non-tariff barriers" are a valid concern. They are disingenuous rhetoric for domestic consumption. Heads would roll if there was ever an agreement with the US to lower our standards and open up local industries to competition from lower quality foreign importers due to geopolitical pressure. Pressure is not going to undo the DSA or the GDPR because they have broad support. As others have said, it is decades overdue. If Elon Musk is mad about having to follow the law, I'm sure he can find sympathy elsewhere. His sour grapes are not principled, they are about protecting his ego and finding others who do so for him.

Sorry for the bluntness, but I feel it is very much warranted.

[0] - https://news.ycombinator.com/item?id=46170027#46170683

[1] - https://news.ycombinator.com/item?id=46170027#46170823

[2] - ibid.

[3] - https://news.ycombinator.com/item?id=46170027#46171255

[4] - https://news.ycombinator.com/item?id=46170027#46174642

[5] - https://news.ycombinator.com/item?id=46170027#46175036


Of course no one cares about random companies in Czechia or France getting pressured; it's not meant to sway public opinion in Sweden, otherwise it would have been a waste of influence (money). I think alephnerd operates on a higher level of abstraction in his commentary, and you mistake this as him making specific validity claims about the policies. I think your grievances stem from this gap in abstraction.

For example, he might personally support the DSA/GDPR, but he says that the US generally views these as "non-tariff barriers" to US service companies[0] and doesn't bother evaluating the policies themselves. Essentially, he's saying that for the purposes of predicting how the US will react, it's sufficient to analyze how the US views them, and the actual policy details lose relevance in that context. He also shared a detail[0] about how the US placed their lobbyists as commissioners on GDPR, which is an interesting operational detail that argues against the broad-support argument you're making. Another question is whether there would still be broad support for some policy after it has been enacted and its adverse effects have been felt.

[0] - https://news.ycombinator.com/item?id=46170027#46174642


> For example, he might personally support the DSA/GDPR, but he says that the US generally views these as "non-tariff barriers" to US service companies[0] and doesn't bother evaluating the policies themselves. Essentially, he's saying that for the purposes of predicting how the US will react, it's sufficient to analyze how the US views them, and the actual policy details lose relevance in that context. He also shared a detail[0] about how the US placed their lobbyists as commissioners on GDPR, which is an interesting operational detail that argues against the broad-support argument you're making. Another question is whether there would still be broad support for some policy after it has been enacted and its adverse effects have been felt.

This.

> I think alephnerd operates on a higher level of abstraction in his commentary, and you mistake this as him making specific validity claims about the policies. I think your grievances stem from this gap in abstraction.

This (but does make me sound kind of pretentious). I started my career in Tech Policy (and considered a career in academia for a hot second) before pivoting to being a technical IC and climbing the ladder. I am responding as I would when I was a TF.

--------

I am a supporter of multilateralism and do think the EU was a net benefit, but the EU's approach to unanimity should have been reformed during the 2004-07 expansion, and the Eurozone should have been decoupled from the political goals of the EU then unified. I'd probably say I lean closer to reformist academics like Draghi and Garicano.


> Sorry for the bluntness, but I feel it is very much warranted

No worries. I think you misunderstood my post.

I used to work in the tech policy space, and I'm just bluntly explaining how we in the policymaking space view these discussions - especially with regards to negotiating with the EU.

> As a Swede, I can tell you that not a single person I know cares about random companies in Czechia, Luxembourg, Germany or France getting pressured

Well duh. You aren't the target for such an influence op. Leadership in (eg.) Czechia, Luxembourg, Germany or France are.

Much of the EU runs on unanimity, so all you need to do is pressure a single country and you have a veto.

This is what China has been doing with Sweden to a certain extent via Geely-owned Volvo Car Group and Polestar [0], and what we in the US have been doing with Ericsson [1][2][3]. Even the EU tries to use similar levers against the US [6].

To be brutally honest, this is how the game is played.

Most nations have now adopted the "elite-centric approach" to transnational negotiations [4], which makes it difficult for the EU, because the line between national sovereignty and the EU with regard to foreign and economic affairs is not well defined. If you are not a veto player [5], your opinion does not matter.

Once you understand Political Science basics, a lot of stuff starts making sense. And I went to a college where heads of state would visit on a biweekly basis, and which a large subset of leaders from Europe (and other regions) attended or recruit from.

> What's your take on the next Gripen?

DoA if it depends on a GE power plant - the Volvo engine is a licensed version of the GE F404, so the US has final say on any Saab Gripen exports.

[0] - https://www.theguardian.com/business/2025/dec/02/china-volvo...

[1] - https://broadbandbreakfast.com/ericsson-ceo-calls-for-increa...

[2] - https://www.fierce-network.com/wireless/ericsson-ceo-home-si...

[3] - https://www.wsj.com/articles/ericsson-emerges-as-5g-leader-a...

[4] - https://academic.oup.com/book/12848/chapter-abstract/1631276...

[5] - https://www.jstor.org/stable/j.ctt7rvv7

[6] - https://www.nytimes.com/2018/06/21/business/economy/europe-t...


The DSA is decades overdue. It's absurd that there hasn't been one. There's also a dozen non-EU countries that have one, and that number has been growing rapidly.

To call it a "negotiation tool" is like calling literally any import tax or tariff - of which hundreds of thousands existed and were entirely accepted as squarely in the Overton Window long before Trump took office - purely a "negotiation tool". Just because it's new doesn't make it one any more so than such import taxes which have been around for ages.


> There's also a dozen non-EU countries that have one, and that number has been growing rapidly

Not really. Most of them offer significant carve-outs for American BigTech companies, or their implementation has been stayed, or significant capex subsidies are provided to reduce their impact for American BigTechs considering FDI in those countries.

It has been a DNC-supported policy [0] as well to put pressure on countries that are even considering a digital services act. Heck, the Biden admin began the process of making a legal example out of Canada [1] as a warning shot to other countries considering such options.

> To call it a "negotiation tool" is like calling literally any import tax or tariff ... purely a "negotiation tool".

That is what import taxes and tariffs are when not clubbed with subsidies and formal sector-specific industrial policy, because the act of giving MFN status to certain nations is itself a negotiating tactic. Canada's backing down on a digital services tax is a good example of that [2]

The whole point of (eg.) giving the UK preferential market access to the US over the EU, and giving Japan and South Korea preferential market access to the US over China is because it is a lever we can use when negotiating. Heck, France and Germany have both constantly tried leveraging tariffs and import taxes as a negotiating tactic against the US under the Biden admin [3][4] (and of course earlier).

As I mentioned above, this has been a slow-rolling negotiation between the US and EU since 2019. We in the US have bipartisan support to oppose the DSA and DSA-equivalents abroad. It was a prominent stance in the Biden administration [0], and even Harris would have put a similar degree of pressure on the EU.

We have no obligation to give Europeans a red carpet, and you guys are not in a position to push back anyhow. The Chinese [5] and Russians have given similar ultimatums to the EU as well. What are you going to do? Sign an FTA with India and then face the same problem in 10 years with them?

You guys have fallen into the same trap that the Mughal and Qing Empires fell into in the 18th-19th centuries. Anyhow, we've unofficially signalled that we are leaving the responsibility of Europe's defenses to Europe by 2027 [6] - meaning member states have no choice but to end up buying American gear or completely capitulate to Russia on Ukraine.

[0] - https://www.finance.senate.gov/chairmans-news/-wyden-and-cra...

[1] - https://ustr.gov/about-us/policy-offices/press-office/press-...

[2] - https://www.canada.ca/en/department-finance/news/2025/06/can...

[3] - https://www.politico.eu/article/france-and-germany-find-grou...

[4] - https://www.institutmontaigne.org/en/expressions/real-reason...

[5] - https://www.scmp.com/news/china/diplomacy/article/3316875/ch...

[6] - https://www.reuters.com/business/aerospace-defense/us-sets-2...


You're still not explaining how the DSA is supposedly a negotiating tactic from the EU any more than you could say that about the GDPR. It's a new legal framework tackling a relatively new set of problems. If any of them get watered down because of deals with the US, then you could make that sort of claim.

> Anyhow, we've unofficially signalled that we are leaving the responsibility of Europe's defenses to Europe by 2027 [6] - meaning member states have no choice but to end up buying American gear or completely capitulate to Russia on Ukraine.

Or just buying from the existing European providers? Most American gear has a (sometimes better; cf. all the stuff even the US buys from European companies) European-based equivalent. The only major exception is the F-35, but at least one 6th-gen European jet is in the works, and unless you're fighting the US, a 5th-gen stealth fighter isn't really that needed. European manufacturers need to increase output, and they have been working on it and have already done so quite a lot.


> Or just buying from the existing European providers? Most American gear has a (sometimes better, cf. all the stuff even the US buys from European companies) European based equivalent.

That might happen over the long term (I still have doubts, given that whenever a joint EU project is formed between two countries with vendors, it inevitably ends up collapsing due to domestic political considerations, such as the European MBT and FCAS - no leader wants to be the one who shuts down a factory with 1200 high-paying unionized jobs for the greater good), but it cannot happen in the 1-year timeframe given.

The reality is, if we the US make a deal with Russia over the Russian invasion of Ukraine in the next 12 months, the EU will have no choice but to accept it if you do not put boots on the ground and if you do not expropriate Russian government assets in the EU. But your leadership class has rejected [2] both [3].

> European manufacturers need to increase output, and they have been working on it and have done so quite a lot already.

Not enough for the 1-year time frame needed.

> how the DSA is supposedly a negotiating tactic from the EU any more than you could say that about GDPR

We view the DSA as a non-tariff barrier to American services companies. This is both a Trump admin view [0] as well as a Biden-era admin view [1].

We held similarly negative views about the GDPR until Ireland, Czechia, Poland, and Luxembourg accommodated us by hiring our lobbyists as their commissioners.

And this is why every single pan-EU project fails - every major country like the US (previously listed) and China [4][5] cultivated economic and political ties with members that act as vetos in decisions that have a unanimity requirements.

This is why I gave the comparison to the Qing and Mughal Empire - the English, French, and other European nations broke both empires by leveraging one-sided economic deals with subnational units (eg. the Bengal Subah in the Mughal Empire and the unequal treaties in the Qing Empire), which slowly gnawed away at unity.

We in the US, China, Russia, India, and others are starting to do the same to you - not out of explicit strategy, but due to the return of multipolarity and most European states' failure to recover from the Eurozone crisis.

[0] - https://www.ft.com/content/3f67b6ca-7259-4612-8e51-12b497128...

[1] - https://www.finance.senate.gov/chairmans-news/-wyden-and-cra...

[2] - https://tvn24.pl/polska/szczyt-w-paryzu-donald-tusk-przed-wy...

[3] - https://www.ft.com/content/616c79ee-34de-425a-865e-e94ba10be...

[4] - http://en.cppcc.gov.cn/2025-11/13/c_1140641.htm

[5] - https://english.www.gov.cn/news/202405/10/content_WS663d3b83...


> whenever a joint EU project is formed between two countries with vendors, it inevitably ends up collapsing due to domestic political considerations, such as the European MBT and FCAS - no leader wants to be the one who shuts down a factory with 1200 high-paying unionized jobs for the greater good

Eurofighter Typhoon and before that the Panavia Tornado. That lineage's next up is the GCAP 6th gen plane.

Horizon/Orizzonte, and after that the FREMM (which is so good even the US is buying it). In general, Italian/French naval cooperation is very strong.

The whole of MBDA and hell even Airbus were created for inter-country cooperation.

There are plenty of successful examples on which to build on, as well as failures from which to learn. But again, today very few military things cannot be sourced from a European supplier. BAE, Leonardo, Dassault, Thales, Rheinmetall, KNDS, Saab, Fincantieri, Naval Group, Indra, Airbus, MBDA etc. are world leaders in their respective fields.

> The reality is, if we the US make a deal with Russia over the Russian invasion of Ukraine in the next 12 months, the EU will have no choice but to accept it if you do not put boots on the ground and if you do not expropriate Russian government assets in the EU

No? The US can sign whatever bootlicking deal it wants with Russia, but what actually happens is up to Ukraine. The EU will continue backing Ukraine. Boots on the ground are highly unlikely, but expropriation of Russian assets is quite probable (opposition isn't massive, and as time goes on, it will only wither).

> We view the DSA as a non-tariff barrier to American services companies. This is both a Trump admin view [0] as well as a Biden-era admin view [1].

Cool, nobody cares. The US has put in place sufficient actual tariffs that it cannot scream "unfair". EU leaders will try to negotiate whatever they can to lower the short-term economic damage, but the long-term damage is done. The US is not a reliable partner, in trade or anything else, and there's no going back on that.

Regarding your Mughal and Qing comparisons... Damn, where do I even start? EU isn't a country, so the comparison is off from the start.


> The EU will continue backing Ukraine

How? Ukraine uses American intel for targeting, a significant amount of American munitions either bought directly from the US or indirectly by member states, and more critically, we in the US can force Ukraine to the table by preventing access to these systems.

> but exploration of Russian assets is quite probable (opposition isn't massive...

How? Belgium has vetoed expropriating Russian assets [0] because the ECB rejected providing a backstop. And Hungary has vetoed the utilization of Eurobonds [1]

If EU member states cannot expropriate Russian assets nor provide boots on the ground in Ukraine nor provide munitions and intel to replace American offerings in the next 1 year, what else is there that the EU can do?

On top of that, we've given the 2027 deadline for NATO, so now what should the EU prioritize?

> That lineage's next up is the GCAP 6th gen plane

Which isn't really an EU project - it's a Leonardo SA - Mitsubishi project as Leonardo is dual British-Italian. And that's my point. No EU joint defense project succeeds because inevitably individual states in the EU protect their champions

> The US is not a reliable trade or anything partner, and there's no going back on that.

Yep. And who else is there? The Chinese gave the exact same ultimatum as the US to European leadership, and so are the Indians as part of the FTA negotiation.

And we can always put the squeeze on Volkswagen, Mercedes-Benz, and LVMH and make both Germany and France squeal [2] and blunt any regulations coming out of the EU as a result - just like the China [3] and India [4].

> are world leaders in their respective fields

They absolutely are in R&D and IP, but their production will not scale out until 2029-35, at which point it would be too late.

[0] - https://www.bloomberg.com/news/articles/2025-12-03/belgium-r...

[1] - https://www.politico.eu/article/hungary-shoots-down-eurobond...

[2] - https://www.bloomberg.com/news/articles/2025-07-03/eu-fight-...

[3] - https://www.reuters.com/breakingviews/china-eu-trade-spats-n...

[4] - https://www.reuters.com/business/autos-transportation/volksw...


> Ukraine uses American intel for targeting

They have already been cut off. But if you think that Ukraine would have been flying blind all this time without US targeting, I don't know what to tell you.

> Which isn't really an EU project - it's a Leonardo SA - Mitsubishi project as Leonardo is dual British-Italian

No, Leonardo is Italian with a significant presence in the UK. But in any case the British component is provided by BAE Systems (which also participates heavily in the F-35). And yes, it's not an EU project; it's a project in which European countries and companies are taking part. Does that change anything?

> Which isn't really an EU project - it's a Leonardo SA - Mitsubishi project as Leonardo is dual British-Italian. And that's my point

> No EU joint defense project succeeds because inevitably individual states in the EU protect their champions

Do I need to list the big successes again? This is categorically not true.

> How? Belgium has vetoed expropriating Russian assets [0] because the ECB rejected providing a backstop. And Hungary has vetoed the utilization of Eurobonds [1]

Belgium can potentially be convinced, and with any luck Orban will be heading to prison next year, so Hungary wouldn't be vetoing Eurobonds.

> The Chinese gave the exact same ultimatum as the US to European leadership, and so are the Indians as part of the FTA negotiation.

What ultimatum? To drop DSA? Source?

> They absolutely are in R&D and IP, but their production will not scale out until 2029-35, at which point it would be too late.

Production of what? This is so industry- and company-specific that I struggle to take you seriously when you just throw around random years like that for everything. And in any case, one of the major weapons of this war is drones, for which manufacturing is mostly local in Ukraine. There are a million other things that go into a war machine, but pretending that the second the US cuts supplies Ukraine has to surrender is disingenuous.


> Belgium can be convinced potentially

They cannot. The Belgian government categorically rejected expropriation 3 days ago because the ECB rejected providing any funding, and Euroclear has announced it will fight the EU in Belgian court, with Belgian government backing [4], if any steps are taken to do so [2], so those funds would be frozen for years anyhow.

You aren't even reading any of my citations.

> Orban would be heading to prison next year, so Hungary wouldn't be vetoing Eurobonds

We still have Slovakia [3].

> What ultimatum? To drop DSA? Source

Over other regulations like CBAM [0]. The same way the US is playing hardball over the DSA, China and India are playing hardball over CBAM.

> Leonardo is Italian with significant presence in the UK

Yep, and as a result needs to continue to follow UK specific regulations and export controls [1], but being a single overarching conglomerate makes it significantly easier to manage the GCAP project, versus FCAS which became a Renault-Airbus spat which turned into a France-Germany spat.

> but pretending that the second US cuts supplies Ukraine has to surrender is disingenuous.

EU leadership has admitted this fact [5] and even best case projections [6] show it is a Herculean task in the next 1 year.

[0] - https://asia.nikkei.com/economy/trade/india-and-china-make-t...

[1] - https://www.leonardo.com/en/suppliers/supplier-portal/helico...

[2] - https://www.lemonde.fr/en/economy/article/2025/11/15/eurocle...

[3] - https://www.bloomberg.com/news/articles/2025-11-08/slovakia-...

[4] - https://www.lemonde.fr/en/international/article/2025/12/06/b...

[5] - https://www.politico.com/news/2025/09/25/kaja-kallas-intervi...

[6] - https://www.bruegel.org/analysis/defending-europe-without-us...


> Yep, and as a result needs to continue to follow UK specific regulations and export controls [1], but being a single overarching conglomerate makes it significantly easier to manage the GCAP project, versus FCAS which became a Renault-Airbus spat which turned into a France-Germany spat.

I have a hard time with you: you sound extremely confident in your opinions, provide sources and everything, and then make massive errors, like saying no common European military projects work (after having been given a list of the big hits), confusing what Leonardo is and who is working on GCAP, and now confusing Renault (a car manufacturer that last made planes a century ago, and that recently said it would look into making drones at underused factories) with Dassault Aviation.

To top it off, you cite sources that don't support your claims.

> Yep, and as a result needs to continue to follow UK specific regulations and export controls [1],

And you cite a source that merely says "Requirement to rate each part number being exported from the UK in accordance with the UK Military Classification List;" (emphasis mine).

> Over other regulations like CBAM [0]. The same way the US is playing hard ball over the DSA, China+India are playing hard ball over CBAM.

"Playing hardball" is not an ultimatum. And your source doesn't even support your "hard ball" claim; it says India tried pushing back and was refused by the EU.


> To top it off, you cite sources that don't support your claims.

I may have made one mistake in that citation, but you clearly haven't read the others. And you aren't citing anything yourself.

> I have a hard time with you

The feeling is mutual.

Answer my questions of how or it's just whataboutism.


How what? How the EU will support Ukraine? The same way it currently does, and if things get dire, there will be more pressure to find alternative revenue streams (like convincing Belgium).

Or how there have been no ultimatums, and how EU legislation isn't a negotiating tactic?


It's not "any company", it's exceptionally large platforms that can give insight into large societal questions and have enough influence to sway people's opinions. The data is technically public already, and researchers could scrape it, but investigations need to be possible to ensure the platforms aren't being used to intentionally steer people's opinions in a specific direction, since it seems they're unable to self-regulate.


But governments themselves can steer people's opinions just fine? Can I get access to my politicians' emails "for research purposes"?


> Can I get access to my politicians' emails "for research purposes"?

In the US, that's called a FOIA request. It could include their personal devices if they use them for work communication, and it's not limited to research purposes.


No, they cannot. And yes, in some countries you can request that if you have a reason for it.


Are those emails already public?


I'm wondering this as well. Buying 40% of global production just sounds like too much. What kind of user counts would they need for that much compute to pay off? Billions of people? What are the chances they could actually get that many users and charge them money? Zero?
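As a rough sanity check, with entirely made-up placeholder numbers (the capex figure, amortization period, and subscription price below are all hypothetical assumptions, not figures from any source), the arithmetic looks something like:

```python
# Back-of-envelope: how many paying subscribers would be needed just to
# cover an amortized compute build-out. All inputs are hypothetical
# placeholders; this ignores power, staff, margins, and every other cost.

def required_subscribers(capex_usd, amortization_years, monthly_price_usd):
    """Subscribers needed for subscription revenue to match amortized capex."""
    annual_cost = capex_usd / amortization_years
    annual_revenue_per_user = monthly_price_usd * 12
    return annual_cost / annual_revenue_per_user

# e.g. a hypothetical $500B build-out, amortized over 5 years, at $20/month:
users = required_subscribers(500e9, 5, 20)
print(f"{users / 1e6:.0f} million paying subscribers")  # ~417 million
```

Even with generous assumptions, the break-even numbers land in the hundreds of millions of paying users before any other costs are counted.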

