MANPADS are designed for use against small close air support (CAS) aircraft. Attacking large transport aircraft effectively requires a considerably larger air defense system. That also assumes you can move a MANPADS within range; the US already controls a large military airfield on Greenland.
Ironically, the US military has historically had more personnel deployed in Greenland than Denmark has. The US has continuously operated a military base at Thule for the better part of a century.
These kinds of joint exercises are pretty common and largely symbolic.
(The wikipedia page about this contains blatant partisan propaganda. Gross.)
Even among engineering fields, those that routinely handle diverse and messy unit systems (e.g. chemical engineering) are relatively uncommon. If you work in one of these domains, there is a practiced discipline for detecting unit conversion mistakes. You can do it in your head well enough to notice when something seems off, but it requires encyclopedic knowledge that the average person is unlikely to have.
A common form of this is a press release that suggests a prototype process can scale up to solve some planetary problem. In many cases you can quickly estimate that planetary scale would require some part of the upstream inputs to be orders of magnitude larger than exists or is feasible. The media doesn't notice this part and runs with the "save the planet" story.
This is the industrial chemistry version of the "in mice" press releases in medicine. It is an analogue to the Gell-Mann amnesia effect.
They are likely referring to the scope of fine-grained specialization and compile-time codegen that is possible in modern C++ via template metaprogramming. Some types of complex optimizations common in C++ are not really expressible in Rust because Rust's generics and compile-time facilities are significantly more limited.
As with C, there is nothing preventing anyone from writing all of that generated code by hand. It is just far more work and much less maintainable than e.g. using C++20. In practice, few people have the time or patience to generate this code manually so it doesn't get written.
Effective optimization at scale is difficult without strong metaprogramming capabilities. This is an area of real strength for C++ compared to other systems languages.
Again, can you provide an example or two? It's hard to agree or disagree without an example.
I think all of C++'s wild template stuff can be done via proc macros. E.g., in Rust you can add #[derive(Serialize, Deserialize)] to get a highly performant JSON parser & serializer. And that's just lovely. But I might be wrong? And maybe it's ugly? It's hard to tell without real examples.
Specialization isn't stable in Rust, but it is possible with C++ templates. It's used in the standard library for performance reasons. But it's not clear if it'll ever land for Rust users.
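To make that concrete (a simplified sketch of the stdlib-style pattern; all the names here are mine, not from any real library): dispatch a copy routine on a type trait via partial specialization, so trivially copyable types collapse to one memcpy while everything else gets the generic loop.

    #include <cstddef>
    #include <cstring>
    #include <type_traits>

    // Generic fallback: element-by-element copy.
    template <typename T, typename = void>
    struct Copier {
        static void run(const T* src, T* dst, std::size_t n) {
            for (std::size_t i = 0; i < n; ++i) dst[i] = src[i];
        }
    };

    // Partial specialization: trivially copyable types become one memcpy.
    template <typename T>
    struct Copier<T, std::enable_if_t<std::is_trivially_copyable_v<T>>> {
        static void run(const T* src, T* dst, std::size_t n) {
            std::memcpy(dst, src, n * sizeof(T));
        }
    };

Stable Rust gets partway there with trait bounds, but the overlapping-impl part (a blanket generic impl plus a more specific override) is exactly what's gated behind the unstable specialization feature.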
> As with C, there is nothing preventing anyone from writing all of that generated code by hand. It is just far more work and much less maintainable than e.g. using C++20.
It's arguably still less elegant, but compile-time codegen for specialisation is part of the language (or the build system?) via build.rs & macros. serde makes heavy use of this to generate its serialisation/deserialisation code.
People's expectations are not constrained by the license. They are free to exercise a sense of entitlement beyond the terms of the contract and empirically they often do. The license does not prohibit them from engaging with the authors or maintainers for any reason whatsoever, including requesting free labor.
You could perhaps add a clause in the license that restricts this behavior but then it would no longer be FOSS.
They are free to have a sense of entitlement or to try to engage with the project maintainers/owners, but nothing obligates the maintainers to reciprocate anything at all.
The main performance difference between Rust, C, and C++ is the level of effort required to achieve a given level of performance. The differences in effort between these languages vary with both the type of code and the context.
It is an argument about economics. I can write C that is as fast as C++. This requires many times more code that takes longer to write and longer to debug. While the results may be the same, I get far better performance from C++ per unit cost. Budgets of time and money ultimately determine the relative performance of software that actually ships, not the choice of language per se.
I've done parallel C++ and Rust implementations of code. At least for the kind of performance-engineered software I write, the "unit cost of performance" in Rust is much better than C but still worse than C++. These relative costs depend on the kind of software you write.
I like this post. It is well-balanced. Unfortunately, we don't see enough of this in discussions of Rust vs $lang. Can you share a specific example of where the "unit cost of performance" in Rust is worse than C++?
I generally agree with your take, but I don't think C is in the same league as Rust or C++. C has absolutely terrible expressivity; you can't even have proper generic data structures. And something like small string optimization, which is in standard C++, is basically impossible in C. It's not an effort question, it's a question of "are you even writing code, or assembly".
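For anyone who hasn't seen it, a rough illustration of the layout trick behind small string optimization (purely a sketch with all management code omitted; real implementations pack this far more tightly):

    #include <cstddef>

    // Illustrative SSO layout: short strings live inline in the object
    // itself, so they never touch the heap.
    struct SsoString {
        static constexpr std::size_t kInlineCap = 15;
        struct Heap { char* ptr; std::size_t cap; };
        union {
            char inline_buf[kInlineCap + 1];  // active when size <= kInlineCap
            Heap heap;                        // active otherwise
        };
        std::size_t size;
        bool        on_heap;

        const char* data() const { return on_heap ? heap.ptr : inline_buf; }
    };

The layout itself is expressible in C; what C can't express is hiding it behind a value-semantic type (constructors, destructor, copy/move) so that callers never have to know which representation is active.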
Yes, it is the difference between "in theory" and "in practice". In practice, almost no one would write the C required to keep up with the expressiveness of modern C++. The difference in effort is too large to be worth even considering. It is why I stopped using C for most things.
There is a similar argument around using "unsafe" in Rust. You need to use a lot of it in some cases to maintain performance parity with C++. Achievable in theory but a code base written in this way is probably going to be a poor experience for maintainers.
Each of these languages has a "happy path" of applications where differences in expressivity will not have a material impact on the software produced. C has a tiny "happy path" compared to the other two.
C++ features exist for a reason but it may not be a reason that is applicable to their use case. For example, C++ has a lot of features/complexity that are there primarily to support low-level I/O intensive code even though almost no one writes I/O intensive code.
I don't see why C++ would be materially better than Zig for this particular use case.
The long-term view of LIDAR was not so much that it was expensive, though it was at the time. The issue is that it is susceptible to interference if everyone is using LIDAR for everything all the time and it is vulnerable to spoofing/jamming by bad actors.
For better or worse, passive optical is much more robust against these types of risks. This doesn't matter much when LIDAR is relatively rare but that can't be assumed to remain the case forever.
That doesn't mean they're failing because of interfering lidar, though. If it's something like failing because the road is blocked, it makes sense they'd fail together. Assuming they're on the same OS, why would one know how to handle that situation and another not?
I am just some schmoe, but optics alone can be easily spoofed, as any fan of Wile E. Coyote has known for decades. [0]
What's crazy to me is that anyone would think that anything short of ASI could take image-based world understanding to true FSD. Tesla tried to replicate human response, ~"because humans only have eyes", but largely without even stereoscopic vision, ffs.
But optical illusions are much less of an issue because humans understand them and also suffer from them. That makes them easier to detect, easier to debug, and much less scary to the average driver.
Sure, someone can put up a wall painted to look like a road, but we have about a century of experience showing that people generally won't do that. And if they do, it's easy to understand why it was an issue, and both fixing the issue (removing the mural) and punishing any malicious attempt at doing this would be swift
> and punishing any malicious attempt at doing this would be swift
Is this a joke? Graffiti is now punishable and enforced by whom exactly? Who decides what constitutes an illegal image? How do you catch them? What if vision-only FSD sees a city-sanctioned brick building's mural as an actual sunset?
So you agree that all we need is AGI and human-equal sensors for Tesla-style FSD, but wait... plus some "swift" enforcement force for illegal murals? I love this. I have had health issues recently, and I have not laughed this hard for a while. Thank you.
Hell, at the last "Tesla AI Day," Musk himself said ~"FSD basically requires AGI" - so he is well aware.
Intentionally trying to create traffic accidents is illegal. This isn't an FSD-thing. If you try to intentionally get humans to crash their cars you are going to get into trouble. I don't see how this suddenly becomes OK when done to competent FSD (not that I'd count Tesla among them)
If I understand your argument correctly, then posting an incorrect sign, like a wrong-way sign on a highway on-ramp, would be illegal? That sounds correct.
But what if your city hired you to paint a sunset mural on a wall, and then a vision-only system killed a family of four by driving into it, during some "edge case" lighting situation?
I would like to think that we would apply "security is an onion" to our physical safety as well. Stereo vision + lidar + radar + ultrasonic? Would that not be the least that we could do as technologists?
That was Autopilot, not FSD. Autopilot is a simple ADAS system similar to Toyota Safety Sense or all the other garbage ADAS systems from Honda, Kia, Toyota, GM, etc. FSD passed this test with flying colors.
Everyone uses cellphones that transmit on the same frequency, and they don't seem to cause interference. Once enough lidar enters real-world use, there will be regulation to make them work with each other.
Completely different problem domains. A mobile phone is interacting with a fixed point (i.e. cell tower) that coordinates and manages traffic across cell phones to minimize interference. LIDAR is like wifi, a commons that can be polluted at will by arbitrary actors.
LIDAR has much more in common with ordinary radar (it is in the name, after all) and is similarly susceptible to interference.
No, LIDAR is relatively trivial to render immune to interference from other LIDARs. Look at how dozens of GPS satellites share the same frequency without stepping on each others' toes, for instance: https://en.wikipedia.org/wiki/Gold_code
Like GPS, LIDAR can be jammed or spoofed by intentional actors, of course. That part's not so easy to hand-wave away, but someone who wants to screw with road traffic will certainly have easier ways to do it.
> No, LIDAR is relatively trivial to render immune to interference from other LIDARs.
For rotating pulsed lidar, this really isn't the case. It's possible, but certainly not trivial. The challenge is that eye safety is determined by the energy in a pulse, but detection range is determined by the power of a pulse, driving towards minimum pulse width for a given lens size. This width is under 10 ns, and leaning closer to 2-4 ns for more modern systems. With laser diode currents in the tens of amps range, producing a gaussian pulse this width is already a challenging inductance-minimization problem -- think GaN, thin PCBs, wire-bonded LDs etc to get loop area down. And an inductance-limited pulse is inherently gaussian. To play any anti-interference games means being able to modulate the pulse more finely than that, without increasing the effective pulse width enough to make you uncompetitive on range. This is hard.
I think we may have had this discussion before, but from an engineering perspective, I don't buy it. For coding, the number of pulses per second is what matters, not power.
Large numbers of bits per unit of time are what it takes to make two sequences correlate (or not), and large numbers of bits per unit of time are not a problem in this business. Signal power limits imposed by eye safety requirements will kick in long after noise limits imposed by Shannon-Hartley.
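To spell out the Shannon-Hartley reference: capacity goes as C = B·log2(1 + S/N), and with nanosecond-scale pulses B is roughly a GHz, so (on paper at least; those are my order-of-magnitude numbers, and detector noise is its own subject) there is enormous headroom for coding gain at eye-safe power levels.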
> For coding, the number of pulses per second is what matters, not power.
I haven't seen a system that does anti-interference across multiple pulses, as opposed to by shaping individual pulses. (I've seen systems that introduce random jitter across multiple pulses to de-correlate interference, but that's a bit different.) The issue is you really do get a hell of a lot of data out of a single pulse, and for interesting objects (thin poles, power lines) there's not a lot of correlation between adjacent pulses -- you can't always assume properties across multiple pulses without having to throw away data from single data-carrying pulses.
Edit: Another way of saying this -- your revisit rate to a specific point of interference is around 20 Hz. That's just not a lot of bits per unit time.
> Signal power limits imposed by eye safety requirements will kick in long after noise limits imposed by Shannon-Hartley.
I can believe this is true for FMCW lidar, but I know it to be untrue for pulsed lidar. Perhaps we're discussing different systems?
> I haven't seen a system that does anti-interference across multiple pulses...
My naive assumption would be that they would do exactly that. In fact, offhand, I don't know how else I'd go about it. When emitting pulses every X ns, I might envision using a long LFSR whose low-order bit specifies whether to skip the next X-ns time slot or not. Every car gets its own lidar seed, just like it gets its own key fob seed now.
Then, when listening for returned pulses, the receiver would correlate against the same sequence. Echoes from fixed objects would be represented by a constant lag, while those from moving ones would be "Doppler-shifted" in time and show up at varying lags.
So yes, you'd lose some energy due to dead time that you'd otherwise fill with a constant pulse train, but the processing gain from the correlator would presumably make up for that and then some. Why wouldn't existing systems do something like this?
I've never designed a lidar, but I can't believe there's anything to the multiple-access problem that wasn't already well-known in the 1970s. What else needs to be invented, other than implementation and integration details?
Edit re: the 20 Hz constraint, that's one area where our assumptions probably diverge. The output might be 20 Hz but internally, why wouldn't you be working with millions of individual pulses per frame? Lasers are freaking fast and so are photodiodes, given synchronous detection.
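FWIW, a toy simulation of exactly that scheme (made-up taps, seeds, and slot counts; nothing like real lidar DSP) does recover the echo lag through an interferer:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Toy model: a 16-bit LFSR gates which time slots carry a pulse; the
    // receiver correlates against its own sequence to find the echo lag.
    struct Lfsr16 {
        std::uint16_t s;
        int next() {                      // one pseudorandom bit per slot
            int out = s & 1;
            // taps for the maximal-length x^16 + x^14 + x^13 + x^11 + 1 LFSR
            int fb = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1;
            s = static_cast<std::uint16_t>((s >> 1) | (fb << 15));
            return out;
        }
    };

    int main() {
        const int n_slots = 4096, true_lag = 137;  // arbitrary toy numbers
        Lfsr16 mine{0xACE1};                       // "per-car seed"
        std::vector<int> sent(n_slots);
        for (int i = 0; i < n_slots; ++i) sent[i] = mine.next();

        // Received signal: our own echo delayed by true_lag, plus an
        // interfering lidar running the same design with a different seed.
        Lfsr16 other{0xBEEF};
        std::vector<int> rx(n_slots, 0);
        for (int i = 0; i < n_slots; ++i) {
            if (i >= true_lag) rx[i] += sent[i - true_lag];
            rx[i] += other.next();
        }

        // Correlate at each candidate lag; the true echo stands out while
        // the uncorrelated interferer averages away.
        int best_lag = 0, best = -1;
        for (int lag = 0; lag < 512; ++lag) {
            int c = 0;
            for (int i = lag; i < n_slots; ++i) c += rx[i] * sent[i - lag];
            if (c > best) { best = c; best_lag = lag; }
        }
        std::printf("estimated lag: %d (expected %d)\n", best_lag, true_lag);
    }

One wrinkle the toy glosses over: two LFSRs with the same taps are just phase shifts of a single m-sequence, so a real system would want a proper code family (e.g. the Gold codes linked upthread) rather than per-car seeds alone.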
I suggest looking at a rotating lidar with an infrared scope... it's super, super informative and a lot of fun. Worth just camping out in SF or Mountain View and looking at all the different patterns on the wall as different lidar-equipped cars drive by.
A typical long-range rotating pulsed lidar rotates at ~20 Hz, has 32 - 64 vertical channels (with spacing not necessarily uniform), and fires each channel's laser at around 20 kHz. This gives vertical channel spacing on the order of 1°, and horizontal channel spacing on the order of 0.3°. The perception folks assure me that having horizontal data orders of magnitude denser than vertical data doesn't really add value to them; and going to a higher pulse rate runs into the issue of self-interference between channels, which is much more annoying to deal with than interference from other lidars.
If you want to take that 20 kHz to 200 kHz, you first run into the fact that there can now be 10 pulses in flight at the same time... and that you're trying to detect low-photon-count events with an APD or SPAD outputting nanoamps within a few inches of a laser driver putting out nanosecond pulses at tens of amps. That's a lot of additional noise! And even then, you have a 0.03° spacing between pulses, which means that successive pulses don't even overlap at max range with a typical spot diameter of 1" - 2" -- so depending on the surfaces you're hitting and their continuity as seen by you, you still can't really say anything about the expected time alignment of adjacent pulses. Taking this to 2 MHz would let you guarantee some overlap for a handful of pulses, but only some... and that's still not a lot of samples to correlate. And of course your laser power usage and thermal challenges just went up two orders of magnitude...
It solves some rare edge cases where the destruction of the moved-from object must be deferred -- the memory is still live even if the object is semantically dead. Non-destructive moves separate those concerns.
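Concretely (a toy type, not anything from a real codebase): after a C++ move the source is still a live object whose destructor will run, so the move constructor has to leave it in a harmlessly destructible state.

    #include <cstddef>
    #include <utility>

    struct Buffer {
        char* data = nullptr;
        explicit Buffer(std::size_t n) : data(new char[n]) {}
        Buffer(Buffer&& other) noexcept : data(other.data) {
            other.data = nullptr;  // source stays a valid, destructible object
        }
        ~Buffer() { delete[] data; }  // runs for moved-from objects too
    };

    void demo() {
        Buffer a(64);
        Buffer b(std::move(a));
        // `a` is semantically dead but its storage and destructor are still
        // live; a destructive-move ("relocation") model would skip ~Buffer()
        // for `a` entirely.
    }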
There is a related concept of "relocatable" objects in C++ where the move is semantically destructive but the destructor is never called for the moved-from object.
C++ tries to accommodate a lot of rare cases that you really only see in low-level systems code. There are many features in C++ that seem fairly useless to most people (e.g. std::launder) but are indispensable when you come across the specific problem they were intended to solve.
As someone who has actually had to launder pointers before, I would characterize gremlins like std::launder as escape hatches to dig your way out of dilemmas specific to C++ that the language was responsible for burying you under in the first place.
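For those who haven't had the pleasure, the textbook C++17-era dilemma looks roughly like this sketch: storage is reused for a new object, and the const member stops the old pointer from legally reaching the replacement (C++20 relaxed this particular rule, as I understand it).

    #include <new>

    struct Widget { const int id; };  // the const member is what matters

    void demo() {
        alignas(Widget) unsigned char storage[sizeof(Widget)];
        Widget* w = new (storage) Widget{1};
        w->~Widget();
        new (storage) Widget{2};
        // Under C++17 rules, dereferencing `w` here would be UB; std::launder
        // yields a pointer that may validly reach the replacement object.
        Widget* fresh = std::launder(reinterpret_cast<Widget*>(storage));
        (void)fresh;
    }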
I've worked on similar systems for oil & gas that combined hyperspectral imaging and LIDAR. The analysis of data collected by drones was fully automated. It was at least as effective as humans at detecting anomalies (something which was thoroughly verified prior to adoption).
The more thorough coverage, potential issues being detected much earlier, and increased automation greatly reduced the total manpower required. Humans only came into the picture when the drones found a problem that needed mitigation. Humans have long been the bottleneck for finding operational risks and issues before they turn into a headline. The more humans you can remove from that loop the bigger the win.