
It’s amazing tech, it’s just a solution looking for a problem.

It feels a bit like the original Segway’s over-engineered solution versus cheap Chinese hoverboards, then the scooters and e-bikes that took over afterwards.

Why would I be paying all this money for this realistic telepresence when my shitbox HP laptop from Walmart has a perfectly serviceable webcam?



I used my VP extensively recently while working remotely. It's not glamorous, but I used screen sharing with a MacBook, which grants you a virtual ultrawide monitor.

Once you're already in VR, it's nice to not have to break out for a meeting, and that's where Personas fit in.

It's not a killer app carrying the product, it's a necessary feature making sure there's not a gap in workflow.


Ah, right! Because you can’t videoconference with the headset on.

Thank you! Now I get it!

So it’s sort of a stopgap solution before the ar glasses are small enough to do actual video calls without looking silly?


You're thinking of a world where people would still use a computer with a webcam pointed at their face while doing video calls. For me personally, I'm seeing a world where the headset is all that we need. So no, Persona is not a stopgap solution, it's an end in itself, and in its current state it's already pretty damn good.


Actually I'm thinking of a world where the masses accept an AR headset once it's as light as typical eye glasses. And before most people have these, the calls will be video. But I would be happy to be wrong!


I can imagine for certain niche use cases it really is the killer app though. Like couples in long distance relationships, certain kinds of consultants etc.


I think this explanation makes the situation sound even worse.

The vision pro’s overall productivity solution is inferior to existing, cheaper technology, and it has to be supplemented by a solution to a problem created by its own design.

Essentially you’re saying that after putting on a double headband device that wrecks my hair, gets me sweaty, strains my neck with weight, and fucks up my makeup, I now have to use a workaround fake avatar because the tech bros who made this product had to say “oh shit, if you have a headset on you can’t be on camera!”

For $3500 I can be in real reality and be surrounded by higher resolution professional monitors and just show my real self on camera instead.


I have one and don't really use it for working remotely, but it really shines for media consumption. However, I agree with your points around it being heavy and not for everyone. The device definitely needs to be lighter, and most people wanting to use VR for media consumption could likely just buy a cheaper VR headset.

I think overall it probably remains a niche category. I don't see it becoming as popular as smart watches or anything like that. I do hope that Apple continues to invest in it though as it is a really cool technology.


> For $3500 I can be in real reality and be surrounded by higher resolution professional monitors and just show my real self on camera instead.

Some people frequently want to do that sort of work while away from their desk.


Many use cases come to mind. If (retinal?) identities were private, encrypted, and “anonymized” in handshake:

web browsing without captchas, anubis, bot tests, etc. (“human only” internet, maybe like Berners-Lee’s “semantic web” idea [1][2])

Non “anonymized”:

non-jury court and arbitration appearances (with expansion of judges to clear backlogs [3])

medical checkups and social care (e.g. neurocognitive checkups for the elderly, social services check-ins, esp. for children, check-ins for the depressed or isolated needing off-work social interaction, etc.)

bureaucratic appointments (customer service by humans, DMV, building permits, licenses, etc.)

web browsing for routine tasks without logins (banks, email, etc)

[1] <https://www.newyorker.com/magazine/2025/10/06/tim-berners-le...> [2] <https://newtfire.org/courses/introDH/BrnrsLeeIntrnt-Lucas-Nw...> [3] <https://nysfocus.com/2025/05/30/uncap-justice-act-new-york-c...>


Let’s run down your use cases:

Human-only Internet: why choose this implementation over something simpler? Surely there’s a simpler way to prove you’re human that doesn’t involve 3D avatar construction on a head-worn display that screws up your hair and makeup. [1] E.g., an Apple Watch-like device can verify you have a real pulse and oxygen in your blood.

Court: solution is already in place, which is showing up to a physical courtroom. Clearing backlogs can be done without a technological solution, it’s more of a process and staffing problem. Moving the judges from a court to a home office doesn’t magically make them clear cases faster.

Medical checkups: phone selfie camera

Bureaucratic appointments: solution in place, physical building, or many of these offer virtual appointments already over a webcam.

Web browsing without logins: passkeys, FaceID, fingerprint

[1] yet another male-designed tech bro product that never considered the concerns of the majority of the population.


You raise fair points. I'd also prefer a simpler solution for a human-only internet, but nothing has really worked so far. Bloomberg issued secure cards with fingerprint pads that you held up to the monitor to retrieve credentials to their system, so maybe a simpler physical authenticator could work at scale. I'm not sure how secure a pulsometer would be, but hacking an Apple headset chip and a retinal pattern seem harder.

Court: disagree in part. More judges are needed to address the severe backlogs, but as an example NYS judges oppose expansion (see [3] from previous post). A lot of calendar time is spent appearing before judges around a city (they're not all in one area) for motion hearings and the like despite all documents being electronically submitted. Also, there are frequent reschedulings when one party can't physically appear. Some state judges allow teleconference, but a lot don't. Appellate and federal courts rarely.

Checkups and social services: some secure way of monitoring client interactions and outcomes is needed. In Los Angeles, the homeless services agency has been criticized by a federal judge for incompetence [1] and more than half of the child-prostitutes in a notorious corridor were found to be "missing" from the foster system [2]. Maybe headsets are not the best answer, but govt agencies and social service NGOs need to record evidence of their efforts for accountability.

[1] <https://www.latimes.com/california/story/2025-03-31/los-ange...> [2] <https://www.nytimes.com/2025/10/26/magazine/sex-trafficking-...>


I disagree, because it answers a pretty simple question: How to be present in a video call when you're using the headset.

To me it would be a shortcoming of the device if I couldn't show myself and the thing I'm working on at the same time.


You have to back up from that question. “How to be present in a video call” is already an answered question.

The “when you’re using the headset” part is the issue. Why are we using the headset? What are the benefits? Why am I making these tradeoffs like messing up my hair, putting a heavy device on my head, messing up my makeup, etc.

This is like saying “The Segway had advanced self-leveling to solve the problem of how to balance when you’re on an upright two wheel device”.

But why are you on an upright two wheel device? Why not just add a third wheel? Why not ride a bicycle? Why not ride a scooter?

The solution is really cool and technologically advanced but it doesn’t actually solve anything besides an artificially introduced problem.


Not really, because this misses the premise of why the device itself is useful.

VR/AR headsets are useful for working on and demonstrating many things that we've had to compromise to fit into a 2D paradigm. Being able to be present with that 3D model has clear advantages over using, for example, a mouse with a 2D equivalent or a 3D projection.

Having to justify how the 3rd dimension is useful is probably a conversation where one party is not engaging in good faith.

The Segway analogy is also pretty poor, considering how useful self-balancing mobility devices have proven to be, including those which only possess a single wheel.


These are nice words that don’t reflect reality.

By most accounts the Vision Pro hasn’t even cracked a million sales. And that’s the best productivity-focused headset on the market.

You can say that this is a really amazing paradigm shift, but if it were, people would be lining up to buy it.


You posit:

> Why would I be paying all this money for this realistic telepresence when my shitbox HP laptop from Walmart has a perfectly serviceable webcam?

I gave a pretty straightforward answer for why this feature would exist in this product. People on this forum sometimes ask legitimate questions.

It's pretty clear you weren't asking one; rather, you're seeking an opportunity to merely push some tired agenda, likely tied to some personal vendetta, and you're doing a pretty piss-poor job of it.


You say that me making you justify the third dimension is bad-faith arguing, but you never even attempted to justify it. You actually do have to justify it, because so many 3D technologies have been market duds: VR gaming sputtered into decline, 3D televisions died off, glasses-free 3D is nowhere to be found anymore after the 3DS and that crazy RED smartphone. You very much do have to justify that there's demand for this new paradigm.

> VR/AR headsets are useful for working on and demonstrating many things

What things?

> that we've had to compromise to fit into a 2D paradigm.

What compromises?

> Being able to be present with that 3D model has clear advantages over using, for example, a mouse with a 2D equivalent or a 3D projection.

What advantages?

I think if this was even a niche representation of the future we'd see specialized companies with 3D-oriented software like Autodesk jumping all over the Vision Pro specifically, but they seem to be nowhere to be found. All the key players in the industry besides Meta have basically bailed, including Microsoft and Google shutting down commercial/industrial solutions that had previously been touted as successful.

I have no vendetta here, I just think that full immersion VR was the wrong play for productivity and general computing. I think that the full immersion VR market is dying and that solutions like Meta Ray-Bans and VITURE glasses are way more palatable because they are way more “normal,” including the way they eschew these moonshot paradigm-shifting technologies that might actually work very well, but nobody asked for.

Nobody wants to be a 3D avatar and work inside a headset where your view of the outside world is desaturated by cameras because it’s cringe and weird.

As a side note I will also point out that if you use a Vision Pro with a MacBook to use the secondary screen functionality (required for writing code or running apps outside the App Store) you’re basically doing the exact same thing as VITURE glasses except you paid 10x more and your battery life sucks. And you can just join a standard conference call on your glasses and essentially look normal.


Why do we have video call meetings when people mostly just listen and the information is carried via audio?

Why do we have 4K monitors when 1920x1080 is perfectly fine for 99.999% of use cases?

If you look at the world through this lens called "serviceability" you'll think everything is a solution looking for a problem.


> when 1920x1080 is perfectly fine for 99.999% of use cases

A lot of people here work with text all day every day and we would rather work with text that looks like it came out of a laser printer than out of a fax machine.


Of all places, HN should not be the one to casually conflate resolution and DPI!


The comments implicitly assume they're talking about the same screen size, so 1920x1080 vs 4K is indeed a conversation about DPI.


I read their comment in the exact opposite way, and that your comment is exactly their point.


The conflation was that 1920x1080 automatically means poor clarity, so that's why 4K is needed (at the same size). I.e., no resolution is clear or unclear in and of itself, but that's how it gets discussed.

One person talks about a laptop, another about their big coding desktop monitor, a third about a TV they use. They can't agree on how much clarity 1080p gives, because the only thing quoted is resolution. This drives the assumption that everyone is talking about the same sizes and viewing distances, which is almost never the case (before the conversation even gets to the age-old debate of how much clarity is enough).

I'm sure if you ask the original commenter, they don't mean 1080p looks great for reading books at 34" just as much as GP wouldn't mean to compare screens of different sizes either.


Of course that's what I meant. It wouldn't make any sense otherwise.


But who's going to use such a tiny display that would make 1080p look good?


E.g. 1080p on a 15" laptop is still sharper than 4K on a 32" desktop monitor. People do work in both modalities, they talk about the one they use, and chaos ensues.
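The comparison is just pixel-density arithmetic. A quick sketch (assuming a 15.6" laptop diagonal, which is my guess, not stated above):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# 1080p on a 15.6" laptop panel
laptop = ppi(1920, 1080, 15.6)
# 4K on a 32" desktop monitor
desktop = ppi(3840, 2160, 32.0)

print(f'15.6" 1080p: {laptop:.0f} PPI, 32" 4K: {desktop:.0f} PPI')
```

This works out to roughly 141 PPI for the laptop versus roughly 138 PPI for the 32" 4K desktop, so the laptop panel is indeed (slightly) denser.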


And I am immensely unhappy with my 4K@32. 4K@27 is more tolerable... really miss the 5K@27 I had (other than it was a "smart" monitor which annoyed the hell out of me).


The implicit bit is that some of us also like to work with decently sized screens.


A 24" 1080p monitor is perfectly fine for working with text of any kind. I still use mine at home, even after a decade.

As others said, resolution is not everything. DPI and panel quality matter a lot.

A good lower-resolution panel is better than a lower-quality larger panel. Uniformity, backlight color, color rendering quality, DPI... all of them matter.

--

This comment has been written on a 28" 1440p monitor.


My theory is that people complaining about text on low-resolution displays are using Macs. Apple has seriously gimped text rendering on low-DPI displays, essentially just downscaling a high-resolution render of the screen rather than doing proper resolution-aware text hinting.

For some reason people then blame their old displays rather than Apple for this.


Makes sense.

I sometimes connect the same 24" monitor (an ASUS VZ249Q) to my M1 MacBook via USB to DP (so no intermediate electronics), and the display quality feels inferior to what I get under KDE, for example.

The same monitor allows working for hours without eye fatigue when driven from my Linux machine. I have written countless lines of code and LaTeX documents on that panel. It rivals the comfort of my HP EliteDisplay.


Yes we are! Macs don't play well with low dpi screens. However on high dpi screens they are better than anything else.


> However on high dpi screens they are better than anything else.

As a Mac user, I find this arguable. Much of the color correction comes from the fact that macOS ships ICC profiles for tons of monitors. OTOH, if the monitor already has accurate color rendering out of the box (e.g. Dell UltraSharp, HP EliteDisplay), Linux (esp. KDE) has very high display quality on HiDPI monitors, too.


Unless you are using a tiny 4K monitor (under ~9"), it's not going to be laser-print quality.


The comment you're replying to made use of a simile, which is a figure of speech using "like" or "as" that constructs a non-literal comparison for rhetorical effect.


A 21" 4K monitor is around the same resolution as a fax, so it was not really clear to me that it wasn't a literal comparison.


"A lot of people are in meetings all day, and we would rather look at something that looks like we're there in person than at a limited webcam."


This depends a lot on whether you really want to be in these meetings, and what you're supposed to do in them.

The first part is obvious. For the second part, if you're looking at slides and docs during the whole meeting, getting a super-high-fidelity view of all the other participants, who are (probably) also looking at the slides, doesn't help in any way.

I mean, Google Meet has a spotlight view exactly for this reason.


"We have this amazing revolutionary tech, and the only thing we can think of is sitting in meetings all day, working with Excel sheets, and answering emails"


> and the information is carried via audio?

Because it's not. Facial expressions and body language carry gigantic amounts of information.

So many misunderstandings arise when the channel is audio-only. E.g. if a majority of people in a meeting are uneasy with something, they can see it on each others' faces, realize they're not alone, and bring it up. When it's audio-only, everyone thinks they're the only one with concerns and so maybe it's not worth interrupting what they incorrectly assume to be the general consensus over audio.


These analogies don’t compare well. Your examples don’t demonstrate an extreme tradeoff like you get with the Vision Pro.

Why do we have video calls? Because a webcam costs $1-5 to put into a laptop and bandwidth is close enough to free.

Why do we have 4K monitors? Because they only cost a small amount more than 1080p monitors and make the image sharper with not a whole lot of downsides (you can even bump them down to 1080p if you have a difficult time driving the resolution). I paid $400 for my 4K 150Hz gaming monitor so going with 1080p high refresh rate VRR would have only saved me $200 or so.

Serviceability for purpose is a spectrum and the Vision Pro is at the wrong end of it.

For more than the price of three 4K OLED 144Hz monitors, you get to don a heavy headset that messes up your hair, makes you sweaty, screws up your makeup, and you get less resolution and workspace than the monitors. Your battery lasts an hour so it’s inferior to a laptop with an external portable monitor or two. It’s actually harder to fit into a backpack than a laptop plus portable monitors since it’s not flat.

Then you have to use some complicated proprietary technology [1] to make a 3D avatar of yourself to overcome the fact that you now have a giant headset on your head and look like an idiot if you were to go on camera.

You can’t do a bunch of PC stuff on it because it’s basically running iPadOS.

This is not the same as “why are we bothering with 4K?”

[1] What will you do if Apple starts charging money for this feature?


I actually think about this a lot, and I could argue both sides of it. On the one hand, you could look at your list of examples as obvious examples of modern innovation and improvement that enrich our lives. On the other, you could take it as a facetious list that proves GP's point, as one other commenter apparently already has.

I often think how stupid video call meetings are. Teams video calls are one of the few things that make every computer I own, including my M1 MBP, run the fans at full tilt. I've had my phone give me overheat warnings from showing the tile board of bored faces staring blankly at me. And yeah, honestly, it feels like a solution looking for a problem. I understand that it's not, and that some people are obsessed for various reasons (some more legitimate than others) with recreating the conference room vibe, but still.

And with monitors? This is a far more "spicy" take, but I think 1280x1024 is actually fine. Even 1024x768. Now, I have a 4K monitor at home, so don't get me wrong: I like my high DPI monitor.

But I think past 1024x768, the actual productivity gains from higher resolutions begins to rapidly dwindle. 1920x1080, especially in "small" displays (under 20 inches) can look pretty visually stunning. 4K is definitely nicer, but do we really need it?

I'm not trying to get existential with this, because what do we really "need"? But I think that, objectively, computing is divided into two very broad eras. The first era, ending around the mid 2000s, was marked by year-after-year innovation where 2-4 years brought new features that solved _real problems_, as in, features that gave users new qualitative capabilities. Think 24-bit color vs 8-bit color, or 64-bit vs 32-bit (or even 32-bit vs 16-bit). Having a webcam. Having 5+ hours of battery life on a laptop, with a real backlit AMLCD display. Having more than a few gigabytes of internal storage. Having a generic peripheral bus (USB/firewire). Having PCM audio. Having 3D hardware acceleration...

I'm not prepared to vigorously defend this thesis ;-) but it seems that at about 2005-ish, the PC space had reached most of these "core qualitative features". After that, everything became better, faster, quantitatively superior versions of the same thing.

And sometimes yeah, it can feel both like it's all gone to waste on ludicrously inefficient software (Teams...), and sometimes, like modern computing did become a solution in search of a problem, in order to keep selling new hardware and software.


> But I think past 1024x768, the actual productivity gains from higher resolutions begins to rapidly dwindle.

Idk man, I do like seeing multiple windows at once. Browser, terminal, ...


My only counterpoint to your resolution argument is that 1440p is where I'm happy, because of two words: real estate. Also 120Hz for sure. Above that, meh.

I edit video for a tech startup. High, high, high volume. I need 2-3 27"+ 1440p screens to really feel like I've got the desktop layout I need. I'm running an NLE (which ideally has 2 monitors on its own, but I can live on 1), Slack, several browser windows with HubSpot and Trello et al., system monitoring, maybe a DAW or Audacity, several drives/file windows open, a text editor for note taking, a PDF/email window with notes for an edit, terminal, the list goes on.

At home I can't live without my 3440x1440 ultrawide + 1440p second monitor for gaming and Discord + whatever else I've got going. It's ridiculous, but one monitor, especially 1080p, is so confining. I had this wonderful 900p Gateway I held on to until about 2 years ago. It was basically a TV screen, which was nice but just became unnecessary once I got yet another free 1080p IPS monitor from someone doing spring cleaning. I couldn't go back. It was so cramped!

This is a bit extreme, but our livestream computer has three monitors, plus technically a fourth: a 70" TV we use for multiview out of OBS.

I need space lol


4K monitors are better and more comfortable.

On the other hand, video calls are worse and less comfortable than audio calls.


I live half way across the world from my folks so I don’t see them often. I’d love something that gives me a greater sense of presence than a video call can give.


Do you believe that seeing a computer generated picture of them is more lifelike than an actual video of them talking to you live?


By firsthand reports of AVP users, it is. Apparently it feels like a real presence in your space, like hanging out in person, and their recollections of the conversations weren't of calls but of visits. The main downside is that there are so few other friend/family users, because it's prohibitively expensive, niche, and geeky; those that do these VR calls still do them infrequently, because it's a hassle to break out the device if you don't use it regularly, and uncomfortable to wear for too long, let alone that they typically need to coordinate calls in advance.

Still, if I were to have a long-distance relationship with a tolerant partner, or one of us traveled frequently or for long periods, I would be tempted to consider these so we could watch a show or movie and hang out despite the distance.


I always viewed the current generation of "cheap Chinese hoverboards" etc. as direct descendants of the Segway, and that Kamen and his believers weren't quite as ridiculous as we thought them at the time. They were just ahead of their time, expecting too much from too low a point on the technology curve.


They had the right idea but over-engineered the solution.

They could never cut the price down because of it. The knockoffs used much simpler ways to balance yourself, including just changing the form factor to something more conventional that doesn’t even need balance correction (scooters and e-bikes).


I'm curious about the practical application of these avatars in everyday life, in the real world, not the examples provided by the marketing department. At that price the Vision Pro still feels like a toy for wealthy people, or perhaps for CEOs of companies who can afford conferences in a virtual environment. But then, why exactly? The majority of the world tested video calls, conferences, and all sorts of other activities during the pandemic, like virtual crowds for TV programs (pretty sure British panel shows showed grids of people as a substitute for a studio audience). News services were inviting their guests via video call when Skype was still around.


yes and to a degree which i find particularly interesting. its never going to happen because of your example

i prefer working in my vp and see a possible world where vp makes my remote team collaborate as if we were in the office, from the comfort of the most ergonomic location in my house

it solves this problem and 0.0001% of people are dorks like me who try and say, "they did it" while the rest of the world keeps going to work as before

all of the tech problems were solvable. people simply dont want to put a thing on their face and i think thats unsolvable


I would not describe creating an experience that feels like you are in the room with a group of people, even allowing crosstalk, as a solution looking for a problem. I think it's the thing everyone slowly dying on Zoom calls wishes they could have.


I disagree. Many of us don't use a headset regularly or carry it with us like a phone or laptop; it is an express inconvenience to use, with only marginal benefits. Businesses won't want one if webcams still do the trick, and users might respond positively but are always priced out of owning one.

If I'm doing work at my desk and I get a Zoom call, there is a 0.00% chance I will go plug in my Vision Pro to answer it. I'm just going to open the app and turn on my webcam, spatial audio be damned.


Oh no, they wish to have fewer useless meetings.



