• AutoTL;DR@lemmings.world · 8 months ago

    This is the best summary I could come up with:


    We were expecting — and we got — a lot of talk about generative AI this morning when Honor CEO George Zhao took the stage at Qualcomm’s 2023 Snapdragon Summit.

    But the announcement of its Honor Magic 6 flagship came with a surprising detail: it includes a feature that lets you interact with the device using your eyes.

    The keynote briefly featured a rendering of what this technology will look like, showing a woman looking at her phone with a snippet of the Uber app running at the top of the screen — something like a Live Activity.

    The “multi-modal” descriptor seems to indicate that gaze is just one input in the system, so it could be coupled with other gestures to work reliably — perhaps like how we’ve seen PSVR 2 games use eye-tracking to highlight things before you click to confirm.

    All of that aside, it’s nice to see device OEMs pushing for advances in how we use our phones that don’t begin and end with an AI chatbot.

    Honor hasn’t said specifically when the Magic 6 will ship, but Qualcomm says that phones with its new flagship chipset will start arriving in the coming weeks.


    The original article contains 447 words, the summary contains 196 words. Saved 56%. I’m a bot and I’m open source!

  • tsonfeir@lemm.ee · 8 months ago

    A feature no one needs. (Except those with disabilities, and I think they already have tools for this)

    • It’s a first step, though. Imagine a time when you can look at a point on the screen and trigger a mouse event. Maybe there’s a separate button to click, but the point is that your fat, grubby finger isn’t blocking your view, and you’re not having to constantly perform hand-eye coordination tests with a mouse.

      I’ve wanted good eye-tracking technology for nearly a decade, because it’d be so vastly superior to current input methods. It has to be pixel-accurate, though, and that’s always held it back. Sounds like this might be making some progress.
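
      The gaze-plus-confirm idea is basically a tiny event loop: smooth the gaze samples, move a virtual cursor, and only dispatch a click on an explicit confirm press. Here’s a minimal sketch of that pattern; read_gaze(), confirm_pressed(), and dispatch_click() are hypothetical placeholders for whatever eye-tracker SDK and input stack you’d actually wire up.

      ```python
      # Hypothetical gaze-plus-confirm loop: the eye tracker only moves the
      # pointer; a deliberate button press performs the click, which avoids
      # the "Midas touch" problem of everything you look at getting activated.

      import time
      from collections import deque

      SMOOTHING_WINDOW = 5  # average the last N gaze samples to reduce jitter


      def read_gaze():
          """Placeholder: return the current (x, y) gaze point in screen pixels."""
          raise NotImplementedError("wire up a real eye-tracker SDK here")


      def confirm_pressed():
          """Placeholder: return True while the dedicated confirm button is held."""
          raise NotImplementedError("wire up a keyboard/button listener here")


      def dispatch_click(x, y):
          """Stand-in for real OS click injection; just logs the target here."""
          print(f"click at ({x}, {y})")


      def gaze_pointer_loop():
          samples = deque(maxlen=SMOOTHING_WINDOW)
          clicked = False  # edge-detect the confirm press so holding it clicks once
          while True:
              samples.append(read_gaze())
              # Smoothed cursor position: mean of the recent samples.
              cx = sum(p[0] for p in samples) / len(samples)
              cy = sum(p[1] for p in samples) / len(samples)
              if confirm_pressed():
                  if not clicked:
                      dispatch_click(round(cx), round(cy))
                      clicked = True
              else:
                  clicked = False
              time.sleep(1 / 60)  # roughly one iteration per display frame
      ```

      The separate confirm press is the same trick the PSVR 2 comparison points at: gaze selects, a cheap explicit action commits, so the accuracy requirement drops from pixel-perfect to roughly target-sized.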