• huf [he/him]@hexbear.net · 4 days ago

    mm yes, i’d love to have a UI for the computer where the same input produces different results each time (and half of the output is made up on the spot), and where you can never trust anything it shows you but have to check literally every single pixel carefully.

    as opposed to entering a cli command you’re familiar with and checking just the bit of the output that’s relevant to you, because you know the output format and it’s stable.

    i like having extra cognitive load; it helps.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 4 days ago

        You wouldn’t be the target audience for this. The vast majority of people using computers aren’t technical, and they would very much prefer just being able to use natural language. Hallucination is also much less of a problem when the model is simply used as an interface to access services, pull data from them, and present it. In fact, you can already try this sort of functionality with DeepSeek right now: it has a canvas, and you can ask it to pull some data from the internet and render it. I’m always amazed at how technical people are utterly unable to step out of their own shoes and imagine what a non-technical person might prefer to use.
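
        As a minimal sketch of that “model as interface” pattern, assuming an OpenAI-compatible chat endpoint (the base URL, model name, and the get_weather tool below are illustrative placeholders, not a real deployment): the model only decides which service call to make, and the actual data comes back from the service verbatim.

            # Assumes the official openai Python client; the endpoint, model name,
            # and get_weather tool are hypothetical stand-ins for real services.
            from openai import OpenAI

            client = OpenAI(base_url="https://api.example.com/v1", api_key="...")

            tools = [{
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical service wrapper
                    "description": "Fetch current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }]

            resp = client.chat.completions.create(
                model="example-chat-model",
                messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
                tools=tools,
            )

            # The reply is a structured tool call; your own code runs the real
            # request and feeds the result back, so the facts never come from
            # the model itself.
            call = resp.choices[0].message.tool_calls[0]
            print(call.function.name, call.function.arguments)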

        • alexei_1917 [mirror/your pronouns]@hexbear.net (OP) · 4 days ago

          I dunno. I still think the kinds of graphical tools we have today make better interfaces than a chatbot. I often struggle to explain what I want in a way that such a tool would actually return anything useful. I might try it once or twice, but I’d probably go back to a standard graphical interface pretty quickly. I’m not a “terminal junkie” or particularly technical, but I absolutely hate the idea of a computer you have to talk to like it’s a person; I suck at dealing with people.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 4 days ago

            Sure, different people like different things. My original point was that being able to use natural language would be a benefit for non-technical users. Most people struggle with complex UIs, or with tasks that require juggling multiple apps. Being able to just explain what you want, the way you would to another person, would lower the barrier significantly. For technical users, we already have tools we can leverage, but if MCP services became a common way to build apps, we’d see benefits from that as well.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 4 days ago

      People who keep parroting this clearly never bothered to actually see how these tools work in practice. Go play with the DeepSeek canvas and look at how it renders the data you give it. Meanwhile, what you prefer as a technical user is vastly different from what an average person wants.

      • huf [he/him]@hexbear.net · 4 days ago

        yeah sure, i’ll probably check it out one of these days, but i’ve never yet seen this technology do the same thing twice when given the same input twice…

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 4 days ago

          I haven’t seen this be a problem when you’re using it to pull data from existing sources with tools like MCPs. Specifically, the content stays stable even if there are minor variations in presentation, and that’s sufficient to be useful in most scenarios. For example, if you get it to pull some JSON from a service and render a table or graph from it, the content of the presentation won’t change.
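
          A sketch of what I mean, with a hypothetical JSON endpoint standing in for whatever an MCP tool would wrap: the numbers come straight from the service, so even if the model varies its wording, the table’s content is fixed by the data, not by the model.

              # Hypothetical endpoint; the tool returns JSON and the rendering
              # step is deterministic, so the same data always yields the same table.
              import requests

              def fetch_stats():
                  resp = requests.get("https://api.example.com/stats", timeout=10)
                  resp.raise_for_status()
                  return resp.json()  # e.g. [{"name": "cpu", "value": 42}, ...]

              def render_table(rows):
                  # column width derived from the data itself
                  width = max(len(r["name"]) for r in rows)
                  for r in rows:
                      print(f'{r["name"]:<{width}}  {r["value"]}')

              render_table(fetch_stats())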

        • m532@lemmygrad.ml · 4 days ago

          When I gave Qwen 2.5 VL Instruct the exact same input twice, it produced the exact same output.
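
          That matches how greedy decoding works: with sampling turned off, generation is deterministic for a fixed model and prompt (up to low-level numerical nondeterminism on some GPU setups). A minimal sketch with the Hugging Face transformers library, using a small text-only Qwen model as a stand-in for the VL variant:

              # With do_sample=False, generate() uses greedy decoding, so the
              # same prompt yields the same tokens on repeated runs.
              from transformers import AutoModelForCausalLM, AutoTokenizer

              model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # stand-in, not the VL model
              tok = AutoTokenizer.from_pretrained(model_id)
              model = AutoModelForCausalLM.from_pretrained(model_id)

              inputs = tok("Name three primary colors.", return_tensors="pt")
              out = model.generate(**inputs, max_new_tokens=32, do_sample=False)
              print(tok.decode(out[0], skip_special_tokens=True))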