• BananaTrifleViolin@lemmy.world · 51↑ 3↓ · 18 days ago

    This is why “AI” should be avoided at all cost. It’s all bullshit. Any tool that “hallucinates” - i.e. is error-strewn - is not fit for purpose. Gaming the AI is just the latest example of the crap being spewed by these systems.

    The underlying technology has its uses, but in niche and focused applications - nowhere near as capable or as ready as the hype.

    We don’t use Wikipedia as a primary source because it has to be fact-checked. AI isn’t anywhere near as accurate as Wikipedia, so why use it?

    • brbposting@sh.itjust.works · 10↑ · 18 days ago

      The underlying technology has its uses

      Yes indeed agreed.

      Sometimes BS is exactly what I need! Like, hallucinated brainstorm suggestions can work for some workflows and be safe when one is careful to discard or correct them. Copying a comment I made a week ago:

      I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.

      Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.

      In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.

    • Benjaben@lemmy.world · 5↑ 14↓ · edited · 17 days ago

      Gotta tell you, you made a fairly extreme pronouncement against a very general term / idea with this:

      “AI” should be avoided at all cost

      Do you realize how ridiculous this sounds? It sounds, to me, like this - “Vague idea I poorly understand (‘AI’) should be ‘avoided’ (???) with disregard for any negative consequences, without considering them at all”

      Cool take you’ve got?

      Edit to add: whoops! Just realized the community I’m in. Carry on, didn’t mean to come to the precise wrong place to make this argument lol.

      • prototype_g2@lemmy.ml · 3↑ 2↓ · 17 days ago

        Listen, I know that the term “AI” has been, historically, used to describe so many things to the point of having no meaning, but I think, given the context, it is pretty obvious what AI they are referring to.

        • Benjaben@lemmy.world · 2↑ · 17 days ago

          Well, fair enough, folks seem to agree with you and that commenter. I’m not being deliberately uncharitable, “avoid AI at all costs” seems both poorly defined and hyperbolic to me, even given the context. Scams and inaccuracy are a problem in lots of situations, Google search results have been getting increasingly bad to the point of unusable for a while now (I’d argue long before LLM saturation), and I’ve personally been getting mileage with some LLMs, already at kind of an early stage, over wading through every crappy search result.

          I wouldn’t call myself an enthusiast or someone on the hype train; I work in the industry. But it’s clearly useful, while clearly having many tradeoffs (energy use is maybe a much worse problem than inaccuracy or scam potential), and “avoid at all cost” is silly to me. But cheers, happy to simply disagree!

  • perviouslyiner@lemmy.world · 27↑ · edited · 17 days ago

    Wait until you hear about the AI’s programming abilities!

    It “knows” that a Python program starts with some lines like: from (meaningless package name) import *

    If you can register the package name it invents, your code could be running on some of the world’s biggest companies’ internal servers!
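    This package-hallucination attack (sometimes called “slopsquatting”) can be guarded against mechanically. A minimal, hypothetical sketch: parse LLM-generated code with Python’s stdlib `ast` module and flag any imported package that isn’t on your project’s allowlist. The allowlist and the generated snippet below are invented for illustration.

```python
import ast

# Hypothetical guard: before running LLM-generated code, extract every
# top-level package it imports and compare against an allowlist of
# packages the project actually depends on. Anything unknown may be a
# hallucinated name that an attacker could register on a package index.
ALLOWED = {"os", "sys", "json", "requests"}  # example allowlist

def unknown_imports(source: str) -> set[str]:
    """Return top-level package names imported by `source` that are
    not on the allowlist."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names - ALLOWED

generated = "from totally_real_pkg import *\nimport json"
print(unknown_imports(generated))  # → {'totally_real_pkg'}
```

    Anything the check flags would still need to be verified against the real package index before it is ever installed.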

    • cynar@lemmy.worldM · 6↑ · 17 days ago

      Ironically, that is possibly one of the few legit uses.

      Doctors can’t learn about every obscure condition and illness, which means they can miss the symptoms for a long time. An AI that can check for potential matches to the symptoms involved could be extremely useful.

      The proviso is that it is NOT a replacement for a doctor. It’s a supplement that doctors can be trained to use efficiently.

      • DaPorkchop_@lemmy.ml · 1↑ · 17 days ago

        Couldn’t that just as easily be solved with a database of illnesses which can be filtered by symptoms?
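The symptom-filtered database being proposed could indeed be as simple as a set-containment query. A minimal sketch; the conditions and symptom sets here are invented for illustration:

```python
# Toy "database of illnesses filtered by symptoms": each condition maps
# to its known symptom set, and a query returns every condition whose
# symptoms include all of the observed ones.
CONDITIONS = {
    "influenza": {"fever", "cough", "fatigue"},
    "measles": {"fever", "rash", "cough"},
    "lyme disease": {"rash", "fatigue", "joint pain"},
}

def matches(symptoms: set[str]) -> list[str]:
    """Conditions consistent with every observed symptom."""
    return sorted(name for name, known in CONDITIONS.items()
                  if symptoms <= known)

print(matches({"fever", "cough"}))  # → ['influenza', 'measles']
```

The catch, as the reply below notes, is that this only works if someone enters the symptoms correctly and consistently in the first place.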

        • cynar@lemmy.worldM · 4↑ · 17 days ago

          That requires the symptoms to be entered correctly, and significant effort from (already overworked) doctors. A fuzzy logic system that can process standard medical notes, as well as medical research papers would be far more useful.

          Basically, a quick click, and the paperwork is scanned. If it’s a match for the “bongo dancing virus” or something else obscure, it can flag it up. The doctor can now invest some effort into looking up “bongo dancing virus” to see if it’s a viable match.

          It could also do its own pattern matching. E.g. if a particular set of symptoms is often followed 18-24 hours later by a sudden cardiac arrest, flagging that pattern could produce plenty of false alarms, but it could also key doctors in on something more serious happening, before it gets critical.

          An 80% false-positive rate is still quite useful, so long as the 20% helps and the false positives are easy for a human to filter out.
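The flagging step described above could be prototyped with nothing more than stdlib fuzzy matching. A sketch, assuming invented condition names and a deliberately loose setup; since a doctor reviews every flag, false positives are tolerable:

```python
from difflib import SequenceMatcher

# Hypothetical flagger: slide a window over free-text notes and
# fuzzy-match against a list of rare conditions. The names are invented;
# a real system would use a curated medical vocabulary.
RARE_CONDITIONS = ["bongo dancing virus", "some other obscure syndrome"]

def flags(note: str, threshold: float = 0.8) -> list[str]:
    """Rare conditions approximately mentioned in the note."""
    note = note.lower()
    flagged = []
    for cond in RARE_CONDITIONS:
        windows = [note[i:i + len(cond)]
                   for i in range(max(1, len(note) - len(cond) + 1))]
        best = max(SequenceMatcher(None, cond, w).ratio() for w in windows)
        if best >= threshold:
            flagged.append(cond)
    return flagged

print(flags("patient presents with bongo dancing virus symptoms"))
# → ['bongo dancing virus']
```

Lowering the threshold trades precision for recall, which is exactly the 80/20 tradeoff described above.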

        • Duamerthrax@lemmy.world · 3↑ 1↓ · 17 days ago

          In either case, a real doctor would be reviewing the results. Nobody is going to authorize surgeries or prescription meds from AI alone.

  • Blaster M@lemmy.world · 15↑ · 17 days ago

    and that’s why I always go straight to the company website to find that info instead of googling it

  • LANIK2000@lemmy.world · 13↑ · 17 days ago

    Yet again tech companies are here to ruin the day. LLMs are such a neat little language-processing tool. They’re amazing for reverse-looking-up definitions (where you know the concept but can’t remember some dumb name), for finding starting points, or for processing your ideas and getting additional things to look at, but most definitely not a finished product of any kind. Fuck tech companies for selling it as a search engine replacement!

    • jj4211@lemmy.world · 9↑ 1↓ · 17 days ago

      It is great at search. See this awesome example I hit just today from Google’s AI overview:

      Housing prices in the United States dropped significantly between 2007 and 2020 due to the housing bubble and the Great Recession:

      2007: The median sales price for a home in the first quarter of 2007 was $257,400. The average price of a new home in September 2007 was $240,300.

      2020: The average sales price for a new home in 2020 was $391,900.

      See, without AI I would have thought housing prices went up between 2007 and 2020, and that $391,900 was a bigger number than $257,400.

  • njm1314@lemmy.world · 10↑ · 17 days ago

    Would that make Google liable? I mean, that wouldn’t be a case of users posting information; it would be a case of Google posting information, wouldn’t it? So it seems to me they’d be legally liable at that point.

  • Etterra@lemmy.world · 9↑ · 16 days ago

    That’s why you always get it from their website. Never trust an LLM to do a search engine’s job.

  • OsrsNeedsF2P@lemmy.ml · 10↑ 2↓ · 17 days ago
    17 days ago

    Honestly I wanted to write a smug comment like “But it even says AI can sometimes make mistakes!”, but after clicking through multiple links and disclaimers I can’t find Google actually admitting that.

  • FenrirIII@lemmy.world · 7↑ · 18 days ago

    Same happened to my wife. She gave them enough info that they threatened to call and cancel her flight unless she paid them. They never did cancel it.

  • hark@lemmy.world · 7↑ · 18 days ago

    They gave up working search, with algorithms that are easier to reason about and correct, for a messy neural network that is broken in so many ways and basically impossible to correct in general while retaining its core characteristics. A change with this many regressions should never have been pushed to production.

  • answersplease77@lemmy.world · 6↑ · 17 days ago

    Google has been sponsoring scammers in its first search results since its creation. Google has caused hundreds of millions of dollars in losses to people, and needs to be sued for it.

  • GeneralInterest@lemmy.world · 2↑ · 16 days ago

    I get paranoid enough about making sure I’m clicking the correct search result and not some scam. I hope I would avoid any AI answers, but yeah, to many people it could be confusing.

  • ArnaulttheGrim@lemmy.world · 1↑ · edited · 16 days ago

    Having worked with AI and AI products in my last job before I was let go I can say this:

    Out of the box AI is very good at the following:

    1. Mundane, very simple binary/boolean tasks. Is this a yes/no? Can I find a piece of information that I was told is here, based on your statement? Etc.
    2. Condensing very complex processes into very simplistic summaries - NOTE: you will lose a lot of information in this step unless you refine your prompt.
    3. Making overarching summaries - kinda similar to 2 but also its own thing; think of creating a summary of a book.

    Programmed AI - read machine learning, because you are still telling it how to interpret things - can be good at (depending on how good you are at telling it what it should do):

    1. Interpreting meaning in a statement.
    2. Understanding if - then constructs.
    3. Deducing plausible outcomes.

    ALL AI struggles at:

    1. Interpreting real vs fake (that’s why you literally teach it what a spotlight is with your captcha)
    2. Understanding complexity in speech and tonal differences - I am SO happy to be here /s
    3. Thinking on its own - using collected data to make an inference that it was not directly programmed to understand

    The big craze over AI was totally misunderstood. AI is best thought of as Automated Intelligence; the word Artificial is, in its current state, a complete misnomer.

    This is just one example of people being misled by the name into not fully understanding what is up with AI.