Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial evaluated several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations and references to more regulation, and to include page references and context.
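ASIC has not published the exact prompt used in the trial. As a sketch only, the focus areas reported above suggest a prompt assembled along these lines (the wording and every name below are illustrative assumptions, not the trial’s actual prompt):

```python
# Hypothetical reconstruction of the kind of summarisation prompt described
# in the article. The focus areas come from the article; everything else
# (wording, structure, function names) is an assumption.

FOCUS_AREAS = [
    "mentions of ASIC",
    "recommendations made in the submission",
    "references to more regulation",
]

def build_prompt(submission_text: str) -> str:
    """Assemble a focused summarisation prompt for an instruction-tuned LLM."""
    focus = "\n".join(f"- {area}" for area in FOCUS_AREAS)
    return (
        "Summarise the following inquiry submission.\n"
        f"Focus on:\n{focus}\n"
        "Include page references and surrounding context for each point.\n\n"
        f"Submission:\n{submission_text}"
    )

prompt = build_prompt("...submission text...")
```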

Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine’s 47%.
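As a rough illustration of how a blind rubric like this might be tallied: the five criteria below come from the article, while the per-criterion point scale and the percentage formula are assumptions.

```python
# Illustrative sketch of blind rubric scoring. Criteria are from the article;
# the 0-5 rating scale and the scoring formula are assumptions.

CRITERIA = ["coherency", "length", "ASIC references",
            "regulation references", "identified recommendations"]
MAX_POINTS = 5  # assumed rating scale per criterion

def rubric_score(ratings: dict[str, int]) -> float:
    """Percentage of available rubric points a summary earned."""
    earned = sum(ratings[c] for c in CRITERIA)
    return 100.0 * earned / (MAX_POINTS * len(CRITERIA))

# A summary rated 4/5 on every criterion scores 80%.
example = rubric_score({c: 4 for c in CRITERIA})
```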

  • SkyNTP@lemmy.ml · 3 months ago

    LLMs == AGI was and continues to be a massive lie perpetuated by tech companies and investors that people still have not woken up to.

    • jaybone@lemmy.world · 3 months ago

      The fact that we even had to start using the term AGI, when in common parlance “AI” always meant the same thing until recently, shows how the goalposts are being moved.

      • AFK BRB Chocolate@lemmy.world · 3 months ago

        What people mean by AI has been changing for as long as the term has been used. When I was studying CS in the 80s, people said the holy grail was giving a computer printed English text and having it read it aloud. It wasn’t much later that OCR and text to speech software was commonplace.

        Generally, when people say AI, they mean a computer doing something that normally takes a human, and that bar goes up all the time.

        • AA5B@lemmy.world · 3 months ago

          It might also be a question of how we define “intelligence”. We really don’t have a clear definition, and it’s a moving target as we find out more.

          • “Reading aloud is something only a person can do. It requires intelligence.” Here’s a computer doing it. “Oh, that’s not really intelligence, is it?”
  • ArbitraryValue@sh.itjust.works · 3 months ago

    The important thing here isn’t that the AI is worse than humans. It’s that the AI is worth comparing to humans. Humans stay the same, while software can quickly improve by orders of magnitude.

    • WalnutLum@lemmy.ml · 3 months ago

      LLMs as they stand are already approaching the flat upper portion of the sigmoid curve, because the marginal data required for each further improvement is increasing exponentially.

      It’s a known problem in the actual AI research field that nobody in private industry likes to talk about.

      If it scores 40% this year, it’ll marginally increase by 10% next year, then 5% three years later, and so on.

      AI doesn’t follow Moore’s law.
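The flattening described in that comment is the upper half of a logistic (sigmoid) curve. A toy illustration of how year-over-year gains shrink as the curve saturates (all numbers are purely illustrative, not measured model benchmarks):

```python
import math

def logistic(t: float, ceiling: float = 100.0, rate: float = 1.0) -> float:
    """Toy capability curve: near-exponential growth early, flattening as it
    approaches a ceiling - the diminishing returns described above."""
    return ceiling / (1.0 + math.exp(-rate * t))

# Year-over-year gains shrink as the curve saturates (illustrative only):
gains = [logistic(t + 1) - logistic(t) for t in range(4)]
```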

      • ArbitraryValue@sh.itjust.works · 3 months ago

        So far “more data” has been the solution to most problems, but I don’t think we’re close to the limit of how much useful information can be learned from the data even if we’re close to the limit of how much data is available. Look at the AIs that can’t draw hands. There are already many pictures of hands from every angle in their training data. Maybe just having ten times as many pictures of hands would solve the problem, but I’m confident that if that was not possible then doing more with the existing pictures would also work.* Algorithm design just needs some time to catch up.

        *I know that the data that is running out is text data. This is just an analogy.

    • krashmo@lemmy.world · 3 months ago

      Theoretically that’s true. Can you tell techbros and the media to shut up about AI until it happens though?

  • maegul (he/they)@lemmy.ml · 3 months ago

    Not a stock market person or anything at all … but NVIDIA’s stock has been oscillating since July and has been falling for about two weeks (see Yahoo Finance).

    What are the chances that this is the investors getting cold feet about the AI hype? There were open reports from some major banks/investors about a month or so ago raising questions about the business models (right?). I’ve seen a business/analysis report on AI that, despite trying to trumpet it, actually contained data on growing uncertainty about its capability from those actually trying to implement, deploy and use it.

    I’d wager that the situation right now is full of tension, with plenty of conflicting opinions from different groups of people, almost none of whom actually know much about generative AI/LLMs, and all having different and competing stakes and interests.

    • Voroxpete@sh.itjust.works · 3 months ago

      “What are the chances…”

      Approximately 100%.

      That doesn’t mean that the slide will absolutely continue. There may be some fresh injection of hype that will push investor confidence back up, but right now the wind is definitely going out of the sails.

      The core issue, as the Goldman Sachs report notes, is that AI is currently being valued as a trillion-dollar industry, but it has not remotely demonstrated the ability to solve a trillion-dollar problem.

      No one selling AI tools is able to demonstrate with confidence that they can be made reliable enough, or cheap enough, to truly replace the human element, and without that they will only ever be fun curiosities.

      And that “cheap enough” part is critical. It is not only that GenAI is deeply unreliable, but also that it costs a truly staggering amount of money to operate (OpenAI are burning something like $10 billion a year). What’s the point in replacing an employee you pay $10 an hour to handle customer service issues with a bot that costs $5 for every reply it generates?
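A quick back-of-envelope check of those figures: the $10/hour wage and $5/reply cost are the commenter’s numbers; the replies-per-hour throughput is an assumption added for the comparison.

```python
# Back-of-envelope check of the cost comparison above. The $10/hour wage and
# $5/reply figures come from the comment; replies-per-hour is an assumption.

human_wage_per_hour = 10.0   # from the comment
bot_cost_per_reply = 5.0     # from the comment
replies_per_hour = 6         # assumed throughput for a human agent

human_cost_per_reply = human_wage_per_hour / replies_per_hour
bot_cost_per_hour = bot_cost_per_reply * replies_per_hour

# At these numbers the bot costs roughly 3x the human per reply
# ($5.00 vs ~$1.67), before counting the cost of checking its output.
```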

      • kautau@lemmy.world · 3 months ago

        Yeah, we are on the precipice of a massive bubble about to burst because, like the dot-com bubble, magic promises are being made by and to people who don’t understand the tech, as if it is some magic that will net incredible profits just by pursuing it. LLMs have great applications in specific things, but they are being thrown in every direction to see where they will stick and where the magic payoff will come.

        • Voroxpete@sh.itjust.works · 3 months ago

          The problem is that even the specific things they’re good at, they don’t do well enough to justify spending actual money on. And when I say “actual money”, I’m not talking about the hilariously discounted prices AI companies are offering in an effort to capture an audience.

          A bot that can do a job reasonably well, but still needs a human to check their work is, from an employment perspective, still an employee, just now with some very expensive helper software. And because of the inherent unreliability of LLMs, a problem that many top figures in the industry are finally admitting may never be solved, they will always need a human to check their work. And that human has to be competent enough to do the job without the AI, in order to figure out where and how it went wrong.

          GenAI was supposed to put us all out of work, and maybe one day it will, but the current state of the technology isn’t remotely close to being good enough to do that. It turns out that while bots can very effectively look and sound like humans, they’re not remotely capable of thinking like humans, and that actually matters when your chatbot starts promising customers discounts that don’t actually exist, to name one real example. What was treated as being the last ten percent is actually looking more and more like ninety-nine percent of the work in terms of creating something that can effectively replace a human being.

          (As an aside, I can’t help but feel that a big part of this epic faceplant arises from Silicon Valley fully ingesting the bullshit notion of “unskilled labour”. Turns out working the drive thru at McDonald’s is a more complicated job than people think, including McDonald’s themselves. We’ve so undervalued the skills of vast swathes of our population that we were easily deluded into thinking they could all be replaced by simple machines. While some of those tasks certainly can, and will, be automated, there are some human elements - especially in conflict resolution - that are really hard to replace)

    • Optional@lemmy.world · 3 months ago

      What are the chances that this is the investors getting cold feet about the AI hype?

      Investors have proven over and over they’re credulous idiots who understand sweet fuck-all about technology and will throw money at whatever’s in their face. Creepy Sam and the Microshits will trot out some more useless garbage and prize a few more billion out of the market in just a little while.

    • atrielienz@lemmy.world · 3 months ago

      NVIDIA has been having a lot of problems with their 13th/14th gen CPUs degrading. They are also embroiled in an anti-trust investigation. That, coupled with the “growing pains of generative AI”, has caused them a lot of problems, where two months ago they were one of the world’s most valuable companies.

      Some of it is likely the die-off of the AI hype, but their problems are further-reaching than the sudden AI boom.

  • UnderpantsWeevil@lemmy.world · 3 months ago

    Are we talking 10% worse and 95% cheaper? Or 50% worse and 10% cheaper? Or 90% worse and 95% cheaper?

    Because that last one is good enough for fiscal conservatives. Hell, the second one is good enough for fiscal conservatives.

  • kromem@lemmy.world · 3 months ago

    Meanwhile, here’s an excerpt of a response from Claude Opus when I tasked it with evaluating intertextuality between the Gospel of Matthew and the Gospel of Thomas from the perspective of entropy reduction, with redactional efforts due to human difficulty at randomness (this doesn’t exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical, lacking specific details), on page 300 of a chat about completely different topics:

    Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It’s also worth noting that Claude 3 Opus doesn’t have the full context of the Gospel of Thomas accessible to it, so it needs to try to reason through entropic differences primarily based on records relating to intertextual overlaps that have been widely discussed in consensus literature and are thus accessible).

  • DarkCloud@lemmy.world · 3 months ago

    “AI”, or Large Language Models, are designed by definition to give averaged answers. So they’re not just averaging the text you give them; they’re averaging it with all the general text in the training data, to create a probabilistically average result based on all of it.

    There’s no way around this, because it’s simply how such systems work. It’s their lifeblood to produce a “best guess” across large amounts of training data, which is done by averaging out all that language. A large amount of language… hence the name.
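Mechanically, what an LLM emits at each step is a probability distribution over next tokens; a loose sketch of that “best guess” behaviour (the logit values here are made up for illustration):

```python
import math

def softmax(logits: list[float]) -> list[float]:
    """Convert raw model scores (logits) into a probability distribution."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Greedy decoding then picks the single most probable next token - a
# "best guess" shaped by the whole training distribution, not just the input.
probs = softmax([2.0, 1.0, 0.1])
best_token = probs.index(max(probs))
```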

  • AA5B@lemmy.world · 3 months ago

    Artificial intelligence is worse than humans in every way at summarizing documents

    In every way? How about speed? The goal is to save human time so if AI is faster and the summary is good enough, then it is a success. I guarantee it is faster. Much faster.

    • loonsun@sh.itjust.works · 3 months ago

      If you make enough mistakes, speed is a detriment, not a benefit. Increasing speed lets you produce more summaries, but if you still need to correct and edit them, all you’ve done is add a step: a human still has to read the document closely enough to summarise it themselves, and then edit the AI summary. So the bottleneck of a human reading the document and working on a summary is still there. It would only make things slightly easier if the corrections needed are small and obvious.
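The bottleneck argument can be put in rough numbers (all minute figures below are illustrative assumptions, not measurements):

```python
# Rough model of the bottleneck above: if a human must still read the source
# document to verify an AI summary, the AI saves only the writing time.
# All minute figures are illustrative assumptions.

read_minutes = 30     # reading the document is needed either way
write_minutes = 15    # drafting a summary from scratch
verify_minutes = 10   # checking and correcting an AI draft

human_only = read_minutes + write_minutes   # 45 minutes
with_ai = read_minutes + verify_minutes     # 40 minutes

# The saving is marginal, and turns negative whenever verification takes
# longer than writing would have.
```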

  • Melvin_Ferd@lemmy.world · 3 months ago

    Here is the summary by AI

    The article suggests AI is worse than humans at summarizing documents, based on one outdated trial. But really, Crikey is just feeling threatened. AI is evolving fast, and its ability to handle vast amounts of data without the human biases Crikey often exhibits is undeniable. While they nitpick AI’s limitations, they ignore how much better it will get—probably even better than their reporters. Maybe they’re just jealous that AI could do in seconds what takes humans hours!

  • masquenox@lemmy.world · 3 months ago

    Artificial intelligence is worse than humans in every way

    As if capitalists have ever cared about that…