Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.

LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.

LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.

The relevant passage, which takes effect on November 20, 2024, reads:

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.

In short, LinkedIn will provide features that produce automated content, but that content may be inaccurate. Users are expected to review and correct any false information before sharing it, because LinkedIn won’t accept responsibility for the consequences.

The platform’s Professional Community Policies direct users to “share information that is real and authentic” – a standard to which LinkedIn is not holding its own tools.

  • fluxion@lemmy.world

    If companies don’t trust their own AI on their own sites then they are pushing a shitty unvetted algorithm and hiding behind the word “AI” to avoid accountability for their own software bugs. If we want AI to be anything other than trash then companies need to be held accountable just like with any other software they produce.

  • andallthat@lemmy.world

    Socials and the Internet in general would be a much better place if people stopped believing and blindly resharing everything they read, AI-generated or not.

  • billbasher@lemmy.world

I used it for a while to find a job recently. It’s all recruiters contacting you who have no idea what your skill set is, so they just end up wasting your time.

  • JeeBaiChow@lemmy.world

Expect more of this stuff in the future. What’s the point of generating thousands of articles if you have to spend thousands of man hours vetting and proofing the damned things? Is this even considered ‘work’ anymore?

  • Imgonnatrythis@sh.itjust.works

    I would like to join politicians and corporations in divorcing the conventional relationship between my actions and their consequences.

Where do I sign up?

  • Jesus@lemmy.world

    Honestly, the AI information might be better than most of the dog shit insights people post on that platform.

  • MonkderVierte@lemmy.ml

    And in the same breath they complain about poisoned (AI-generated) input. Garbage out, garbage in.

  • EnderMB@lemmy.world

    Has anyone seen the comments on popular stories on LinkedIn lately? The site is overrun by AI scammers.

    • Evotech@lemmy.world

LinkedIn is fucking impossible to read. I don’t know anyone who actually did anything other than update their resume.