Unfortunately, due to the complexity and specialized nature of AVX-512, such optimizations are typically reserved for performance-critical applications and require expertise in low-level programming and processor microarchitecture.

    • Finadil@lemmy.world · 19 days ago

      Relevant section:

      Intel made waves when it disabled AVX-512 support at the firmware level on 12th-gen Core processors and later models, effectively removing the SIMD ISA from its consumer chips.

  • thingsiplay@beehaw.org · 19 days ago

    There is an issue, though: Intel disabled AVX-512 on its 12th, 13th, and 14th Generation Core processors, leaving owners of those CPUs without it. On the other hand, AMD’s Ryzen 9000-series CPUs feature a fully enabled AVX-512 FPU, so owners of these processors can take advantage of the FFmpeg achievement.

    Intel can’t stop the L.

    As for the claims and benchmarking, we need to see how much it actually improves, because the 94x performance boost is measured against a baseline with no AVX or SIMD at all (if I understand the blog post correctly). So I wonder how much the handwritten AVX-512 assembly improves over AVX-512 code written in C (or maybe Rust?). Unfortunately, the exact hardware used for the benchmark isn’t disclosed either.

    • zod000@lemmy.ml · 18 days ago

      Someone else in the comments mentioned it is about 40% faster than the AVX2 code and slightly more than twice as fast as the SSE3 code. That’s still a nice boost, but hopefully no one was relying on the radically slow unoptimized baseline.

      • thingsiplay@beehaw.org · 18 days ago

        But my question is, how much faster is it because it’s written in assembly rather than a “high”-level language like C or Rust? I mean, if the AVX-512 code were written in C, would it still be 40% faster than AVX2?

  • collapse_already@lemmy.ml · 18 days ago

    As someone who has done some hand coding of AVX-512, I appreciate their willingness to take this on. Getting the input vectors set up correctly for the instructions can be a hassle, especially when the input dataset is not an even multiple of 64.

  • Papamousse@beehaw.org · 19 days ago

    I worked in media broadcasting; we had an internal lib to scale/convert whatever format in real time, and it went from basic operation, to SSE3, to AVX-512, to CUDA. And yes, crafting some functions/loops with assembly can give an enormous boost.

  • ganymede@lemmy.ml · 19 days ago

    nice.

    can usually get a pretty good performance increase with hand-writing asm where appropriate.

    don’t know if it’s a coincidence, but i’ve never seen someone who’s good at writing assembly say that it’s never useful.

      • ganymede@lemmy.ml · 19 days ago

        from the article it’s not clear what the performance boost is relative to intrinsics (it’s extremely unlikely to be anything close to 94x lol). it’s not even clear from the article whether the avx2 implementation they benchmarked against was intrinsics or handwritten either. in some cases avx2 seems to slightly outperform avx-512 in their implementation.

        there’s also so many different ways to break a problem down that i’m not sure this is an ideal showcase, at least without more information.

        to be fair to the presenters they may not be the ones making the specific flavour of hype that the article writers are.

          • ganymede@lemmy.ml · 19 days ago

            yes, as i said

            from the article it’s not clear what the performance boost is relative to intrinsics

            (they don’t make that comparison in the article)

            so it’s not clear exactly how handwritten asm compares to intrinsics in this specific comparison. we can’t assume their handwritten AVX-512 asm and intrinsics AVX-512 will perform identically here; it may be better, or worse.

            also worth noting they’re discussing the benchmarking of a specific function, so overall performance on a given workload may be quite different, depending on what can and can’t be unrolled, and in which order, for different dependency chains.