I’ve been experimenting with AV1 for two years, using FFmpeg with SVT-AV1, and by now I’ve encoded quite a lot of my videos in AV1, mostly animated content.

AV1 is really good for an open-source project, no doubt about that. But after this long using it, I can safely say it’s mainly good for storage savings with an excellent quality-to-speed tradeoff; where it falls short is fidelity. My major discontent with AV1 has been how the encoder blurs some details completely out even with CRF set as low as 14, whereas HEVC doesn’t at all. Edit: In some instances, particularly with non-animated videos, AV1 also performed way worse than HEVC, which I believe is because it does a poor job on varied and difficult scenes.
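
The kind of side-by-side encodes I’m talking about look roughly like this (just a sketch — the filenames and presets are placeholders, not my exact settings, CRF values aren’t directly comparable between encoders, and -crf with libsvtav1 needs a reasonably recent FFmpeg):

    # HEVC encode at a low CRF (placeholder filenames/preset)
    ffmpeg -i input.mkv -c:v libx265 -preset slow -crf 14 -c:a copy hevc.mkv
    # SVT-AV1 encode at the same nominal CRF, to compare how much detail survives
    ffmpeg -i input.mkv -c:v libsvtav1 -preset 4 -crf 14 -c:a copy av1.mkv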

At first I thought AV1 was only better for animated videos, but later I found it’s really just any video, so I’ve switched back to using HEVC for storage and decided to use AV1 only for mobile devices, with preset 6 and fast-decode on.
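
That mobile-targeted command is along these lines (again a sketch; the CRF value and filenames are placeholders):

    # preset 6 with fast-decode enabled for easier playback on mobile (CRF/filenames are placeholders)
    ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 -crf 30 -svtav1-params fast-decode=1 -c:a copy mobile.mkv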

I don’t mean to say that AV1 is bad; it does provide better quality than HEVC for the file size, for sure, but I wouldn’t call that an upgrade when HEVC still has the major edge in fidelity.

It makes sense for VOD services to make use of it, but personally I wouldn’t use it for anything except quick, super-low-bitrate encodes… for now.

  • H:S@lemmy.world · 11 months ago

    My major discontent with AV1 has been how the encoder blurs some details completely out

    That’s the main reason I haven’t personally switched to AV1 yet either. (The second reason being that my laptop struggled too much with playback.) The last time I tested it was two years back or so, and only with libaom, so I definitely hoped it would be better by now. I was so hyped about Daala (and then AV1) all those years back, so it’s a bit disappointing how it turned out to be amazing and “not good enough” at the same time. :)

    Losing details seems to be a common problem with “young” encoders. HEVC had similar problems for quite some time; I remember many people preferring x264 for transparent encodes, because HEVC encoders tended to blur fine-grained textures even at high bitrates. It may still be true even today; I haven’t really paid attention to the topic for the last few years.

    IIRC, it has to do mainly with perceptual optimizations: x264 was tweaked over many years to look good, even if it hurts objective metrics like PSNR or SSIM. On the other hand, new encoders are optimized for those metrics first, because that’s how you know if a change you made to the code helped or made things worse.
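
    For what it’s worth, those objective scores are easy to check with FFmpeg’s ssim and psnr filters, roughly like this (encoded.mkv and source.mkv are placeholders; both inputs need matching resolution, and the scores are printed at the end of the log):

        # SSIM of the encode against the source (placeholder filenames)
        ffmpeg -i encoded.mkv -i source.mkv -lavfi ssim -f null -
        # PSNR measured the same way
        ffmpeg -i encoded.mkv -i source.mkv -lavfi psnr -f null -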

    I suppose it’s only once the encoder reaches maturity, and you know it preserves as much real detail as possible, that you can go wild and start adding fake detail or allocating bits to areas that matter more for subjective quality. I’m sure some (many? most?) such techniques are already supported and used by AV1 encoders, but getting the most out of them may still take some time.
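
    AV1’s film grain synthesis is one example of that “fake detail” idea; with SVT-AV1 it can be switched on through the encoder params, something like this (the grain level, CRF, and filenames are placeholders, and it assumes an FFmpeg build with a recent libsvtav1):

        # signal grain to be synthesized at decode time instead of spending bits coding it (level is a placeholder)
        ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 -crf 30 -svtav1-params film-grain=8 -c:a copy grain.mkv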