I’ve always known that AV1 and Opus are more efficient than HEVC/VP9 and MP3/Vorbis, but exactly how is this achieved? Is it just a matter of more efficient compression?
Yes, it’s just more advanced compression as more and more techniques are discovered and fine-tuned.
To give a feel for how they work, here are some of the simpler tricks they use:
Find a polynomial function that closely matches a sound curve - storing 3183x^3 - 1/13847x^2 + 11x - 9 takes less space than one hundred consecutive frequency numbers.
Cut out sounds that humans don’t hear. After a sharp clap, we don’t hear certain frequencies for a few tens of milliseconds.
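The two tricks above can be sketched in a few lines of Python. This is only a toy illustration (real codecs use transforms and far more elaborate psychoacoustic models; the signal, threshold, and polynomial degree here are all made up for demonstration):

```python
import numpy as np

# Trick 1: curve fitting. Replace 100 raw samples of a tone with the
# 10 coefficients of a degree-9 polynomial that closely matches it.
t = np.linspace(0.0, 1.0, 100)
samples = np.sin(2 * np.pi * t)            # one cycle of a pure tone
coeffs = np.polyfit(t, samples, deg=9)     # 10 numbers instead of 100
approx = np.polyval(coeffs, t)
fit_error = np.max(np.abs(approx - samples))

# Trick 2: temporal masking. Right after a loud transient ("clap"),
# quiet sounds are briefly inaudible, so a codec can simply discard them.
clip = np.concatenate([np.full(5, 1.0),    # the clap
                       np.full(45, 0.01)]) # quiet tail right after it
masked = np.where(clip < 0.05, 0.0, clip)  # drop what the ear won't notice

print(f"fit error: {fit_error:.5f}")                      # tiny
print("nonzero samples kept:", np.count_nonzero(masked))  # 5 of 50
```

Storing 10 coefficients instead of 100 samples, and zeroes instead of the masked tail, is the kind of redundancy real coders exploit, just with much more sophisticated math.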
There are tons more, most too advanced to explain in a Lemmy comment. :)
Isn’t it also partly that, as processing power increased, more sophisticated compression/decompression became feasible in real time, making these more complex algorithms actually viable?
I.e. they knew how to do it before; they just didn’t have the processing power to make it practical.
It’s a combination of both. Compression technology, like technology in general, builds on the successes and ideas of the previous generation. Typically a bunch of methods are created, and their popularity dictates their future. Eventually the winning algorithms are baked into hardware, which makes it much easier for future devices to use them.
So essentially, experimental algorithms are adopted by the industry and by power users, and whatever wins that popularity contest ends up supported on low-power devices such as netbooks and TVs.
We’re currently at the point where AV1 is starting to be deployed in hardware and streaming services have been switching to it for efficiency.
Processing power in general has improved, but newer CPUs/GPUs also have dedicated sections built specifically for encoding/decoding those codecs.
That means less reliance on software figuring out which stream of instructions to send the CPU/GPU to get the desired result, and more just handing the hardware some file data and receiving the output.
Unrelated question: what’s a video card that can transcode AV1? I want to add more AV1 files to my Jellyfin server, but I’d need at least an Nvidia RTX 4060 if I want AV1 transcoding >_<
Unironically?
An Intel Arc GPU isn’t bad as a transcoding card: https://youtu.be/uShvhV2ZZCA
Intel Arc A380. It’s super cheap and supports AV1 encoding through Intel Quick Sync. It’s what I have in my Jellyfin server.
Any of the Intel Arc cards can do it. If you want a budget option, the A380 is a low-power choice; just don’t expect to use it for gaming. If you also want gaming, the A750/A770 are better picks, but don’t expect quite the same level of game compatibility you get from Nvidia.
There’s an even lower-power A310 now. I think its draw is low enough that it can be powered solely by the PCIe slot.
WDYM by “add more AV1s to my Jellyfin”?
As in, have AV1-encoded source files and have Jellyfin transcode from AV1 to more compatible formats, or have Jellyfin transcode existing videos (whatever their format) to AV1 in real time?
For the former, you shouldn’t need any special GPU at all, since a CPU can decode AV1 just fine, but a GPU capable of AV1 decode would help. That means Ampere, Navi 2, or Tiger Lake or later.
The latter isn’t very useful right now: a device capable of decoding AV1 must be fairly modern, and such devices can usually just decode the original file without any transcoding at all.
I meant the former
AMD’s Radeon 7000 series can encode AV1 too. Most people probably go for Arc cards, though.