Hey everyone, I’m building a new server to run Jellyfin (with a few other services like Pi-hole) and I’m stuck on whether to go with GPU or CPU transcoding.

My main concern is smooth 4K HDR transcoding for 1 stream. I’ve been reading mixed advice online – some people say a strong CPU with good single-core performance can handle it, while others recommend a dedicated GPU.

Should I focus my budget (~$1000AUD/$658USD) on a good CPU, or spend some of it on a dedicated GPU?

  • SmallBorg@lemm.ee · 8 months ago

    I also struggled to get hardware transcoding working on an Intel Celeron N5100. What fixed it was following the instructions in the Jellyfin documentation: there are some additional steps for certain generations of Intel CPUs, so it could be that yours is affected. After enabling “Low-Power Encoding” it worked as expected.
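
    On my N5100 the extra step was, if I remember right, enabling the GuC/HuC firmware for the i915 driver on the host, which “Low-Power Encoding” depends on. Roughly what I ran on Debian, but double-check the Jellyfin docs for your exact CPU generation:

      # Install the firmware blobs and the non-free Intel media driver (iHD),
      # then tell the i915 driver to load the HuC firmware that low-power
      # encoding needs. Commands assume a Debian-based host.
      sudo apt install firmware-misc-nonfree intel-media-va-driver-non-free
      echo "options i915 enable_guc=2" | sudo tee /etc/modprobe.d/i915.conf
      sudo update-initramfs -u
      sudo reboot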

      • SmallBorg@lemm.ee · 8 months ago

        Yes, I have it installed as a Docker container on a Debian 12 machine. My Docker Compose file looks something like this:

          jellyfin:
            container_name: jellyfin
            image: jellyfin/jellyfin
            group_add:
              # GIDs of the groups that own the /dev/dri nodes on my host
              # (render, video, etc.); yours will likely differ, so look
              # them up as described in the Jellyfin docs.
              - "105"
              - "44"
              - "102"
            devices:
              # Pass the iGPU's render node and card device through to the
              # container so QSV/VA-API can use it.
              - /dev/dri/renderD128:/dev/dri/renderD128
              - /dev/dri/card0:/dev/dri/card0
        

        The group numbers were obtained by following Jellyfin’s documentation.
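
        In case it helps, the look-up is just a couple of host commands; the group names and device paths below are the usual Debian ones, so check what your system actually uses:

          # Print the GIDs of the groups that typically own the GPU nodes
          # (e.g. "render:x:105:" and "video:x:44:"), and confirm which GID
          # actually owns each /dev/dri device.
          getent group render video
          stat -c '%g %n' /dev/dri/*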

        You also need to configure Jellyfin from Menu > Playback. For “Hardware acceleration” I selected “Intel QuickSync (QSV)”. Under “Enable hardware decoding from:” I selected everything except AV1 (not supported by my CPU), and under “Hardware encoding options:” I enabled all three.
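
        To confirm the container can actually reach the iGPU, I believe you can run the vainfo binary bundled with jellyfin-ffmpeg in the official image, then watch the GPU on the host while forcing a transcode; something like:

          # Should list the VA-API/QSV profiles the iGPU exposes inside the
          # container (path assumes the official jellyfin/jellyfin image).
          docker exec -it jellyfin /usr/lib/jellyfin-ffmpeg/vainfo

          # On the host, watch the video engine while a transcode is running.
          sudo apt install intel-gpu-tools
          sudo intel_gpu_top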