The Jellyfin server can offload on-the-fly video transcoding to an integrated or discrete graphics card (GPU) suited to accelerating these workloads very efficiently without straining your CPU. The Jellyfin server uses a modified version of FFmpeg as its transcoder, namely jellyfin-ffmpeg. It enables the Jellyfin server to access the fixed-function video codecs, video processors, and GPGPU computing interfaces provided by the vendor of the installed GPU and the operating system.

The supported and validated video hardware acceleration (HWA) methods are:

- Intel/AMD Video Acceleration API (VA-API, Linux only)
- Raspberry Pi Video4Linux2 (V4L2, Linux only)

The transcoding pipeline usually has multiple stages, which can be simplified to: video decoding, video scaling & format conversion (optional), and video encoding.

As of Jellyfin 10.8, hardware acceleration on Raspberry Pi via OpenMAX OMX was dropped and is no longer available. This decision was made because Raspberry Pi is currently migrating to a V4L2-based hardware acceleration, which is already available in Jellyfin but does not support all the features other hardware acceleration methods provide, due to lacking support in FFmpeg. Jellyfin will fall back to software de-/encoding for those use cases. The current state of hardware acceleration support in FFmpeg can be checked on the rpi-ffmpeg repository.

Hardware acceleration options can be found in the Admin Dashboard under the Transcoding section of the Playback tab. Select a valid hardware acceleration method from the drop-down menu and a device if applicable. Supported codecs need to be indicated by checking the boxes under Enable hardware decoding for and Hardware encoding options. The hardware acceleration is then available immediately for media playback.

If your Jellyfin server does not support hardware acceleration, but you have another machine that does, you can leverage rffmpeg to delegate the transcoding to that machine.

The "best" settings for any given situation depend on your use case and preferences. H.264 is often considered a good default for Plex/Emby/Jellyfin/etc., as these days it's compatible with damn near everything, so you can get away with direct streaming everything, meaning you don't need to worry about server specs or load since the CPU/GPU isn't really used. However, limiting yourself to direct streaming can also be quite limiting: newer formats like H.265, VP9, and AV1 are more efficient, so you can have smaller files at a similar level of video quality (technically, you can direct stream these formats too, but support varies from client to client and is less universal than H.264).

You can also run into limitations regarding subtitles. If you're working with DVD/Blu-ray rips, they'll have image-based subtitles that aren't supported on most clients, so you'll either need to burn them in beforehand (which means you can't turn them off or offer more than one subtitle option), or you'll need to transcode to burn them in as you play the file. You could potentially use OCR to get SRT subtitles (which are more widely supported) from the image-based DVD/Blu-ray subs, but it's not perfect: you end up with errors, and you lose things like the font and positioning of the original subs, which at best loses some information and at worst leaves the subs covering important onscreen info.
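To see which hardware acceleration methods a given FFmpeg build was compiled with, and to try a full VA-API transcode by hand, something like the following can be used. This is a sketch, not Jellyfin's exact invocation: the jellyfin-ffmpeg path, the render node `/dev/dri/renderD128`, and the filenames are assumptions that differ per system.

```shell
# List the hwaccel methods this ffmpeg build supports
# (path assumed for jellyfin-ffmpeg; a stock `ffmpeg` works the same way)
/usr/lib/jellyfin-ffmpeg/ffmpeg -hwaccels

# Full hardware VA-API transcode: frames stay on the GPU for both
# decode and encode, so the CPU is barely touched.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi \
       -i input.mkv \
       -c:v h264_vaapi -b:v 4M \
       -c:a copy \
       output.mkv
```

On Intel/AMD systems, `vainfo` is a quick way to confirm the VA-API driver and the profiles it exposes before pointing Jellyfin at the device.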
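As a rough illustration of the efficiency trade-off described above, a software re-encode from H.264 to H.265 might look like this; the CRF value and preset are illustrative assumptions, not recommendations:

```shell
# Software H.265 (HEVC) re-encode: smaller file at comparable visual
# quality, at the cost of encode time and narrower client support.
ffmpeg -i input.mkv -c:v libx265 -crf 24 -preset medium -c:a copy output-hevc.mkv
```

Lower CRF means higher quality and larger files; slower presets trade encode time for compression efficiency.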
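Burning image-based subtitles (e.g. PGS from Blu-ray or VobSub from DVD) in ahead of time can be sketched with FFmpeg's overlay filter. The stream index `0:s:0` (first subtitle track) and the filenames are assumptions to adapt:

```shell
# Overlay the first subtitle stream onto the video and re-encode.
# The subtitles become part of the picture and can no longer be toggled off.
ffmpeg -i input.mkv \
       -filter_complex "[0:v][0:s:0]overlay[v]" \
       -map "[v]" -map 0:a \
       -c:v libx264 -crf 18 -c:a copy \
       output-burned.mkv
```

Text-based subtitles (SRT/ASS) would instead be rendered with the `subtitles` filter, since they have no image to overlay.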