

First seeing this on my home feed on Jan 7th. Relieved to find the post is 5 days old…




You mention frigate specifically. Were you running this on the system when the drive failed, or is this a future endeavour?
I bring this up because I also use frigate, and for some time I was running with a misconfigured docker compose that drove my SSD wearout to 40% in a matter of months.
Make sure the tmpfs is configured per the frigate documentation and example config. If it's misconfigured like mine was, all of that IO lands on disk. I believe the ramdisk is used for temporary storage of the camera streams until an event occurs, at which point the corresponding clip is committed to disk.
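For reference, the relevant bit of the frigate example docker compose looks roughly like this (check the current docs for your version; the size and paths here are just the documented starting point):

```yaml
services:
  frigate:
    # ... image, devices, config volume, etc.
    volumes:
      # Cache for in-progress recordings lives in RAM, not on the SSD
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000  # ~1GB; size to suit camera count/bitrate
```

If that tmpfs mount is missing, the cache writes fall through to the container's filesystem, i.e. your disk.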
Good luck!


Pulling around 200W on average.
The options I’m looking at have PCIe 4 and seem to be gen 2? Epyc 7282 or 7302.
I think this is where I'm headed. Is there anything to consider with Threadripper vs Epyc? I'm seeing lots of CPU/MOBO/RAM combos on eBay for 2nd gen Epycs. Many posts on reddit confirming the legitimacy of particular sellers, plus PayPal buyer protection, have me tempted.
Thanks, I'll need to have a look at how the chipset link works, and how the southbridge aggregates incoming PCIe lanes to reduce the number of connections from the 24 in my example down to the 4 available. Even so, considering these devices are typically PCIe 3.0, at maximum spec they could swamp the link with 3x the data it has bandwidth for (24 lanes of 3.0 is ~23.6GB/s, vs 4 lanes of 4.0 at ~7.9GB/s).
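A quick back-of-envelope check of those figures (per-lane rates are the usable rates after 128b/130b encoding overhead, which both Gen 3 and Gen 4 use):

```python
# PCIe 3.0 runs at 8 GT/s per lane, PCIe 4.0 at 16 GT/s;
# 128b/130b encoding leaves 128/130 of that as usable bits.
def lane_gbps(gt_per_s: float) -> float:
    """Usable GB/s per lane after 128b/130b encoding overhead."""
    return gt_per_s * (128 / 130) / 8  # GT/s -> GB/s

gen3_x24 = 24 * lane_gbps(8)   # 24 lanes of downstream Gen 3 devices
gen4_x4 = 4 * lane_gbps(16)    # x4 Gen 4 chipset uplink
print(f"{gen3_x24:.2f} GB/s vs {gen4_x4:.2f} GB/s, "
      f"ratio {gen3_x24 / gen4_x4:.1f}x")
```

So the devices could indeed offer exactly 3x what the uplink can carry (24 Gen 3 lanes vs 4 double-rate Gen 4 lanes: 24/8 = 3).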
I hadn’t considered AMD, really only due to the high praise I’m seeing around the web for QuickSync, and AMD falling behind both Intel and nvidia in hwaccel. Certainly will consider if there’s not a viable option with QS anyway.
And you're right, the southbridge provides additional PCIe connectivity (on both AMD and Intel), but bandwidth has to be considered. Connect an HBA (x8), 2x M.2 SSDs (x8), and a 10Gb NIC (x8) over the same x4 link for something like a TrueNAS VM (ignoring other VM IO requirements), and you're going to be hitting the NIC and HBA and/or SSDs (think ZFS cache/logging) at max simultaneously, saturating the link and creating a significant bottleneck, no?
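Rough worst-case numbers for that scenario, assuming the devices above are Gen 3 behind a x4 Gen 4 uplink and all burst at once (the NIC is capped by its 10Gb line rate rather than its slot width):

```python
GEN3_LANE = 8 * (128 / 130) / 8  # ~0.985 GB/s usable per PCIe 3.0 lane

# Peak demand each device could put on the shared chipset link
demand = {
    "HBA (x8 Gen 3)": 8 * GEN3_LANE,
    "2x M.2 SSD (x8 Gen 3 total)": 8 * GEN3_LANE,
    "10Gb NIC (line rate)": 10 / 8,  # 10 Gb/s ~= 1.25 GB/s
}
uplink = 4 * 16 * (128 / 130) / 8    # x4 PCIe 4.0 uplink, ~7.9 GB/s

total = sum(demand.values())
print(f"peak demand ~{total:.1f} GB/s vs uplink ~{uplink:.1f} GB/s")
```

Around 17 GB/s of potential demand against an ~8 GB/s link, so yes, a sustained ZFS scrub or replication job hitting HBA + SSD + NIC together would bottleneck on the uplink.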
Just on point A. You can configure the maximum number of conflicts allowed for each folder.
I was running into conflicts with Obsidian notes. Reduced the max conflicts on those folders to zero, problem gone.
It’s in the folder specific advanced settings.
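Assuming this is Syncthing, the same setting also appears in config.xml as the folder's maxConflicts element (the folder id and path below are placeholders):

```xml
<folder id="obsidian-notes" path="/home/user/notes" type="sendreceive">
    <!-- 0 disables conflict copies entirely; the losing version of a
         conflicting change is discarded rather than kept as a
         .sync-conflict file -->
    <maxConflicts>0</maxConflicts>
</folder>
```

Worth noting the trade-off: with it set to zero you won't accumulate conflict files, but you also lose the safety copy when two devices edit the same note.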