I went back through Wikipedia’s current events from June 2025 and put together a rough timeline:
- 2025-06-12: IAEA declares Iran in breach of its nuclear non-proliferation obligations.
- 2025-06-13: Israel attacks Iran, targeting nuclear facilities and senior government officials. Iran retaliates with missiles and drones.
- 2025-06-17: Donald Trump calls for Iran’s unconditional surrender. Iran refuses.
- 2025-06-22: The US announces it has bombed multiple nuclear facilities in Iran – notably the deeply buried sites at Fordow and Natanz – using its unusual ~30,000 pound (~13,600 kg) Massive Ordnance Penetrator bombs. Iran retaliates by firing missiles at a US base in Qatar and by voting to close the Strait of Hormuz.
- 2025-06-24: The US and Qatar mediate a ceasefire between Israel and Iran. Donald Trump says Israel and Iran “don’t know what the fuck they are doing” as both countries initially keep fighting. It seems to be holding so far after that rocky start, though?
I’ve been trying to figure out a related sort of video streaming setup for work (without Owncast, but with a similar 24/7 goal plus some other considerations) and have been looking into using ffmpeg to output either HLS or DASH segments plus manifests. (ffmpeg can do both, but I don’t yet know which would suit my needs better.) The sources I’m working with are RTSP/RTP rather than RTMP, and for now I only need to stream to browser clients – though it’s nice that VLC also works naturally when pointed at the manifest.
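Roughly what I have in mind is something like the following – an untested sketch, with the RTSP URL and output paths as placeholders:

```
# Pull one RTSP source and write HLS segments plus a rolling manifest.
# (rtsp://camera.example/stream and /var/www/stream/ are placeholders.)
ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream \
  -c:v copy -c:a aac \
  -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
  /var/www/stream/live.m3u8

# The DASH variant swaps the muxer and writes an .mpd manifest instead:
ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream \
  -c:v copy -c:a aac \
  -f dash -seg_duration 4 -window_size 6 \
  /var/www/stream/live.mpd
```

(Copying the video stream and only re-encoding audio keeps CPU use low, assuming the cameras already produce H.264.)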
HLS and DASH work by splitting the video into small chunks that can be downloaded over HTTP, so simply updating the manifest allows for continuous streaming (the client pulls it repeatedly) without the server needing to maintain a persistent connection to the client. (Fan-out to CDNs also works naturally, since the video chunks are just files that any web server can serve.)
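To make that concrete, a live HLS media playlist is just a short text file listing the most recent segments (the filenames and numbers below are made up). Because there’s no #EXT-X-ENDLIST tag, players treat it as live and keep re-fetching it as new segments appear:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:1042
#EXTINF:4.000,
live1042.ts
#EXTINF:4.000,
live1043.ts
#EXTINF:4.000,
live1044.ts
```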
It should be possible to do some creative things, either by creating or modifying the manifests myself with scripting, or by piping chunks into another instance of ffmpeg from a script. (I’ve done something similar using `-f image2pipe` in the past, but that was for cases where I needed to do things like build a video dynamically from an image gallery.) That’s as far as I’ve gotten with it myself, though. I don’t know what the right answer is either, but I’m also interested in finding out and hope you get some additional responses.
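For what it’s worth, the piping idea I’m picturing is shaped something like this – again just a sketch, where the URL, the ./mangle-stream.sh script, and the paths are all made-up placeholders:

```
# One ffmpeg remuxes the RTSP source to MPEG-TS on stdout, a script in the
# middle does whatever per-chunk processing is needed, and a second ffmpeg
# re-segments the result into HLS for the browser clients.
ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream -c copy -f mpegts - \
  | ./mangle-stream.sh \
  | ffmpeg -f mpegts -i - -c copy \
      -f hls -hls_time 4 -hls_list_size 6 /var/www/stream/live.m3u8
```

The image2pipe case was the same shape, just with JPEG frames going down the pipe instead of an MPEG-TS stream.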