if you could pick a standard format for a purpose what would it be and why?
e.g. flac for lossless audio because…
(yes you can add new categories)
summary:
- photos .jxl
- open-domain image data .exr
- videos .av1
- lossless audio .flac
- lossy audio .opus
- subtitles .srt/.ass
- fonts .otf
- container .mkv (doesn't support .jxl)
- plain text UTF-8 (many also say markup but disagree on the implementation)
- documents .odt
- archive files (this one is causing a bloodbath so I picked randomly) .tar.zst
- configuration files .toml
- typesetting Typst
- interchange format .ora
- models .gltf / .glb
- DAW session files .dawproject
- OTDR measurement results .xml
That is not what 96 kHz means. It doesn't just mean it can store frequencies up to that frequency; it means there are 96,000 samples every second, so you capture more detail in the waveform.
Having said that, I'll give anyone £1m if they can tell the difference between 48 kHz and 96 kHz. 96 kHz and 192 kHz should absolutely be used for capture, but they are absolutely not needed for playback.
It means it can capture any frequency up to half the sample rate, perfectly. The "extra detail" in the waveform is higher frequencies beyond the range of human hearing.
That is what it means. Any detail in the waveform that is not captured by a 48 kHz sample rate is due to frequencies that humans can't hear.
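A quick numerical way to see why higher rates matter for capture but not playback (a toy sketch with numpy; the 30 kHz / 18 kHz tone pair is my own choice, not from the thread): at a 48 kHz sample rate, an ultrasonic 30 kHz tone produces exactly the same sample values as an inverted 18 kHz tone, because frequencies above Nyquist fold back down. After sampling, the two are literally indistinguishable.

```python
# Aliasing sketch at fs = 48 kHz: a 30 kHz tone sits above the 24 kHz
# Nyquist limit, so it folds to 48 kHz - 30 kHz = 18 kHz (sign-flipped)
# and yields the very same sample values.
import numpy as np

fs = 48_000
n = np.arange(1_000)                      # sample indices
tone_30k = np.sin(2 * np.pi * 30_000 * n / fs)
alias_18k = -np.sin(2 * np.pi * 18_000 * n / fs)

print(np.allclose(tone_30k, alias_18k))  # → True: samples coincide
```

That folding is why you low-pass filter (or oversample) when recording, and why extra rate buys nothing once everything above hearing range has been filtered out.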
This is a misconception about how waves are reconstructed. Each sample is a single point in time, but the sampling theorem says that if you have a bunch of discrete samples, equally spaced in time, there is one and only one continuous solution that hits those samples exactly, provided the original signal did not contain any frequencies above Nyquist (half the sample rate). Sampling any higher than that gives you no further useful information: there is still only one solution.
tl;dr: the reconstructed signal is a continuous analog signal, not a stair-step-looking thing.
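The uniqueness claim above can be sketched numerically (my own toy example with numpy, not from the thread): Whittaker–Shannon interpolation rebuilds the continuous waveform from the samples alone, and evaluating it halfway between sample points lands back on the original smooth sine rather than on a staircase.

```python
# Whittaker-Shannon reconstruction sketch: from the samples alone,
# x(t) = sum_k x[k] * sinc(fs*t - k) is the unique band-limited signal
# through those points. Evaluating it between the samples recovers the
# original smooth sine, not a stair-step.
import numpy as np

fs = 48_000
f = 1_000                  # test tone, far below Nyquist (24 kHz)
n = np.arange(1_024)       # sample indices
samples = np.sin(2 * np.pi * f * n / fs)

t = (n[:-1] + 0.5) / fs    # times halfway between samples, in seconds
recon = np.sinc(fs * t[:, None] - n[None, :]) @ samples

truth = np.sin(2 * np.pi * f * t)
# truncating the infinite sum hurts near the edges, so check the middle
err = np.max(np.abs(recon[256:-256] - truth[256:-256]))
print(err)                 # tiny: mid-point values match the sine
```

In a real DAC the same job is done by an analog reconstruction filter rather than an explicit sinc sum, but the principle is identical: the output is continuous, and more samples per second would not change which waveform comes out.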