I’ve been wondering—why don’t we have an AI model that can take any piece of music, compress it into a super small “musical script” with parameters, and then generate it back so it sounds almost identical to the original? Kind of like MIDI or sheet music but way more detailed, capturing all the nuances. With modern AI, it seems like this should be possible. Is it a technical limitation, or are we just not thinking about it?
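(For a concrete picture of what such a "musical script" could look like, here is a minimal sketch using classic sinusoidal modeling rather than a neural net — each frame of audio is reduced to a handful of (frequency, amplitude, phase) triples and resynthesized from just those numbers. The frame size and the number of sinusoids kept are arbitrary illustrative choices; real neural codecs such as Meta's EnCodec or Google's SoundStream pursue this idea in a far more sophisticated way.)

```python
# Toy "parametric codec": describe each frame of audio by its K loudest
# sinusoids, then rebuild the audio from only those parameters.
import numpy as np

SR = 16000      # sample rate (Hz) -- illustrative choice
FRAME = 1024    # samples per analysis frame
K = 8           # sinusoids kept per frame: the "script" size knob

def encode(signal):
    """Return per-frame (freq_bin, amplitude, phase) triples for the K strongest peaks."""
    script = []
    for start in range(0, len(signal) - FRAME + 1, FRAME):
        spectrum = np.fft.rfft(signal[start:start + FRAME])
        top = np.argsort(np.abs(spectrum))[-K:]  # K strongest frequency bins
        script.append([(int(b), float(np.abs(spectrum[b])), float(np.angle(spectrum[b])))
                       for b in top])
    return script

def decode(script):
    """Rebuild audio by summing the stored sinusoids frame by frame."""
    out, t = [], np.arange(FRAME)
    for frame in script:
        y = np.zeros(FRAME)
        for bin_idx, amp, phase in frame:
            y += (2 * amp / FRAME) * np.cos(2 * np.pi * bin_idx * t / FRAME + phase)
        out.append(y)
    return np.concatenate(out)

# Demo: a clean two-note chord survives the round trip almost perfectly;
# noisy or percussive textures would not -- that's the core difficulty.
t = np.arange(SR) / SR
original = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 660 * t)
restored = decode(encode(original))
print("parameters stored:", K * 3 * (len(original) // FRAME), "vs", len(original), "samples")
```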
We already have quite good methods for compression, both lossy and lossless. Any “AI” method would have to reliably beat the benchmarks set by those. Seems like that hasn’t happened yet, though there definitely is research in that direction.
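(A rough look at the baseline any new method would have to beat, using only general-purpose DEFLATE from the Python standard library on raw 16-bit PCM — dedicated lossless audio codecs like FLAC typically do better still via linear prediction. The signals here are synthesized just for illustration.)

```python
# Compare zlib's lossless compression ratio on clean vs. noisy audio.
import zlib
import numpy as np

sr = 16000
t = np.arange(sr * 2) / sr
# A clean tone compresses well; add noise and the ratio collapses toward 1:1.
tone = (0.4 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)
noisy = (tone + np.random.randint(-2000, 2000, tone.shape)).astype(np.int16)

for name, pcm in [("pure tone", tone), ("tone + noise", noisy)]:
    raw = pcm.tobytes()
    packed = zlib.compress(raw, level=9)
    print(f"{name}: {len(raw)} -> {len(packed)} bytes ({len(raw) / len(packed):.1f}x)")
```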
Because AI can’t help but mess with shit. I’ve tried giving something like GPT an image and telling it to do nothing, just hand the same image back. Poof… it needed to add crap or change things for no reason.
Tools to compress music already exist. Use those; they work.
Data compression already works much like this. The most likely reason is that, under current encoding paradigms, compression is already about as good as it can get; mature codecs operate close to the information-theoretic limits of the signal. The kind of parameter notation you’re describing would probably not achieve greater compression than existing algorithms.
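(To make "about as good as it can be" concrete: a zeroth-order Shannon entropy estimate gives a lower bound on the output size of any codec that encodes bytes independently, and a general-purpose compressor already lands near it on noisy data. This is only a rough sketch — codecs that exploit correlations between samples can beat the zeroth-order bound, but not the true entropy of the source.)

```python
# Estimate the Shannon entropy lower bound for a byte stream and compare
# it to what zlib actually achieves.
import math
import os
import zlib
from collections import Counter

data = os.urandom(100_000)          # worst case: incompressible noise

counts = Counter(data)
n = len(data)
entropy_bits = -sum(c / n * math.log2(c / n) for c in counts.values())
bound = entropy_bits * n / 8        # entropy lower bound, in bytes

print(f"entropy bound: {bound:.0f} bytes")
print(f"zlib output:   {len(zlib.compress(data, 9))} bytes of {n}")
```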
Not AI, but maybe a MIDI file (or another format that holds instrument playback data) played back with the same instruments? You don’t need AI for this.
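(To show how compact instrument playback data already is, here’s a short arpeggio written as a Standard MIDI File. This assumes the third-party `mido` package (`pip install mido`); the melody and filename are made up for the example.)

```python
# Write a four-note arpeggio to a .mid file and report how few bytes it takes.
import os
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('program_change', program=0, time=0))  # acoustic piano
for note in (60, 64, 67, 72):                                    # C major arpeggio
    track.append(mido.Message('note_on', note=note, velocity=80, time=0))
    track.append(mido.Message('note_off', note=note, velocity=64, time=480))

mid.save('arpeggio.mid')
print(os.path.getsize('arpeggio.mid'), 'bytes for',
      round(mid.length, 2), 'seconds of music')
```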