A list of puns related to "Lossless Compression"
I posted a similar question on the GIMP subreddit, but I guess this is the more relevant sub for AVIF.
I see the GIMP plug-in only gives me a "nearly lossless" option and a quality slider. If I set the slider to 100 and uncheck "nearly lossless", will it give me a lossless result from the raw picture?
This may be a very silly question, but is there any downside to using Lossless Compression instead of Uncompressed when shooting RAW?
I have a D500 and always shoot 14 bit RAW (mainly for flexibility when processing images). I shoot sports, aviation and landscapes.
Any time I look into the Pros and Cons of Lossless Compression, the only thing I can find is the benefit of smaller files, but never any downsides. So help me out Reddit, what am I missing?
I just want to preface, for this post: I do not care if the computation required to decode something is high. Let's say my goal is to have the highest efficiency for archiving data, and I can afford to locally decompress whatever is needed.
So I understand some of the basic lossless compression ideas. For example, if you have AAAAAAAAB you could just say (8A)B or something (a run-length sketch is included below). But how complicated and robust have people attempted to make lossless compression algorithms?
For example, could you represent a string of data purely as numbers, and then take the square root of that number to shrink the size? Here is an example. Say you have a string representing 1000000. You could then represent that as 1000:1 (where 1 represents the number of times the original number was square-rooted). Or if you had 100000000, you could have 100:2. Of course, in practice you'd probably have to break your code up into many numbers, but the idea still stands.
So back to my question: have any super-powerful lossless compression algorithms been created? What is the strongest one out there?
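To make the run-length idea in the example above concrete, here is a minimal sketch in Python (my own illustration; `rle_encode` and `rle_decode` are made-up names, not from any library):

```python
def rle_encode(data: str) -> list[tuple[int, str]]:
    """Collapse runs of repeated characters into (count, char) pairs."""
    runs: list[tuple[int, str]] = []
    for ch in data:
        if runs and runs[-1][1] == ch:
            runs[-1] = (runs[-1][0] + 1, ch)
        else:
            runs.append((1, ch))
    return runs


def rle_decode(runs: list[tuple[int, str]]) -> str:
    """Expand (count, char) pairs back into the original string."""
    return "".join(ch * count for count, ch in runs)


original = "AAAAAAAAB"
encoded = rle_encode(original)          # [(8, 'A'), (1, 'B')], i.e. (8A)(1B)
assert rle_decode(encoded) == original  # lossless: the round trip is exact
print(encoded)
```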
100 MB to <15 MB Wikipedia excerpt.
From website: "This compression contest is motivated by the fact that being able to compress well is closely related to acting intelligently, thus reducing the slippery concept of intelligence to hard file size numbers."
I hope people put their mind to this. The most important area of science is the one that can read & understand all the other areas of science, nearly instantly.
I have started downloading YouTube videos, and a lot of them. I have a folder full of ones that I would like to keep but won't watch often. Is there a way to losslessly compress that folder really well? I don't mind if it takes ages and I have to keep the program running overnight. I also don't mind paying for some high-end software. I don't need to encrypt or have a password or anything. How can I get the data as small as possible?
What do you use for image compression?
I recently found ImageOptim for Mac (or online) and it's changed my game. I will never upload another image online without first running it through this.
Combined with iResize, I'm able to format images for the web in seconds.
How do you all deal with image compression for the web? Let me know your tips and strategies!
I just listened to the second George Hotz episode, and I'm very intrigued by one particular idea that was raised.
At around 2:36:50, Lex brings up the Hutter Prize. This quote is part of George's response:
>"It's lossless compression. And... that is equivalent to intelligence."
I'm not sure I fully understand what he means by that statement. Is he saying that the act of losslessly compressing data necessitates intelligence in some fundamental way?
I've googled "lossless compression implies intelligence" and similar phrases, but nothing seems to yield much, and I've read about the Hutter Prize. I understand some of the computational similarities between text compression and AI, and from a conceptual standpoint I can understand why efficient compression of natural language text would be a hard problem.
But I would really like to go further and understand the logic behind all of this. What are the specifics of the connection between natural language compression and general intelligence? For example, can this same concept be extrapolated onto other AI problems outside of the space of natural language? Is object recognition in a digital image processing algorithm a compression algorithm in some ways, which serves to compress the entire pixel contents of an image down to a single classifier string? This wouldn't be lossless, which is what makes me think that I'm framing all of this incorrectly in my mind.
Would greatly appreciate any insight from others or any recommended reading on the topic. Thanks!
EDIT: Seems like this post is continually gathering a slow flow of new attention over time, as more people listen to the episode. I personally found Lex's episode with Hutter to be especially helpful in understanding this topic, and would recommend it to anyone else who's curious.
I saw this explanation of lossless compression on 4chan.
Take a sequence of 3 numbers with total length L, so each one takes up a third of it ---> L/3 + L/3 + L/3 = L
After a lossless compression algorithm is used, that total length must remain the same in order to be lossless, so this doesn't reduce the file size ----> L/4 + L/3 + 5L/12 = L <---- one box is compressed and another is stretched (2/3 of the inputs are altered in some way).
Now take the set of all binary strings of 6 bits and try to map it to the set of binary strings of 5 bits, i.e. strings with a shorter length.
In other words, this is a mapping from a set with 2^n elements (inputs) to a set with 2^m elements (outputs), where 2^n > 2^m.
Such a function cannot be injective: by the pigeonhole principle, there will be at least one case where 2 different strings of 6 bits map to the same string of 5 bits. That means there is no way to reverse the function from the output back to the original input, and therefore no way to produce a lossless algorithm that decreases the file size of every image/video, etc.
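As a sketch of that counting argument (my own illustration, not from the original post): any scheme that maps every 6-bit string to a 5-bit string must send two different inputs to the same output, so it cannot be decoded unambiguously.

```python
from itertools import product

# All 2^6 = 64 possible 6-bit inputs and only 2^5 = 32 possible 5-bit outputs.
inputs = ["".join(bits) for bits in product("01", repeat=6)]

def toy_compressor(s: str) -> str:
    """A stand-in 'compressor': just drop the last bit (any other rule hits the same wall)."""
    return s[:5]

outputs = [toy_compressor(s) for s in inputs]

# 64 inputs squeezed into at most 32 distinct outputs: collisions are unavoidable.
print(len(inputs), "inputs ->", len(set(outputs)), "distinct outputs")
assert len(set(outputs)) < len(inputs)  # pigeonhole: the map cannot be inverted
```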
Release 1.8 brings bug fixes, improved performance, better parallelism, and better compression at levels 1 and 5.
See https://github.com/flanglet/kanzi-go for code and performance numbers.
Warning: The bitstream format has changed (and may change until release 2.0). Also, always keep a backup of your files.
PeaZip, like other archive manager applications, is a lossless file compressor, even though it also implements some routines to convert, resize, and compress graphic files using both lossy and lossless algorithms.
Lossless compression means no information is lost in encoding the larger uncompressed content into the smaller compressed content, so after decompression the resulting output is 1:1 identical to the original uncompressed content. This makes this class of algorithms well suited to encoding data that is not resilient to modification, where even a single altered byte would be unacceptable (such as executables, databases, documents, and so on).
Read more about the similarities and differences between lossless and lossy compression.
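As a quick illustration of the 1:1 round-trip property described above, here is a minimal sketch using Python's standard zlib module (my own example, not part of PeaZip):

```python
import zlib

original = b"executables, databases, documents: every byte must survive intact." * 100

compressed = zlib.compress(original, level=9)   # DEFLATE, a lossless algorithm
restored = zlib.decompress(compressed)

print(f"{len(original)} bytes -> {len(compressed)} bytes compressed")
assert restored == original  # bit-for-bit identical after decompression
```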
Here's a link to the latest blog updates!
https://dolphin-emu.org/blog/2020/07/05/dolphin-progress-report-may-and-june-2020/
Would this be beneficial to our netplay community?
Is there a lossless audio codec with a better compression ratio than FLAC? Or is FLAC still as good as it gets? If not, what other lossless audio codecs are comparable (in terms of their compression ratio)?
I just did an update and users are getting errors that could only make sense if lossy compression was being applied to my asset files.
Research paper at: https://arxiv.org/abs/1905.06845
Implementation on GitHub: https://github.com/fhkingma/bitswap
> The bits-back argument suggests that latent variable models can be turned into lossless compression schemes. Translating the bits-back argument into efficient and practical lossless compression schemes for general latent variable models, however, is still an open problem. Bits-Back with Asymmetric Numeral Systems (BB-ANS), recently proposed by Townsend et al. (2019), makes bits-back coding practically feasible for latent variable models with one latent layer, but it is inefficient for hierarchical latent variable models. In this paper we propose Bit-Swap, a new compression scheme that generalizes BB-ANS and achieves strictly better compression rates for hierarchical latent variable models with Markov chain structure. Through experiments we verify that Bit-Swap results in lossless compression rates that are empirically superior to existing techniques.
I have been reading this research paper and have had a look at the implementation. The demo runs as described in the paper. However, I am having trouble wrapping my head around the inference pipeline, as I mostly implement models in Keras while this is a Torch implementation. I would really appreciate it if anyone could explain the paper/implementation in slightly simpler terms. Thanks :)
After reading about Shannon's entropy and source-coding theory, it seems like there's no way to progress further in lossless compression. We've already hit the limit with things like Huffman coding. Is my understanding correct?
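For context on the limit being asked about, here is a small sketch (my own illustration, with an assumed toy distribution) comparing the Shannon entropy H = -Σ p·log2(p) of a source with the average code length a Huffman code achieves for it:

```python
import heapq
import math

# A toy symbol distribution, assumed purely for illustration.
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon entropy: the lower bound, in bits per symbol, for any lossless symbol code.
entropy = -sum(p * math.log2(p) for p in probs.values())

# Build a Huffman tree with a min-heap of (probability, tie_breaker, subtree).
heap = [(p, i, sym) for i, (sym, p) in enumerate(probs.items())]
heapq.heapify(heap)
next_id = len(heap)
while len(heap) > 1:
    p1, _, left = heapq.heappop(heap)
    p2, _, right = heapq.heappop(heap)
    heapq.heappush(heap, (p1 + p2, next_id, (left, right)))
    next_id += 1

def code_lengths(tree, depth=0):
    """Walk the tree; a leaf's depth is its code length (at least 1 bit)."""
    if isinstance(tree, str):
        return {tree: max(depth, 1)}
    left, right = tree
    lengths = code_lengths(left, depth + 1)
    lengths.update(code_lengths(right, depth + 1))
    return lengths

avg_len = sum(probs[s] * l for s, l in code_lengths(heap[0][2]).items())
print(f"entropy = {entropy:.3f} bits/symbol, Huffman average = {avg_len:.3f} bits/symbol")
# Here Huffman meets the entropy exactly (1.75 bits/symbol) because the probabilities
# are powers of 1/2; in general it can overshoot by up to about 1 bit per symbol,
# a gap that arithmetic coding / ANS closes, so Huffman alone is not the final word.
```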
Hi,
I would like to know the easiest way to visualize the difference between lossless audio and a lossy-compressed version of the same file. The idea is to visually represent the amount of information lost when compressing to a lossy format like MP3.
I tried the 'Analysis' option in Audacity, but I'm unable to notice any significant difference. Trying to graph the uncompressed data in Python takes a long time and freezes my system.
What are my options?
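One common approach, sketched below purely as an illustration, is to compare spectrograms rather than raw waveforms, since lossy codecs mostly discard high-frequency and perceptually masked content. This assumes both versions have already been decoded to WAV at the same sample rate (e.g. with ffmpeg); the file names are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Placeholder paths: the original and the MP3 decoded back to WAV (e.g. via ffmpeg).
rate_a, lossless = wavfile.read("original.wav")
rate_b, lossy = wavfile.read("decoded_from_mp3.wav")
assert rate_a == rate_b  # assumed: both decoded at the same sample rate

def to_mono(x):
    """Average the channels and cast to float so the two files are comparable."""
    x = x.astype(np.float64)
    return x.mean(axis=1) if x.ndim > 1 else x

lossless, lossy = to_mono(lossless), to_mono(lossy)
n = min(len(lossless), len(lossy))  # trim to a common length (codecs may pad)

fig, axes = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
for ax, signal, title in [(axes[0], lossless[:n], "Lossless"),
                          (axes[1], lossy[:n], "Lossy (decoded MP3)")]:
    f, t, sxx = spectrogram(signal, fs=rate_a, nperseg=2048)
    ax.pcolormesh(t, f, 10 * np.log10(sxx + 1e-12))  # power in dB
    ax.set_title(title)
    ax.set_xlabel("Time [s]")
axes[0].set_ylabel("Frequency [Hz]")
plt.tight_layout()
plt.show()
```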
Does anybody know of any algorithms that provide lossless compression of 2-D coordinates? Also, would you happen to know the compression rate? Let's say you have 9 points on a 2-D plane, each with their respective X, Y coordinates. Currently that means 18 different numbers have to be stored. Are there any algorithms that will consistently (i.e. regardless of whether there are patterns or correlations in the data) reduce that number?
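For what it's worth, here is a sketch (my own illustration) of one typical approach when the coordinates do have structure: delta coding plus a variable-length integer encoding. Note that it only wins because nearby points produce small deltas; no lossless scheme can guarantee a reduction regardless of patterns.

```python
import struct

# Hypothetical clustered points (9 points -> 18 numbers).
points = [(100, 200), (101, 203), (103, 205), (104, 209), (107, 210),
          (108, 214), (110, 215), (113, 219), (115, 220)]

def zigzag(n: int) -> int:
    """Map signed ints to unsigned ones so small negative deltas stay small."""
    return (n << 1) ^ (n >> 63)

def varint(u: int) -> bytes:
    """LEB128-style variable-length encoding: small values take fewer bytes."""
    out = bytearray()
    while True:
        byte = u & 0x7F
        u >>= 7
        out.append(byte | 0x80 if u else byte)
        if not u:
            return bytes(out)

# Baseline: every coordinate stored as a fixed 32-bit integer.
raw = b"".join(struct.pack("<ii", x, y) for x, y in points)

# Delta coding: store the first point, then only differences (every step is reversible).
encoded = varint(zigzag(points[0][0])) + varint(zigzag(points[0][1]))
for (x0, y0), (x1, y1) in zip(points, points[1:]):
    encoded += varint(zigzag(x1 - x0)) + varint(zigzag(y1 - y0))

print(len(raw), "bytes fixed-width vs", len(encoded), "bytes delta+varint")
# About 72 bytes vs about 20 here, but only because the deltas are small; for
# arbitrary uncorrelated coordinates no lossless scheme can guarantee a reduction.
```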
Seems like you get the benefits of raw with a smaller file. Am I missing something?
Not sure if this is the right subreddit; if not, please redirect me to the best one.
I am using PeaZip to compress folders that contain many files (docs, PDFs, images) and other already-compressed files so that they take up less space.
I am confused about what the best lossless compression format is. I checked PeaZip's benchmark, but I got confused by the wording.
https://preview.redd.it/xb572qgfc8v41.png?width=1071&format=png&auto=webp&s=53deb1da7d3273d5eebbe3f4af4db2b5eb513c1d
Also, when I try to compress some folders, the output is the exact same size (e.g. a 34 GB folder becomes a 34 GB compressed version). Am I using the PeaZip software incorrectly? Or is there a particular lossless compression format I should choose?
Have we hit the known limits for things like text, image, and video lossless compression? Can those be calculated somehow?
With games like RDR2 and CoD MW being over 150 GB in size, games will soon become too big, and you will have to have a separate HDD for every game. It is understandable, because game graphics get more and more detailed every year, but IMHO the growth in size is not sustainable.
So devs should really stop being "lazy" and start considering ways of maintaining graphical fidelity without being wasteful of resources. Sure, games as a streaming service are a big thing and will probably take off, but the problem will remain if you have to download 10 GB of data just to enter one room and move around in it.
This is something that has always bothered me, and I want to find out the answer. In FL Studio, when you export as .flac, you can choose between 8 different levels of compression; the more compression you apply, the more it reduces the file size. However, FLAC is supposed to be lossless compression as far as I know, and should sound exactly identical to .wav, basically perfect audio, hence lossless.
If so, then why is there the option of multiple levels of compression in the first place? If the lower levels just leave your file taking up more space with no benefit, wouldn't the highest level of FLAC compression always be objectively the best, period? Is there something I'm missing?
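The same trade-off shows up in general-purpose compressors, so here is a small analogy sketched with Python's zlib (my own example, not FLAC itself): higher levels spend more CPU time searching for a smaller encoding, but every level decodes back to exactly the same data.

```python
import random
import time
import zlib

# Stand-in data, assumed for illustration: ~1 MB drawn from a small alphabet,
# so it is compressible (think of it as standing in for raw audio samples).
random.seed(0)
data = bytes(random.choice(b"AABCDAB") for _ in range(1_000_000))

for level in (1, 5, 9):  # analogous to compression levels: same data back, different effort
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    assert zlib.decompress(compressed) == data  # every level is equally lossless
    print(f"level {level}: {len(compressed):>7} bytes in {elapsed * 1000:.1f} ms")
```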