Trigrad is a novel image compression algorithm that compresses images using triangle gradients (barycentric coordinates). This differs heavily from other compression techniques such as JPEG, which typically use signal transforms (such as the discrete cosine transform) as the basis for their compression.
How well does it work?
Not well enough to push onto people yet. Current results show that Trigrad can only beat a format like JPEG under optimal conditions.
How does it look?
One of the best tests of a compression algorithm is how its artifacts look. Here's a comparison of a mostly uncompressed image and one compressed with Trigrad, using 3,000 samples. (Left: Uncompressed. Right: Compressed.)
Of course, with a higher number of samples, a better quality output can be produced. The same image with 100,000 samples starts to look a lot more accurate. (Left: Uncompressed. Right: Compressed.)
How does it work?
This is where Trigrad gets interesting. Throughout development, I've made sure to produce as many visualizations as I can for the purposes of debugging. This will allow me to explain the process of compression and decompression visually.
Edge detection

This step is vital. An edge-detection filter (currently Sobel) is used to produce a table of values. These values dictate the chance that a sample will be placed at each pixel later on.
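The edge-detection step can be sketched as follows. This is a minimal illustration, not Trigrad's actual code: the function names are hypothetical, and it assumes a grayscale image stored as a list of rows with pixel values in 0..255.

```python
# Hypothetical sketch of Trigrad's first step: run a Sobel filter over a
# grayscale image and normalise the gradient magnitudes into a table of
# per-pixel sampling chances.

def sobel_magnitude(img):
    """Return per-pixel gradient magnitude (border pixels get 0)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel kernels.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def sample_chances(img):
    """Normalise gradient magnitudes into per-pixel sample probabilities."""
    mag = sobel_magnitude(img)
    total = sum(sum(row) for row in mag) or 1.0
    return [[v / total for v in row] for row in mag]
```

Pixels on strong edges end up with a higher chance, so the sampler concentrates its budget where the image actually changes.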
Sampling (~100,000 samples)
This is a visualization of which samples are taken during the sampling step. You can easily see how samples cluster around the areas of interest, thanks to the previous step.
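The sampling step amounts to a weighted random draw from the edge-detection table. A minimal sketch, with a hypothetical helper name, might look like this:

```python
import random

def take_samples(chances, n, seed=0):
    """Draw n sample points, weighted by the edge-detection table.

    chances: 2-D list of per-pixel weights (need not be normalised).
    Hypothetical sketch, not Trigrad's actual sampler.
    """
    rng = random.Random(seed)
    points = [(x, y) for y, row in enumerate(chances)
              for x, _ in enumerate(row)]
    weights = [chances[y][x] for x, y in points]
    # random.choices draws with replacement, proportionally to the weights,
    # so high-gradient pixels are picked far more often than flat ones.
    return rng.choices(points, weights=weights, k=n)
```

With ~100,000 draws over a table like the one above, samples pile up along edges while flat regions stay sparse.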
At this stage, the sample points are joined into triangles, which are then rasterized and filled using barycentric coordinates. In this visualization, each colour channel represents one of the three barycentric coordinates.
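For each rasterized pixel, the barycentric coordinates express where it sits relative to the triangle's three corners. A standard way to compute them (a sketch, not Trigrad's code) is:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle abc,
    such that p = u*a + v*b + w*c and u + v + w = 1."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return u, v, 1.0 - u - v
```

At a corner one coordinate is 1 and the others are 0, and at the centroid all three are 1/3, which is exactly what the three-channel visualization shows.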
Now, each pixel is filled based on its barycentric coordinates. The default and most obvious option is to blend the three corner colours smoothly by distance. However, more interesting options are available. This is after being filled using a dithering algorithm; it almost looks like it could be a Photoshop filter. This is after being filled with a nearest-neighbour style algorithm: nice and polygonal.
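Two of these fill modes are easy to sketch given a pixel's barycentric coordinates and the colours of the triangle's corners (hypothetical helper names, assuming RGB tuples in 0..255):

```python
def fill_smooth(bary, colours):
    """Default fill: blend the three corner colours by their
    barycentric weights (a linear gradient across the triangle)."""
    u, v, w = bary
    return tuple(round(u * ca + v * cb + w * cc)
                 for ca, cb, cc in zip(*colours))

def fill_nearest(bary, colours):
    """'Polygonal' fill: take the colour of the closest corner,
    i.e. the corner with the largest barycentric weight."""
    u, v, w = bary
    return colours[max(range(3), key=lambda i: (u, v, w)[i])]
```

The dithered variant would sit between these two, choosing a corner colour at random with probability proportional to its weight.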
This is a simple visualization of how the number of samples affects the result.
After posting this to HN, I was alerted to some related work.
Fast Polygonal Approximation of Terrains and Height Fields
 The reported size was previously much larger than the output truly was, due to human error.
 Improvements to the saving algorithm reduced this from 532KB to 333KB.
 This is not the best comparison, as the different files may have aimed for different standards of quality. Take it with a grain of salt.