Here are some ideas
Hi everyone, I’ve been busy, which is why I haven’t been able to check in as often. So first of all, I want to thank everyone for their contributions, and a special thanks to @Theelgirl and @dobrosketchkun, since I know you two have been putting a lot of work into the program. Your work does not go unnoticed.
I have some ideas I want to share and get feedback on.
- Increasing storage capacity. Originally, the program was built on the simple idea that 1-bit (black-and-white) pixels would make data retrieval more robust against compression algorithms. But of course, using a single pixel to represent a single bit leaves a lot to be desired as far as maximizing storage goes. I think we can keep the same logic by adding another encoding option (in addition to 1-bit color) that uses colors to represent 2 bits per pixel, doubling the storage capacity while still keeping the colors simple enough to guard against compression. Here is the math/logic:
Using the RGB color spectrum, the simplest colors are red, green, and blue. This means that they are easy to distinguish from one another even if compression changes them a bit.
With that in mind, we can double storage by storing 2 bits in a single pixel, which gives us 2^2 = 4 possible combinations. We can then assign one of those colors to each combination, using black for the remaining one:
00: Black
10: Red
01: Green
11: Blue
Then, as far as decoding goes, the logic would be the same as for black and white: we check which color the pixel's values are closest to and assume that color.
For example, if the pixel is (255,12,30), then the color must be Red (bin: 10), since the pixel contains more red than anything else.
I haven’t done the research yet, but we might also be able to take advantage of an alpha channel using the RGBA color spectrum, but I would assume that A might not be as easy to guard against compression.
- Adding some sort of error checking/correction algorithm. Even though the current implementation of fvid has worked the times I have tried it, it’s not perfect, and there have been some reports of it not working with certain files. Because of this, I think we should look into adding some error checking or correction. A simple implementation might be a parity check, where we add an extra bit (or set of bits) for every byte (8 bits) indicating whether its number of set bits is odd or even. However, that doesn’t fix the data; it only tells us that the data is wrong, and it assumes that the parity bits themselves were decoded correctly, which are probably too many assumptions for it to be a good solution. So I am open to hearing if anyone has any suggestions on this.
- GUI. Recently, the program has been getting more attention (because of a TikTok I made, lol), and I have seen more requests for a GUI. I know @dobrosketchkun made a GUI for the program, but I have not seen that implemented yet. I have not made a GUI for a Python program before, so I was doing some research and was thinking of building one using PyQt or Kivy, but since @dobrosketchkun already did some work using Tkinter, I’d rather their work not go to waste. In addition, I think it would be cool to make this optional, so maybe during the install process or as a separate package.
- Changing the license. MIT was the original choice since it is the most open license I know, and I like the idea of sharing the source code and allowing people to do whatever they want with the program. But I don’t want the work contributed by others to be “taken advantage of,” for lack of a better phrase. With the amount of time contributors have put into their work, I would like the program and any copies to remain open source. So I propose we change the license to GPLv3, but of course I am open to suggestions.
Everything here is just a suggestion; I want to hear what others have to say.
Top GitHub Comments
Thanks for the offer, however this project has been unofficially superseded by https://github.com/MeViMo/youbit. That repo is better in essentially every way.
@AlfredoSequeida Actually, with a block size of 8, Zfec is applied every 8 bytes. Zfec expands the data by 25%, since the expansion factor is MVAL/KVAL (I’m not sure exactly how it recovers using the original symbols, but eh). In this case, with a KVAL of 4 and an MVAL of 5, that means that of every 5 encoded blocks, any 4 are enough to reconstruct the original data. Since the block was bloated to 10 symbols, it’s easily divisible by 5.
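A toy way to see the k-of-m idea (this is a single XOR parity block, not Zfec's actual Reed-Solomon-style math): with k = 4 data blocks plus 1 parity block (m = 5), any one lost block can be rebuilt from the remaining 4, at exactly the m/k = 5/4 = 25% size bloat described above.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # k = 4 data blocks
parity = xor_blocks(data)                     # 5th block: 25% overhead

# Lose data block 2; XOR of the surviving 4 blocks rebuilds it,
# because the XOR of all 5 blocks is zero.
survivors = [data[0], data[1], data[3], parity]
assert xor_blocks(survivors) == data[2]
```

Zfec generalizes this to tolerate *any* m - k lost blocks, not just one, which is why it needs heavier math than a single XOR.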