Description: Adaptive coding
The goal of this invention is to produce a method for compressing and decompressing arrays, binary arrays in particular; the technique encodes the array as a whole, and its achievable compression ratio is characterized by the entropy limit (see the sketch after this entry). Platform: |
Size: 105472 |
Author:11111111 |
Hits:
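The entry above claims a compression ratio characterized by the entropy limit for binary arrays. The invention's actual method is not given here, so the following is only a minimal illustrative sketch: an adaptive Krichevsky-Trofimov count estimator assigns each bit an ideal code length of -log2(p), and the accumulated length approaches n*H(p) for a long i.i.d. binary array. The function names are hypothetical.

```python
import math
import random

def adaptive_ideal_length(bits):
    """Ideal code length (in bits) of an adaptive binary model.

    Krichevsky-Trofimov estimator: before coding each bit,
    p(1) = (ones + 0.5) / (seen + 1). An arithmetic coder driven by
    these probabilities would emit roughly this many bits in total.
    """
    ones = zeros = 0
    total = 0.0
    for b in bits:
        p_one = (ones + 0.5) / (ones + zeros + 1.0)
        p = p_one if b else 1.0 - p_one
        total += -math.log2(p)          # ideal cost of this bit
        if b:
            ones += 1
        else:
            zeros += 1
    return total

def entropy_limit(bits):
    """n * H(p) for the empirical bit probability p."""
    n = len(bits)
    p = sum(bits) / n
    if p in (0.0, 1.0):
        return 0.0
    return n * (-p * math.log2(p) - (1 - p) * math.log2(1 - p))

if __name__ == "__main__":
    random.seed(0)
    data = [1 if random.random() < 0.2 else 0 for _ in range(10000)]
    print(f"adaptive ideal length: {adaptive_ideal_length(data):.1f} bits")
    print(f"entropy limit n*H(p):  {entropy_limit(data):.1f} bits")
```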
Description: Huffman coding, implemented with MFC. Huffman coding is one of the optimal entropy-coding methods and is widely used in all kinds of data compression techniques. Platform: |
Size: 201728 |
Author:canny06 |
Hits:
Description: This file contains a new and improved version of the Huffman coder (June 29, 2001). The file name is Huff06.m. There are also some additional files that are helpful when using Matlab for data compression: a quantizer, different variants of run-length encoding and end-of-block coding in Mat2Vec, and a program that does JPEG-like entropy coding. A complete compression example is shown in TestMat2Vec.m. This file is all you need for Huffman coding in MatLab. Platform: |
Size: 25600 |
Author:金金 |
Hits:
Description: A JPEG tool developed in Matlab without using any built-in Matlab functions; the prediction and quantization stages are all written by the author. It compresses the Lena image, but the final entropy-coding stage is not implemented. Platform: |
Size: 15360 |
Author:楊於 |
Hits:
Description: MPEG-2 video compression encoding. It mainly covers the DCT transform, quantization of the DCT coefficients, entropy coding, motion estimation, and motion compensation. The book 《Visual C++实现MPEG/JPEG编解码技术》 can be consulted for further study (a DCT and quantization sketch follows this entry). Platform: |
Size: 219136 |
Author:周里 |
Hits:
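The entry above lists the DCT and coefficient quantization as the first stages of the MPEG-2 pipeline. Below is a minimal sketch of those two stages on a single 8x8 block, using an orthonormal DCT-II matrix built with NumPy; the flat quantization step of 16 is an arbitrary illustrative choice, not the MPEG-2 quantization matrix, and the function names are hypothetical.

```python
import numpy as np

N = 8  # MPEG-2 transforms 8x8 blocks

def dct_matrix(n=N):
    """Orthonormal DCT-II matrix: row k, column i."""
    C = np.zeros((n, n))
    for k in range(n):
        scale = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for i in range(n):
            C[k, i] = scale * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    return C

def forward_dct_quantize(block, qstep=16):
    """2-D DCT of an 8x8 block followed by uniform quantization."""
    C = dct_matrix()
    coeffs = C @ block @ C.T            # separable 2-D DCT
    return np.round(coeffs / qstep).astype(int)

def dequantize_inverse_dct(qcoeffs, qstep=16):
    """Approximate reconstruction: dequantize, then inverse DCT."""
    C = dct_matrix()
    return C.T @ (qcoeffs * qstep) @ C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(N, N)).astype(float)
    q = forward_dct_quantize(block)
    rec = dequantize_inverse_dct(q)
    print("max reconstruction error:", np.abs(block - rec).max())
```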
Description: 运动估计是视频编码的关键技术,其最基本的原理是利用相邻帧间的时间
相关性,通过预测来减少时间冗余度。在实际编码中,为了节省码率,并不传
输每一帧的全部数据,而是利用运动估计求出每一帧与其预测参考帧之间的差
值。运动估计越准确,差值的分布越趋近与零,差值块的能量越小,经过变换、
量化和熵编码后所产生的码流的比特位率也越少。因此,运动估计搜索的准确
程度直接影响到了编码的压缩性能。
-Motion estimation is the key to video encoding technology, its basic principle is the use of the adjacent time frame relevant, through the forecast period to reduce redundancy. In the actual code, in order to save bit rate and does not transfer all the data of each frame, but the use of motion estimation for each frame obtained with the prediction of the difference between the reference frame. Motion estimation more accurate, more closer to the margin and the distribution of zero, the smaller the energy difference between blocks, after transform, quantization and entropy coding of the code produced by the bit stream bit rate is less. Therefore, the motion estimation search accuracy of the direct impact on the performance of the compression coding. Platform: |
Size: 17408 |
Author:罗鹏 |
Hits:
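The entry above explains that motion estimation finds, for each block, the best match in a reference frame so that only a small-energy residual remains to be transformed and entropy coded. Below is a minimal full-search block-matching sketch using the sum of absolute differences (SAD) as the cost; block size, search range, and function names are illustrative assumptions.

```python
import numpy as np

def full_search_motion(cur, ref, block=8, search=7):
    """Full-search block matching with SAD cost.

    Returns an array of (dy, dx) motion vectors, one per block of the
    current frame, pointing into the reference frame.
    """
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block]
            best_sad, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    ref_blk = ref[y:y + block, x:x + block]
                    sad = np.abs(cur_blk.astype(int) - ref_blk.astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_mv
    return vectors

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, 3), axis=(0, 1))   # simulate global motion
    mv = full_search_motion(cur, ref)
    print("typical motion vector:", np.median(mv.reshape(-1, 2), axis=0))
```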
Description: The Shannon entropy is a measure of the average information content one is missing when one does not know the value of the random variable. The concept was introduced by Claude E. Shannon in his 1948 paper "A Mathematical Theory of Communication".
Shannon's entropy represents an absolute limit on the best possible lossless compression of any communication, under certain constraints: treating messages to be encoded as a sequence of independent and identically distributed random variables, Shannon's source coding theorem shows that, in the limit, the average length of the shortest possible representation to encode the messages in a given alphabet is their entropy divided by the logarithm of the number of symbols in the target alphabet (see the sketch after this entry).
Platform: |
Size: 2048 |
Author:jeremy |
Hits:
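The entry above states that the entropy, divided by the logarithm of the target-alphabet size, bounds the average length of the shortest representation. Below is a minimal sketch that computes the empirical entropy of a byte string and the implied lower bounds for binary and 16-symbol target alphabets; the sample message is arbitrary.

```python
import math
from collections import Counter

def shannon_entropy(data, base=2):
    """Empirical entropy of a sequence, in units of log-base symbols."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log(c / n, base) for c in counts.values())

if __name__ == "__main__":
    message = b"abracadabra abracadabra abracadabra"
    h_bits = shannon_entropy(message, base=2)    # bits per source symbol
    h_hex = shannon_entropy(message, base=16)    # hex digits per symbol
    print(f"entropy: {h_bits:.3f} bits/symbol")
    print(f"lower bound, binary alphabet:    {h_bits * len(message):.1f} bits")
    print(f"lower bound, 16-symbol alphabet: {h_hex * len(message):.1f} digits")
```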
Description: High-pass filter, band-pass filter, and digital FIR filter.
Platform: |
Size: 1024 |
Author:jeremy |
Hits:
Description: Entropy-coding source code for video signals.
Entropy coding is a lossless data-compression technique in which each letter of a piece of text is replaced by a bit pattern of a different length. This contrasts with data-compression methods such as LZ77 or LZ78, in which a sequence of letters from the original text is replaced by other symbols.
The optimal number of bits each letter receives, according to its probability of occurrence, is determined by the entropy.
This package provides not only the source code but also corresponding examples (see the sketch after this entry). Platform: |
Size: 286720 |
Author:安慰安防 |
Hits:
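The entry above notes that the optimal number of bits for each letter follows from its probability of occurrence. Below is a minimal sketch that prints the ideal code length -log2(p) for every letter of a sample string, showing that frequent letters deserve shorter codes; the sample text and function name are illustrative.

```python
import math
from collections import Counter

def ideal_code_lengths(text):
    """Per-letter optimal code length in bits: -log2(p) for each letter."""
    counts = Counter(text)
    n = len(text)
    return {ch: -math.log2(c / n) for ch, c in counts.items()}

if __name__ == "__main__":
    sample = "entropy coding replaces each letter by a variable number of bits"
    lengths = ideal_code_lengths(sample)
    for ch, bits in sorted(lengths.items(), key=lambda kv: kv[1]):
        print(f"{ch!r}: {bits:.2f} bits")   # frequent letters come first
```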
Description: Research on an information-entropy-based algorithm for data mining (EMC-GA). Platform: |
Size: 1310720 |
Author:zhoubo |
Hits:
Description: Entropy encoding is a class of lossless codes for semantics-free data streams that compress by exploiting the statistical properties of the data. This chapter first introduces the basic concept of entropy and then covers the commonly used entropy-coding methods: Shannon-Fano coding, Huffman coding, arithmetic coding, run-length encoding (RLE), and LZW coding. Huffman coding proposes an algorithm that rounds code lengths up to whole bits, but in certain cases this cannot reach the optimum; later refinements provide the best integer bit lengths. The algorithm builds a binary tree to set up the code: the terminal (leaf) nodes of the tree represent the letters being encoded, and the root node represents the bits used. Besides this approach, which produces a dedicated table for each data set to be encoded, there are also methods that use a fixed code table. For example, if the symbol probabilities of the data to be encoded follow certain rules, a special variable-length code table can be used; such a table has parameters that adapt it to the actual letter probabilities (a run-length-encoding sketch follows this entry). Platform: |
Size: 18688000 |
Author:杨飞帆 |
Hits:
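The entry above lists run-length encoding (RLE) among the covered methods. Below is a minimal byte-oriented RLE sketch that stores each run as a (count, value) pair; this is an illustrative variant, not the chapter's exact format, and the 255-run cap is an assumption that keeps the count in a single byte.

```python
def rle_encode(data: bytes) -> bytes:
    """Encode runs of equal bytes as (count, value) pairs, count <= 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Inverse of rle_encode: expand each (count, value) pair."""
    out = bytearray()
    for k in range(0, len(data), 2):
        count, value = data[k], data[k + 1]
        out += bytes([value]) * count
    return bytes(out)

if __name__ == "__main__":
    raw = b"aaaaabbbcccccccccd"
    packed = rle_encode(raw)
    assert rle_decode(packed) == raw        # lossless round trip
    print(len(raw), "->", len(packed), "bytes")
```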
Description: Lossless audio compression program: random noise is added, forward prediction is used to compute the prediction error, which is then entropy coded and packed into frames, and finally decoded (a prediction sketch follows this entry). Platform: |
Size: 6532096 |
Author:李鹏飞 |
Hits:
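The entry above follows the usual lossless-audio pipeline: a forward predictor produces a residual whose entropy is lower than that of the raw samples, so the subsequent entropy coder spends fewer bits. Below is a minimal sketch of a first-order integer predictor and its exact inverse, comparing the empirical entropy of samples and residual; the synthetic sine-plus-noise signal is an arbitrary stand-in for audio, and the function names are hypothetical.

```python
import math
from collections import Counter

import numpy as np

def empirical_entropy(values) -> float:
    """Bits per sample of the empirical distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def predict_residual(samples: np.ndarray) -> np.ndarray:
    """First-order forward prediction: residual[i] = x[i] - x[i-1]."""
    res = samples.copy()
    res[1:] = samples[1:] - samples[:-1]
    return res

def reconstruct(residual: np.ndarray) -> np.ndarray:
    """Exact inverse of predict_residual (lossless)."""
    return np.cumsum(residual)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(20000)
    samples = (1000 * np.sin(t / 40) + rng.integers(-8, 9, t.size)).astype(np.int32)
    residual = predict_residual(samples)
    assert np.array_equal(reconstruct(residual), samples)   # lossless round trip
    print(f"raw samples: {empirical_entropy(samples.tolist()):.2f} bits/sample")
    print(f"residual:    {empirical_entropy(residual.tolist()):.2f} bits/sample")
```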
Description: Huffman coding and compression algorithm, C++ code.
In computer science and information theory, a Huffman code is an optimal prefix code found using the algorithm developed by David A. Huffman while he was a Ph.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes".[1] The process of finding and/or using such a code is called Huffman coding and is a common technique in entropy encoding, including in lossless data compression. The algorithm's output can be viewed as a variable-length code table for encoding a source symbol (such as a character in a file). Huffman's algorithm derives this table based on the estimated probability or frequency of occurrence (weight) for each possible value of the source symbol (a code-table sketch follows this entry). Platform: |
Size: 2048 |
Author:stoulod |
Hits:
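The entry above describes Huffman's algorithm as deriving a variable-length code table from symbol weights. Below is a minimal Python sketch (not the C++ code from this package) that builds such a table with a heap-based tree construction and compares the average code length with the entropy; tie-breaking differs between implementations, so the exact codewords may vary.

```python
import heapq
import math
from collections import Counter

def huffman_code(text):
    """Build a prefix-code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(text)
    # Heap items: (weight, tie_breaker, {symbol: code_so_far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                          # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    tie = len(heap)
    while len(heap) > 1:
        w0, _, left = heapq.heappop(heap)       # two lightest subtrees
        w1, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w0 + w1, tie, merged))
        tie += 1
    return heap[0][2]

if __name__ == "__main__":
    text = "this is an example of a huffman tree"
    table = huffman_code(text)
    freq = Counter(text)
    n = len(text)
    avg_len = sum(freq[s] / n * len(code) for s, code in table.items())
    entropy = -sum((c / n) * math.log2(c / n) for c in freq.values())
    print(f"average code length: {avg_len:.3f} bits/symbol")
    print(f"entropy lower bound: {entropy:.3f} bits/symbol")
```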
Description: Zstandard, or zstd for short, is a fast lossless compression algorithm targeting real-time compression scenarios at zlib-level and better compression ratios. It is backed by a very fast entropy stage provided by the Huff0 and FSE libraries (a usage sketch follows this entry). Platform: |
Size: 1310720 |
Author:demoth |
Hits:
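The entry above summarizes Zstandard's design. As a usage illustration only, assuming the third-party python-zstandard binding (pip install zstandard) rather than the C library itself, a minimal compress/decompress round trip:

```python
import zstandard as zstd   # third-party binding: pip install zstandard

def roundtrip(data: bytes, level: int = 3) -> bytes:
    """Compress one buffer, verify lossless decompression, return the frame."""
    compressed = zstd.ZstdCompressor(level=level).compress(data)
    assert zstd.ZstdDecompressor().decompress(compressed) == data
    return compressed

if __name__ == "__main__":
    payload = b"entropy coding, entropy coding, entropy coding " * 100
    packed = roundtrip(payload)
    print(f"{len(payload)} -> {len(packed)} bytes")
```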