Sunday, 4 October 2020

Data Compression

 




Data compression, or bit-rate reduction, is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost in lossless compression.

 

Lossy compression reduces bits by removing unnecessary or less important information. Typically, the device that performs the compression is referred to as an encoder, and the device that performs the reverse process (decompression) is referred to as a decoder.

 

The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, which is used for error detection and correction, or with line coding, the means of mapping data onto a signal.

 

Compression is beneficial because it reduces the resources required to store and transfer data. Computing resources are consumed in the compression and decompression processes.

 

Data compression is subject to a space-time complexity trade-off. For example, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed while it is being decompressed, and the option of decompressing the video in full before watching it may be inconvenient or require additional storage.

 

The design of data compression schemes involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources required to compress and decompress the data.

 

Lossless compression

Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible.

 

Lossless compression is possible because most real-world data exhibit statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of encoding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy.
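
As a minimal sketch of the run-length idea (the function names are purely illustrative), the round trip below also shows why the scheme is lossless: decoding reproduces the original exactly.

    # Minimal run-length encoding sketch (illustrative only).

    def rle_encode(data):
        """Collapse runs of identical symbols into (count, symbol) pairs."""
        encoded = []
        i = 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1
            encoded.append((j - i, data[i]))
            i = j
        return encoded

    def rle_decode(encoded):
        """Expand (count, symbol) pairs back into the original sequence."""
        return "".join(symbol * count for count, symbol in encoded)

    pixels = "R" * 279 + "G" * 3           # 279 red pixels followed by 3 green
    packed = rle_encode(pixels)            # [(279, 'R'), (3, 'G')]
    assert rle_decode(packed) == pixels    # lossless: the round trip is exact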

 

Among the most popular algorithms for lossless storage are the Lempel-Ziv (LZ) compression methods. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. In the mid-1980s, following work by Terry Welch, the LZW algorithm quickly became the method of choice for most general-purpose compression systems.
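
To get a feel for these ratios in practice, Python's built-in zlib module implements DEFLATE; the inputs below are made up purely for illustration.

    import os
    import zlib

    repetitive = b"red pixel, " * 1000    # highly redundant input
    random_data = os.urandom(11000)       # incompressible input of similar size

    for name, payload in [("repetitive", repetitive), ("random", random_data)]:
        compressed = zlib.compress(payload, 9)         # level 9 favours ratio over speed
        print(f"{name}: {len(payload)} -> {len(compressed)} bytes")
        assert zlib.decompress(compressed) == payload  # DEFLATE is lossless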

 

LZW is used in GIF images, programs such as PKZIP, and hardware devices such as modems. LZ methods use a table-based compression model in which table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman-encoded.
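
A toy sketch of the LZW idea is shown below, with the code table built on the fly from the data already seen; a real implementation would pack the codes into a bit stream and handle table resets, which this sketch omits.

    def lzw_encode(text):
        """Toy LZW encoder: the string table grows from the input itself."""
        table = {chr(i): i for i in range(256)}   # start with single-character entries
        next_code = 256
        current = ""
        output = []
        for ch in text:
            candidate = current + ch
            if candidate in table:
                current = candidate               # keep extending the current match
            else:
                output.append(table[current])     # emit the longest known string
                table[candidate] = next_code      # add the new string to the table
                next_code += 1
                current = ch
        if current:
            output.append(table[current])
        return output

    codes = lzw_encode("TOBEORNOTTOBEORTOBEORNOT")
    print(codes)   # repeated substrings reuse entries added earlier in the pass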

 

Grammar-based codes can compress highly repetitive input extremely effectively, for instance, a collection of biological data from the same or closely related species, a large collection of near-duplicate or versioned documents, Internet archives, and so on.

 

The basic task of grammar-based codes is constructing a context-free grammar that derives a single string. Practical grammar compression algorithms include Sequitur and Re-Pair.
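
To give a sense of how such a grammar is built, the sketch below follows the Re-Pair idea in a simplified form: repeatedly replace the most frequent adjacent pair of symbols with a new nonterminal and record the rule. It is an illustrative toy, not the published Re-Pair algorithm.

    from collections import Counter

    def repair_sketch(symbols):
        """Toy Re-Pair: replace the most common adjacent pair until none repeats."""
        symbols = list(symbols)
        rules = {}
        next_id = 0
        while True:
            pairs = Counter(zip(symbols, symbols[1:]))
            if not pairs:
                break
            pair, count = pairs.most_common(1)[0]
            if count < 2:
                break                             # stop once every adjacent pair is unique
            nonterminal = f"R{next_id}"
            next_id += 1
            rules[nonterminal] = pair             # record the grammar rule R -> a b
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                    out.append(nonterminal)       # substitute the nonterminal
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            symbols = out
        return symbols, rules

    sequence, grammar = repair_sketch("abababab")
    print(sequence)   # ['R1', 'R1'] after two rounds of pair replacement
    print(grammar)    # {'R0': ('a', 'b'), 'R1': ('R0', 'R0')}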
