# fflate
High performance (de)compression in an 8kB package

## Why fflate?
`fflate` (short for fast flate) is the **fastest, smallest, and most versatile** pure JavaScript compression and decompression library in existence, handily beating [`pako`](https://npmjs.com/package/pako), [`tiny-inflate`](https://npmjs.com/package/tiny-inflate), and [`UZIP.js`](https://github.com/photopea/UZIP.js) in performance benchmarks while being multiple times more lightweight. Its compression ratios are often better than even the original Zlib C library. It includes support for DEFLATE, GZIP, and Zlib data. Data compressed by `fflate` can be decompressed by other tools, and vice versa.

In addition to the base decompression and compression APIs, `fflate` supports high-speed ZIP file archiving for an extra 3 kB. In fact, the compressor, in synchronous mode, compresses both more quickly and with a higher compression ratio than most compression software (even Info-ZIP, a C program), and in asynchronous mode it can utilize multiple threads to achieve over 3x the performance of virtually any other utility.

| | `pako` | `tiny-inflate` | `UZIP.js` | `fflate` |
|-----------------------------|--------|------------------------|-----------------------|--------------------------------|
| Decompression performance | 1x | Up to 40% slower | **Up to 40% faster** | **Up to 40% faster** |
| Compression performance | 1x | N/A | Up to 25% faster | **Up to 50% faster** |
| Base bundle size (minified) | 45.6kB | **3kB (inflate only)** | 14.2kB | 8kB **(3kB for inflate only)** |
| Decompression support | ✅ | ✅ | ✅ | ✅ |
| Compression support | ✅ | ❌ | ✅ | ✅ |
| ZIP support | ❌ | ❌ | ✅ | ✅ |
| Streaming support | ✅ | ❌ | ❌ | ✅ |
| GZIP support | ✅ | ❌ | ❌ | ✅ |
| Supports files up to 4GB | ✅ | ❌ | ❌ | ✅ |
| Doesn't hang on error | ✅ | ❌ | ❌ | ✅ |
| Dictionary support | ✅ | ❌ | ❌ | ✅ |
| Multi-thread/Asynchronous | ❌ | ❌ | ❌ | ✅ |
| Streaming ZIP support | ❌ | ❌ | ❌ | ✅ |
| Uses ES Modules | ❌ | ❌ | ❌ | ✅ |

## Demo
If you'd like to try `fflate` for yourself without installing it, you can take a look at the [browser demo](https://101arrowz.github.io/fflate). Since `fflate` is a pure JavaScript library, it works in both the browser and Node.js (see [Browser support](https://github.com/101arrowz/fflate/#browser-support) for more info).

## Usage

Install `fflate`:
```sh
npm i fflate # or yarn add fflate, or pnpm add fflate
```

Import:
```js
// I will assume that you use the following for the rest of this guide
import * as fflate from 'fflate';

// However, you should import ONLY what you need to minimize bloat.
// So, if you just need GZIP compression support:
import { gzipSync } from 'fflate';
// Woo! You just saved 20 kB off your bundle with one line.
```

If your environment doesn't support ES Modules (e.g. Node.js):
```js
// Try to avoid this when using fflate in the browser, as it will import
// all of fflate's components, even those that you aren't using.
const fflate = require('fflate');
```

If you want to load from a CDN in the browser:
```html
<!--
You should use either UNPKG or jsDelivr (i.e. only one of the following)

Note that tree shaking is completely unsupported from the CDN. If you want
a small build without build tools, please ask me and I will make one manually
with only the features you need. This build is about 31kB, or 11.5kB gzipped.
-->
<script src="https://unpkg.com/fflate@0.8.2"></script>
<script src="https://cdn.jsdelivr.net/npm/fflate@0.8.2/umd/index.js"></script>
<!-- Now, the global variable fflate contains the library -->

<!-- If you're going buildless but want ESM, import from Skypack -->
<script type="module">
  import * as fflate from 'https://cdn.skypack.dev/fflate@0.8.2?min';
</script>
```

If you are using Deno:
```js
// Don't use the ?dts Skypack flag; it isn't necessary for Deno support
// The @deno-types comment adds TypeScript typings

// @deno-types="https://cdn.skypack.dev/fflate@0.8.2/lib/index.d.ts"
import * as fflate from 'https://cdn.skypack.dev/fflate@0.8.2?min';
```

If your environment doesn't support bundling:
```js
// Again, try to import just what you need

// For the browser:
import * as fflate from 'fflate/esm/browser.js';
// If the standard ESM import fails on Node (i.e. an older version):
import * as fflate from 'fflate/esm';
```

And use:
```js
// This is an ArrayBuffer of data
const massiveFileBuf = await fetch('/aMassiveFile').then(
  res => res.arrayBuffer()
);
// To use fflate, you need a Uint8Array
const massiveFile = new Uint8Array(massiveFileBuf);
// Note that Node.js Buffers work just fine as well:
// const massiveFile = require('fs').readFileSync('aMassiveFile.txt');

// Higher level means lower performance but better compression
// The level ranges from 0 (no compression) to 9 (max compression)
// The default level is 6
const notSoMassive = fflate.zlibSync(massiveFile, { level: 9 });
const massiveAgain = fflate.unzlibSync(notSoMassive);
const gzipped = fflate.gzipSync(massiveFile, {
  // GZIP-specific: the filename to use when decompressed
  filename: 'aMassiveFile.txt',
  // GZIP-specific: the modification time. Can be a Date, date string,
  // or Unix timestamp
  mtime: '9/1/16 2:00 PM'
});
```
`fflate` can autodetect a compressed file's format as well:
```js
const compressed = new Uint8Array(
  await fetch('/GZIPorZLIBorDEFLATE').then(res => res.arrayBuffer())
);
// Above example with Node.js Buffers:
// Buffer.from('H4sIAAAAAAAAE8tIzcnJBwCGphA2BQAAAA==', 'base64');

const decompressed = fflate.decompressSync(compressed);
```

Using strings is easy with `fflate`'s string conversion API:
```js
const buf = fflate.strToU8('Hello world!');

// The default compression method is gzip
// Increasing mem may increase performance at the cost of memory
// The mem ranges from 0 to 12, where 4 is the default
const compressed = fflate.compressSync(buf, { level: 6, mem: 8 });

// When you need to decompress:
const decompressed = fflate.decompressSync(compressed);
const origText = fflate.strFromU8(decompressed);
console.log(origText); // Hello world!
```

If you need to use an (albeit inefficient) binary string, you can set the second argument to `true`.
```js
const buf = fflate.strToU8('Hello world!');

// The second argument, latin1, is a boolean that indicates that the data
// is not Unicode but rather should be encoded and decoded as Latin-1.
// This is useful for creating a string from binary data that isn't
// necessarily valid UTF-8. However, binary strings are incredibly
// inefficient and tend to double file size, so they're not recommended.
const compressedString = fflate.strFromU8(
  fflate.compressSync(buf),
  true
);
const decompressed = fflate.decompressSync(
  fflate.strToU8(compressedString, true)
);
const origText = fflate.strFromU8(decompressed);
console.log(origText); // Hello world!
```

You can use streams as well to incrementally add data to be compressed or decompressed:
```js
// This example uses synchronous streams, but for the best experience
// you'll definitely want to use asynchronous streams.

let outStr = '';
const gzipStream = new fflate.Gzip({ level: 9 }, (chunk, isLast) => {
  // accumulate in an inefficient binary string (just an example)
  outStr += fflate.strFromU8(chunk, true);
});

// You can also attach the data handler separately if you don't want to
// do so in the constructor.
gzipStream.ondata = (chunk, final) => { ... }

// Since this is synchronous, all errors will be thrown by stream.push()
gzipStream.push(chunk1);
gzipStream.push(chunk2);

...

// You should mark the last chunk by using true in the second argument
// In addition to being necessary for the stream to work properly, this
// will also set the isLast parameter in the handler to true.
gzipStream.push(lastChunk, true);

console.log(outStr); // The compressed binary string is now available

// The options parameter for compression streams is optional; you can
// provide one parameter (the handler) or none at all if you set
// deflateStream.ondata later.
const deflateStream = new fflate.Deflate((chunk, final) => {
  console.log(chunk, final);
});

// If you want to create a stream from strings, use EncodeUTF8
const utfEncode = new fflate.EncodeUTF8((data, final) => {
  // Chaining streams together is done by pushing to the
  // next stream in the handler for the previous stream
  deflateStream.push(data, final);
});

utfEncode.push('Hello'.repeat(1000));
utfEncode.push(' '.repeat(100));
utfEncode.push('world!'.repeat(10), true);

// The deflateStream has logged the compressed data

const inflateStream = new fflate.Inflate();
inflateStream.ondata = (decompressedChunk, final) => { ... };

let stringData = '';

// Streaming UTF-8 decode is available too
const utfDecode = new fflate.DecodeUTF8((data, final) => {
  stringData += data;
});

// Decompress streams auto-detect the compression method, as the
// non-streaming decompress() method does.
const dcmpStrm = new fflate.Decompress((chunk, final) => {
  console.log(chunk, 'was encoded with GZIP, Zlib, or DEFLATE');
  utfDecode.push(chunk, final);
});

dcmpStrm.push(zlibJSONData1);
dcmpStrm.push(zlibJSONData2, true);

// This succeeds because the UTF-8 decoder was chained onto the
// auto-detecting decompression stream, so stringData now holds the
// fully decoded text.
console.log(JSON.parse(stringData));
```

You can create multi-file ZIP archives easily as well. Note that by default, compression is enabled for all files, which is not useful when ZIPping many PNGs, JPEGs, PDFs, etc. because those formats are already compressed. You should either override the level on a per-file basis or globally to avoid wasting resources.
```js
// Note that the asynchronous version (see below) runs in parallel and
// is *much* (up to 3x) faster for larger archives.
const zipped = fflate.zipSync({
  // Directories can be nested structures, as in an actual filesystem
  'dir1': {
    'nested': {
      // You can use Unicode in filenames
      '你好.txt': fflate.strToU8('Hey there!')
    },
    // You can also manually write out a directory path
    'other/tmp.txt': new Uint8Array([97, 98, 99, 100])
  },

  // You can also provide compression options
  'massiveImage.bmp': [aMassiveFile, {
    level: 9,
    mem: 12
  }],
  // PNG is pre-compressed; no need to waste time
  'superTinyFile.png': [aPNGFile, { level: 0 }],

  // Directories take options too
  'exec': [{
    'hello.sh': [fflate.strToU8('echo hello world'), {
      // ZIP only: Set the operating system to Unix
      os: 3,
      // ZIP only: Make this file executable on Unix
      attrs: 0o755 << 16
    }]
  }, {
    // ZIP and GZIP support mtime (defaults to current time)
    mtime: new Date('10/20/2020')
  }]
}, {
  // These options are the defaults for all files, but file-specific
  // options take precedence.
  level: 1,
  // Obfuscate last modified time by default
  mtime: new Date('1/1/1980')
});

// If you write the zipped data to myzip.zip and unzip it, the folder
// structure will be output as:

// myzip.zip (original file)
// dir1
// |-> nested
// |   |-> 你好.txt
// |-> other
// |   |-> tmp.txt
// massiveImage.bmp
// superTinyFile.png

// When decompressing, folders are not nested; all filepaths are fully
// written out in the keys. For example, the return value may be:
// { 'nested/directory/structure.txt': Uint8Array(2) [97, 97] }
const decompressed = fflate.unzipSync(zipped, {
  // You may optionally supply a filter for files. By default, all files in a
  // ZIP archive are extracted, but a filter can save resources by telling
  // the library not to decompress certain files
  filter(file) {
    // Don't decompress the massive image or any files larger than 10 MiB
    return file.name != 'massiveImage.bmp' && file.originalSize <= 10_000_000;
  }
});
```

If you need extremely high performance or custom ZIP compression formats, you can use the highly-extensible ZIP streams. They take streams as both input and output. You can even use custom compression/decompression algorithms from other libraries, as long as they [are defined in the ZIP spec](https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT) (see section 4.4.5). If you'd like more info on using custom compressors, [feel free to ask](https://github.com/101arrowz/fflate/discussions).
```js
// ZIP object
// Can also specify zip.ondata outside of the constructor
const zip = new fflate.Zip((err, dat, final) => {
  if (!err) {
    // output of the streams
    console.log(dat, final);
  }
});

const helloTxt = new fflate.ZipDeflate('hello.txt', {
  level: 9
});

// Always add streams to ZIP archives before pushing to those streams
zip.add(helloTxt);

helloTxt.push(chunk1);
// Last chunk
helloTxt.push(chunk2, true);

// ZipPassThrough is like ZipDeflate with level 0, but allows for tree shaking
const nonStreamingFile = new fflate.ZipPassThrough('test.png');
zip.add(nonStreamingFile);
// If you have data already loaded, just .push(data, true)
nonStreamingFile.push(pngData, true);

// You need to call .end() after finishing
// This ensures the ZIP is valid
zip.end();

// Unzip object
const unzipper = new fflate.Unzip();

// This function will almost always have to be called. It is used to support
// compression algorithms such as BZIP2 or LZMA in ZIP files if just DEFLATE
// is not enough (though it almost always is).
// If your ZIP files are not compressed, this line is not needed.
unzipper.register(fflate.UnzipInflate);

const neededFiles = ['file1.txt', 'example.json'];

// Can specify handler in constructor too
unzipper.onfile = file => {
  // file.name is a string, file is a stream
  if (neededFiles.includes(file.name)) {
    file.ondata = (err, dat, final) => {
      // Stream output here
      console.log(dat, final);
    };

    console.log('Reading:', file.name);

    // File sizes are sometimes not set if the ZIP file did not encode
    // them, so you may want to check that file.size != undefined
    console.log('Compressed size', file.size);
    console.log('Decompressed size', file.originalSize);

    // You should only start the stream if you plan to use it to improve
    // performance. Only after starting the stream will ondata be called.
    // This method will throw if the compression method hasn't been registered
    file.start();
  }
};

// Try to keep under 5,000 files per chunk to avoid stack limit errors
// For example, if all files are a few kB, multi-megabyte chunks are OK
// If files are mostly under 100 bytes, 64kB chunks are the limit
unzipper.push(zipChunk1);
unzipper.push(zipChunk2);
unzipper.push(zipChunk3, true);
```

As you may have guessed, there is an asynchronous version of every method as well. Unlike most libraries, this will cause the compression or decompression to run in an entirely separate thread, automatically, by using Web (or Node) Workers. This means that the processing will not block the main thread at all.

Note that there is a significant initial overhead to using workers of about 50ms for each asynchronous function. For instance, if you call `unzip` ten times, the overhead only applies to the first call, but if you call `unzip` and `zlib`, they will each incur the 50ms delay. For small (under about 50kB) payloads, the asynchronous APIs will be much slower. However, if you're compressing larger files or multiple files at once, or if the synchronous API causes the main thread to hang for too long, the callback APIs are an order of magnitude better.
```js
import {
  gzip, zlib, AsyncGzip, zip, unzip, strFromU8,
  Zip, AsyncZipDeflate, Unzip, AsyncUnzipInflate
} from 'fflate';

// Workers will work in almost any browser (even IE11!)
// All of the async APIs use a node-style callback as so:
const terminate = gzip(aMassiveFile, (err, data) => {
  if (err) {
    // The compressed data was likely corrupt, so we have to handle
    // the error.
    return;
  }
  // Use data however you like
  console.log(data.length);
});

if (needToCancel) {
  // The return value of any of the asynchronous APIs is a function that,
  // when called, will immediately cancel the operation. The callback
  // will not be called.
  terminate();
}

// If you wish to provide options, use the second argument.

// The consume option will render the data inside aMassiveFile unusable,
// but can improve performance and dramatically reduce memory usage.
zlib(aMassiveFile, { consume: true, level: 9 }, (err, data) => {
  // Use the data
});

// Asynchronous streams are similar to synchronous streams, but the
// handler has the error that occurred (if any) as the first parameter,
// and they don't block the main thread.

// Additionally, any buffers that are pushed in will be consumed and
// rendered unusable; if you need to use a buffer you push in, you
// should clone it first.
const gzs = new AsyncGzip({ level: 9, mem: 12, filename: 'hello.txt' });
let wasCallbackCalled = false;
gzs.ondata = (err, chunk, final) => {
  // Note the new err parameter
  if (err) {
    // Note that after this occurs, the stream becomes corrupt and must
    // be discarded. You can't continue pushing chunks and expect it to
    // work.
    console.error(err);
    return;
  }
  wasCallbackCalled = true;
}
gzs.push(chunk);

// Since the stream is asynchronous, the callback will not be called
// immediately. If such behavior is absolutely necessary (it shouldn't
// be), use synchronous streams.
console.log(wasCallbackCalled); // false

// To terminate an asynchronous stream's internal worker, call
// stream.terminate().
gzs.terminate();

// This is way faster than zipSync because the compression of multiple
// files runs in parallel. In fact, the fact that it's parallelized
// makes it faster than most standalone ZIP CLIs. The effect is most
// significant for multiple large files; less so for many small ones.
zip({ f1: aMassiveFile, 'f2.txt': anotherMassiveFile }, {
  // The options object is still optional; you can still do just
  // zip(archive, callback)
  level: 6
}, (err, data) => {
  // Save the ZIP file
});

// unzip is the only async function without support for the consume option
// It is parallelized, so unzip is also often much faster than unzipSync
unzip(aMassiveZIPFile, (err, unzipped) => {
  // If the archive has data.xml, log it here
  console.log(unzipped['data.xml']);
  // Conversion to string
  console.log(strFromU8(unzipped['data.xml']));
});

// Streaming ZIP archives can accept asynchronous streams. This automatically
// uses multicore compression.
// (Named zipStream here to avoid redeclaring the imported zip function)
const zipStream = new Zip();
zipStream.ondata = (err, chunk, final) => { ... };
// The JSON and BMP are compressed in parallel
const exampleFile = new AsyncZipDeflate('example.json');
zipStream.add(exampleFile);
exampleFile.push(JSON.stringify({ large: 'object' }), true);
const exampleFile2 = new AsyncZipDeflate('example2.bmp', { level: 9 });
zipStream.add(exampleFile2);
exampleFile2.push(ec2a);
exampleFile2.push(ec2b);
exampleFile2.push(ec2c);
...
exampleFile2.push(ec2Final, true);
zipStream.end();

// Streaming Unzip should register the asynchronous inflation algorithm
// for parallel processing.
const unzipStream = new Unzip(stream => {
  if (stream.name.endsWith('.json')) {
    stream.ondata = (err, chunk, final) => { ... };
    stream.start();

    if (needToCancel) {
      // To cancel these streams, call .terminate()
      stream.terminate();
    }
  }
});
unzipStream.register(AsyncUnzipInflate);
unzipStream.push(data, true);
```

See the [documentation](https://github.com/101arrowz/fflate/blob/master/docs/README.md) for more detailed information about the API.

## Bundle size estimates

The bundle size measurements for `fflate` on sites like Bundlephobia include every feature of the library and should be seen as an upper bound. As long as you are using tree shaking or dead code elimination, this table should give you a general idea of `fflate`'s bundle size for the features you need.

The maximum bundle size that is possible with `fflate` is about 31kB (11.5kB gzipped) if you use every single feature, but feature parity with `pako` is only around 10kB (as opposed to 45kB from `pako`). If your bundle size increases dramatically after adding `fflate`, please [create an issue](https://github.com/101arrowz/fflate/issues/new).

| Feature | Bundle size (minified) | Nearest competitor |
|-------------------------|-------------------------------|-------------------------|
| Decompression | 3kB | `tiny-inflate` |
| Compression | 5kB | `UZIP.js`, 2.84x larger |
| Async decompression | 4kB (1kB + raw decompression) | N/A |
| Async compression | 6kB (1kB + raw compression) | N/A |
| ZIP decompression | 5kB (2kB + raw decompression) | `UZIP.js`, 2.84x larger |
| ZIP compression | 7kB (2kB + raw compression) | `UZIP.js`, 2.03x larger |
| GZIP/Zlib decompression | 4kB (1kB + raw decompression) | `pako`, 11.4x larger |
| GZIP/Zlib compression | 5kB (1kB + raw compression) | `pako`, 9.12x larger |
| Streaming decompression | 4kB (1kB + raw decompression) | `pako`, 11.4x larger |
| Streaming compression | 5kB (1kB + raw compression) | `pako`, 9.12x larger |

## What makes `fflate` so fast?
Many JavaScript compression/decompression libraries exist. However, the most popular one, [`pako`](https://npmjs.com/package/pako), is merely a clone of Zlib rewritten nearly line-for-line in JavaScript. Although it is by no means poorly made, `pako` doesn't recognize the many differences between JavaScript and C, and is therefore suboptimal in performance. Moreover, even when minified, the library is 45 kB; it may not seem like much, but for anyone concerned with optimizing bundle size (especially library authors), it's more weight than necessary.

Note that some small libraries, such as [`tiny-inflate`](https://npmjs.com/package/tiny-inflate), exist solely for decompression, and at 3 kB minified, `tiny-inflate` can be appealing; however, its performance is lackluster, typically 40% worse than `pako` in my tests.

[`UZIP.js`](https://github.com/photopea/UZIP.js) is both faster (by up to 40%) and smaller (14 kB minified) than `pako`, and it contains a variety of innovations that make it excellent for both performance and compression ratio. However, the developer made a variety of tiny mistakes and inefficient design choices that make it imperfect. Moreover, it does not support GZIP or Zlib data directly; one must remove the headers manually to use `UZIP.js`.

So what makes `fflate` different? It takes the brilliant innovations of `UZIP.js` and optimizes them while adding direct support for GZIP and Zlib data. And unlike all of the above libraries, it uses ES Modules to allow for partial builds through tree shaking, meaning that it can rival even `tiny-inflate` in size while maintaining excellent performance. The end result is a library that, in total, weighs 8kB minified for the core build (3kB for decompression only and 5kB for compression only), is about 15% faster than `UZIP.js` or up to 60% faster than `pako`, and achieves the same or better compression ratio than the rest.

Before you decide that `fflate` is the end-all compression library, you should note that JavaScript simply cannot rival the performance of a native program. If you're only using Node.js, it's probably better to use the [native Zlib bindings](https://nodejs.org/api/zlib.html), which tend to offer the best performance. Though note that even against Zlib, `fflate` is only around 30% slower in decompression and 10% slower in compression, and can still achieve better compression ratios!

## What about `CompressionStream`?
Like `fflate`, the [Compression Streams API](https://developer.mozilla.org/en-US/docs/Web/API/Compression_Streams_API) provides DEFLATE, GZIP, and Zlib compression and decompression support. It's a good option if you'd like to compress or decompress data without installing any third-party libraries, and it wraps native Zlib bindings to achieve better performance than what most JavaScript programs can achieve.

However, browsers do not offer any native non-streaming compression API, and `CompressionStream` has surprisingly poor performance on data already loaded into memory; `fflate` tends to be faster even for files that are dozens of megabytes large. Similarly, `fflate` is much faster for files under a megabyte because it avoids marshalling overheads. Even when streaming hundreds of megabytes of data, the native API usually performs between 30% faster and 10% slower than `fflate`. And Compression Streams have many other disadvantages: no ability to control compression level, poor support for older browsers, no ZIP support, etc.

If you'd still prefer to depend upon a native browser API but want to support older browsers, you can use an `fflate`-based [Compression Streams ponyfill](https://github.com/101arrowz/compression-streams-polyfill).
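
For comparison, a round trip through the native streaming API looks roughly like this; a sketch assuming a runtime that exposes `CompressionStream`/`DecompressionStream` (modern browsers, Node.js 18+):

```js
// Sketch: round-tripping text through the native Compression Streams API.
// Note it is streaming-only and offers no compression level option.
async function gzipRoundTrip(text) {
  const source = new Blob([text]).stream();
  const gzipped = source.pipeThrough(new CompressionStream('gzip'));
  const ungzipped = gzipped.pipeThrough(new DecompressionStream('gzip'));
  // Response is a convenient way to collect a stream back into text
  return new Response(ungzipped).text();
}
```

Note that everything here is asynchronous, even for data already in memory; with `fflate` the same round trip can be a pair of synchronous calls.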
## Browser support
`fflate` makes heavy use of typed arrays (`Uint8Array`, `Uint16Array`, etc.). Typed arrays can be polyfilled at the cost of performance, but the most recent browser that doesn't support them [is from 2011](https://caniuse.com/typedarrays), so I wouldn't bother.

The asynchronous APIs also use `Worker`, which is not supported in a few browsers (however, the vast majority of browsers that support typed arrays support `Worker`).

Other than that, `fflate` is completely ES3, meaning you probably won't even need a bundler to use it.
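
Given these requirements, a small feature check can decide which APIs are safe to use; a sketch with hypothetical helper names, not part of `fflate`:

```js
// Sketch: feature detection before choosing fflate's APIs.
// (canUseFflate/canUseAsyncFflate are illustrative names, not fflate APIs.)
function canUseFflate() {
  // Typed arrays are required for everything
  return typeof Uint8Array !== 'undefined';
}
function canUseAsyncFflate() {
  // The asynchronous APIs additionally need Worker
  return canUseFflate() && typeof Worker !== 'undefined';
}
```

In practice you would fall back to the synchronous APIs (or a polyfill) when `canUseAsyncFflate()` returns false.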
## Testing
You can validate the performance of `fflate` with `npm test`. It validates that the module is working as expected, ensures the outputs are no more than 5% larger than competitors at max compression, and outputs performance metrics to `test/results`.

Note that the time it takes for the CLI to show the completion of each test is not representative of the time each package took, so please check the JSON output if you want accurate measurements.

## License

This software is [MIT Licensed](./LICENSE), with special exemptions for projects
and organizations as noted below:

- [SheetJS](https://github.com/SheetJS/) is exempt from MIT licensing and may
  license any source code from this software under the BSD Zero Clause License