/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.apache.commons.compress.compressors.bzip2;

import java.util.BitSet;

/**
 * Encapsulates the Burrows-Wheeler sorting algorithm needed by {@link
 * BZip2CompressorOutputStream}.
 *
 * <p>This class is based on a Java port of Julian Seward's
 * blocksort.c in his libbzip2.</p>
 *
 * <p>The Burrows-Wheeler transform is a reversible transform of the
 * original data that is supposed to group similar bytes close to
 * each other. The idea is to sort all permutations of the input and
 * only keep the last byte of each permutation. E.g. for "Commons
 * Compress" you'd get:</p>
 *
 * <pre>
 *  CompressCommons
 * Commons Compress
 * CompressCommons 
 * essCommons Compr
 * mmons CompressCo
 * mons CompressCom
 * mpressCommons Co
 * ns CompressCommo
 * ommons CompressC
 * ompressCommons C
 * ons CompressComm
 * pressCommons Com
 * ressCommons Comp
 * s CompressCommon
 * sCommons Compres
 * ssCommons Compre
 * </pre>
 *
 * <p>This results in the new text "ss romooCCmmpnse"; in addition, the
 * index of the first line that contained the original text is kept -
 * in this case it is 1. The idea is that in a long English text all
 * permutations that start with "he" are likely suffixes of a "the" and
 * thus they end in "t", leading to a larger block of "t"s that can
 * better be compressed by the subsequent Move-to-Front, run-length
 * and Huffman encoding steps.</p>
 *
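 * <p>A naive sketch of the forward transform (illustration only; this
 * class instead sorts pointers to suffixes rather than materializing
 * all rotations):</p>
 *
 * <pre>{@code
 * static int bwt(final byte[] in, final byte[] lastColumn) {
 *     final int n = in.length;
 *     final Integer[] rot = new Integer[n]; // rotation start indices
 *     for (int i = 0; i < n; i++) {
 *         rot[i] = i;
 *     }
 *     // sort rotations lexicographically, comparing byte by byte
 *     java.util.Arrays.sort(rot, (a, b) -> {
 *         for (int k = 0; k < n; k++) {
 *             final int d = (in[(a + k) % n] & 0xff) - (in[(b + k) % n] & 0xff);
 *             if (d != 0) {
 *                 return d;
 *             }
 *         }
 *         return 0;
 *     });
 *     int origPtr = -1;
 *     for (int i = 0; i < n; i++) {
 *         lastColumn[i] = in[(rot[i] + n - 1) % n]; // last byte of each rotation
 *         if (rot[i] == 0) {
 *             origPtr = i; // row that holds the original text
 *         }
 *     }
 *     return origPtr;
 * }
 * }</pre>
 *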

 * <p>For more information see for example the Wikipedia article on
 * the Burrows-Wheeler transform.</p>
 *
 * @NotThreadSafe */ class BlockSort { /* * Some of the constructs used in the C code cannot be ported * literally to Java - for example macros, unsigned types. Some * code has been hand-tuned to improve performance. In order to * avoid memory pressure some structures are reused for several * blocks and some memory is even shared between sorting and the * MTF stage even though either algorithm uses it for its own * purpose. * * Comments preserved from the actual C code are prefixed with * "LBZ2:". */ /* * 2012-05-20 Stefan Bodewig: * * This class seems to mix several revisions of libbzip2's code. * The mainSort function and those used by it look closer to the * 0.9.5 version but show some variations introduced later. At * the same time the logic of Compress 1.4 to randomize the block * on bad input has been dropped after libbzip2 0.9.0 and replaced * by a fallback sorting algorithm. * * I've added the fallbackSort function of 1.0.6 and tried to * integrate it with the existing code without touching too much. * I've also removed the now unused randomization code. */ /* * LBZ2: If you are ever unlucky/improbable enough to get a stack * overflow whilst sorting, increase the following constant and * try again. In practice I have never seen the stack go above 27 * elems, so the following limit seems very generous. */ private static final int QSORT_STACK_SIZE = 1000; private static final int FALLBACK_QSORT_STACK_SIZE = 100; private static final int STACK_SIZE = QSORT_STACK_SIZE < FALLBACK_QSORT_STACK_SIZE ? FALLBACK_QSORT_STACK_SIZE : QSORT_STACK_SIZE; /* * Used when sorting. If too many long comparisons happen, we stop sorting, * and use fallbackSort instead. */ private int workDone; private int workLimit; private boolean firstAttempt; private final int[] stack_ll = new int[STACK_SIZE]; // 4000 byte private final int[] stack_hh = new int[STACK_SIZE]; // 4000 byte private final int[] stack_dd = new int[QSORT_STACK_SIZE]; // 4000 byte private final int[] mainSort_runningOrder = new int[256]; // 1024 byte private final int[] mainSort_copy = new int[256]; // 1024 byte private final boolean[] mainSort_bigDone = new boolean[256]; // 256 byte private final int[] ftab = new int[65537]; // 262148 byte /** * Array instance identical to Data's sfmap, both are used only * temporarily and independently, so we do not need to allocate * additional memory. */ private final char[] quadrant; BlockSort(final BZip2CompressorOutputStream.Data data) { this.quadrant = data.sfmap; } void blockSort(final BZip2CompressorOutputStream.Data data, final int last) { this.workLimit = WORK_FACTOR * last; this.workDone = 0; this.firstAttempt = true; if (last + 1 < 10000) { fallbackSort(data, last); } else { mainSort(data, last); if (this.firstAttempt && (this.workDone > this.workLimit)) { fallbackSort(data, last); } } final int[] fmap = data.fmap; data.origPtr = -1; for (int i = 0; i <= last; i++) { if (fmap[i] == 0) { data.origPtr = i; break; } } // assert (data.origPtr != -1) : data.origPtr; } /** * Adapt fallbackSort to the expected interface of the rest of the * code, in particular deal with the fact that block starts at * offset 1 (in libbzip2 1.0.6 it starts at 0). */ final void fallbackSort(final BZip2CompressorOutputStream.Data data, final int last) { data.block[0] = data.block[last + 1]; fallbackSort(data.fmap, data.block, last + 1); for (int i = 0; i < last + 1; i++) { --data.fmap[i]; } for (int i = 0; i < last + 1; i++) { if (data.fmap[i] == -1) { data.fmap[i] = last; break; } } } /*---------------------------------------------*/ /*---------------------------------------------*/ /*--- LBZ2: Fallback O(N log(N)^2) sorting ---*/ /*--- algorithm, for repetitive blocks ---*/ /*---------------------------------------------*/ /* * This is the fallback sorting algorithm libbzip2 1.0.6 uses for * repetitive or very short inputs. * * The idea is inspired by the Manber-Myers string suffix sorting * algorithm. First a bucket sort places each permutation of the * block into a bucket based on its first byte. Permutations are * represented by pointers to their first character kept in * (partially) sorted order inside the array ftab. * * The next step visits all buckets in order and performs a * quicksort on all permutations of the bucket based on the index * of the bucket the second byte of the permutation belongs to, * thereby forming new buckets. At this point the * permutations are sorted up to the second character and we have * buckets of permutations that are identical up to two * characters. * * Repeat the step of quicksorting each bucket, now based on the * bucket holding the sequence of the third and fourth character, * leading to four byte buckets. Repeat this doubling of bucket * sizes until all buckets only contain single permutations or the * bucket size exceeds the block size. * * I.e. * * "abraba" forms three buckets for the chars "a", "b", and "r" in * the first step with * * fmap = { 'a:' 5, 3, 0, 'b:' 4, 1, 'r:' 2 } * * when looking at the bucket of "a"s the second characters are in * the buckets that start with fmap-index 0 (rolled over), 3 and 3 * respectively, forming two new buckets "aa" and "ab", so we get * * fmap = { 'aa:' 5, 'ab:' 3, 0, 'ba:' 4, 'br:' 1, 'ra:' 2 } * * since the last bucket only contained a single item it didn't * have to be sorted at all. * * There now is just one bucket with more than one permutation * that remains to be sorted. For the permutation that starts * with index 3 the third and fourth char are in bucket 'aa' at * index 0 and for the one starting at block index 0 they are in * bucket 'ra' with sort index 5. The fully sorted order then becomes: * * fmap = { 5, 3, 0, 4, 1, 2 } * */ /** * @param fmap points to the index of the starting point of a * permutation inside the block of data in the current * partially sorted order * @param eclass points from the index of a character inside the * block to the first index in fmap that contains the * bucket of its suffix that is sorted in this step.
* @param lo lower boundary of the fmap-interval to be sorted * @param hi upper boundary of the fmap-interval to be sorted */ private void fallbackSimpleSort(int[] fmap, int[] eclass, int lo, int hi) { if (lo == hi) { return; } int j; if (hi - lo > 3) { for (int i = hi - 4; i >= lo; i--) { int tmp = fmap[i]; int ec_tmp = eclass[tmp]; for (j = i + 4; j <= hi && ec_tmp > eclass[fmap[j]]; j += 4) { fmap[j - 4] = fmap[j]; } fmap[j - 4] = tmp; } } for (int i = hi - 1; i >= lo; i--) { int tmp = fmap[i]; int ec_tmp = eclass[tmp]; for (j = i + 1; j <= hi && ec_tmp > eclass[fmap[j]]; j++) { fmap[j - 1] = fmap[j]; } fmap[j-1] = tmp; } } private static final int FALLBACK_QSORT_SMALL_THRESH = 10; /** * swaps two values in fmap */ private void fswap(int[] fmap, int zz1, int zz2) { int zztmp = fmap[zz1]; fmap[zz1] = fmap[zz2]; fmap[zz2] = zztmp; } /** * swaps two intervals starting at yyp1 and yyp2 of length yyn inside fmap. */ private void fvswap(int[] fmap, int yyp1, int yyp2, int yyn) { while (yyn > 0) { fswap(fmap, yyp1, yyp2); yyp1++; yyp2++; yyn--; } } private int fmin(int a, int b) { return a < b ? a : b; } private void fpush(int sp, int lz, int hz) { stack_ll[sp] = lz; stack_hh[sp] = hz; } private int[] fpop(int sp) { return new int[] { stack_ll[sp], stack_hh[sp] }; } /** * @param fmap points to the index of the starting point of a * permutation inside the block of data in the current * partially sorted order * @param eclass points from the index of a character inside the * block to the first index in fmap that contains the * bucket of its suffix that is sorted in this step. * @param loSt lower boundary of the fmap-interval to be sorted * @param hiSt upper boundary of the fmap-interval to be sorted */ private void fallbackQSort3(int[] fmap, int[] eclass, int loSt, int hiSt) { int lo, unLo, ltLo, hi, unHi, gtHi, n; long r = 0; int sp = 0; fpush(sp++, loSt, hiSt); while (sp > 0) { int[] s = fpop(--sp); lo = s[0]; hi = s[1]; if (hi - lo < FALLBACK_QSORT_SMALL_THRESH) { fallbackSimpleSort(fmap, eclass, lo, hi); continue; } /* LBZ2: Random partitioning. Median of 3 sometimes fails to avoid bad cases. Median of 9 seems to help but looks rather expensive. This too seems to work but is cheaper. Guidance for the magic constants 7621 and 32768 is taken from Sedgewick's algorithms book, chapter 35. */ r = ((r * 7621) + 1) % 32768; long r3 = r % 3, med; if (r3 == 0) { med = eclass[fmap[lo]]; } else if (r3 == 1) { med = eclass[fmap[(lo + hi) >>> 1]]; } else { med = eclass[fmap[hi]]; } unLo = ltLo = lo; unHi = gtHi = hi; // looks like the ternary partition attributed to Wegner // in the cited Sedgewick paper while (true) { while (true) { if (unLo > unHi) { break; } n = eclass[fmap[unLo]] - (int) med; if (n == 0) { fswap(fmap, unLo, ltLo); ltLo++; unLo++; continue; } if (n > 0) { break; } unLo++; } while (true) { if (unLo > unHi) { break; } n = eclass[fmap[unHi]] - (int) med; if (n == 0) { fswap(fmap, unHi, gtHi); gtHi--; unHi--; continue; } if (n < 0) { break; } unHi--; } if (unLo > unHi) { break; } fswap(fmap, unLo, unHi); unLo++; unHi--; } if (gtHi < ltLo) { continue; } n = fmin(ltLo - lo, unLo - ltLo); fvswap(fmap, lo, unLo - n, n); int m = fmin(hi - gtHi, gtHi - unHi); fvswap(fmap, unHi + 1, hi - m + 1, m); n = lo + unLo - ltLo - 1; m = hi - (gtHi - unHi) + 1; if (n - lo > hi - m) { fpush(sp++, lo, n); fpush(sp++, m, hi); } else { fpush(sp++, m, hi); fpush(sp++, lo, n); } } } /*---------------------------------------------*/ private int[] eclass; private int[] getEclass() { return eclass == null ? 
(eclass = new int[quadrant.length / 2]) : eclass; } /* * The C code uses an array of ints (each int holding 32 flags) to * represent the bucket-start flags (bhtab). It also contains * optimizations to skip over 32 consecutively set or * consecutively unset bits on word boundaries at once. For now * I've chosen to use the simpler but potentially slower code * using BitSet - also in the hope that using the BitSet#nextXXX * methods may be fast enough. */ /** * @param fmap points to the index of the starting point of a * permutation inside the block of data in the current * partially sorted order * @param block the original data * @param nblock size of the block */ final void fallbackSort(int[] fmap, byte[] block, int nblock) { final int[] ftab = new int[257]; int H, i, j, k, l, r, cc, cc1; int nNotDone; int nBhtab; final int[] eclass = getEclass(); for (i = 0; i < nblock; i++) { eclass[i] = 0; } /*-- LBZ2: Initial 1-char radix sort to generate initial fmap and initial BH bits. --*/ for (i = 0; i < nblock; i++) { ftab[block[i] & 0xff]++; } for (i = 1; i < 257; i++) { ftab[i] += ftab[i - 1]; } for (i = 0; i < nblock; i++) { j = block[i] & 0xff; k = ftab[j] - 1; ftab[j] = k; fmap[k] = i; } nBhtab = 64 + nblock; BitSet bhtab = new BitSet(nBhtab); for (i = 0; i < 256; i++) { bhtab.set(ftab[i]); } /*-- LBZ2: Inductively refine the buckets. Kind-of an "exponential radix sort" (!), inspired by the Manber-Myers suffix array construction algorithm. --*/ /*-- LBZ2: set sentinel bits for block-end detection --*/ for (i = 0; i < 32; i++) { bhtab.set(nblock + 2 * i); bhtab.clear(nblock + 2 * i + 1); } /*-- LBZ2: the log(N) loop --*/ H = 1; while (true) { j = 0; for (i = 0; i < nblock; i++) { if (bhtab.get(i)) { j = i; } k = fmap[i] - H; if (k < 0) { k += nblock; } eclass[k] = j; } nNotDone = 0; r = -1; while (true) { /*-- LBZ2: find the next non-singleton bucket --*/ k = r + 1; k = bhtab.nextClearBit(k); l = k - 1; if (l >= nblock) { break; } k = bhtab.nextSetBit(k + 1); r = k - 1; if (r >= nblock) { break; } /*-- LBZ2: now [l, r] bracket current bucket --*/ if (r > l) { nNotDone += (r - l + 1); fallbackQSort3(fmap, eclass, l, r); /*-- LBZ2: scan bucket and generate header bits-- */ cc = -1; for (i = l; i <= r; i++) { cc1 = eclass[fmap[i]]; if (cc != cc1) { bhtab.set(i); cc = cc1; } } } } H *= 2; if (H > nblock || nNotDone == 0) { break; } } }

* This is the version using unrolled loops. Normally I never use such ones * in Java code. The unrolling has shown a noticeable performance improvement * on JRE 1.4.2 (Linux i586 / HotSpot Client). Of course it depends on the * JIT compiler of the vm. *
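 * <p>For orientation, the non-unrolled shape of this method is a plain
 * Shellsort over the INCS increments (sketch only; the suffix
 * comparison is abstracted here as a hypothetical {@code gtU}, which
 * corresponds to the inlined mainGTU comparison below):</p>
 *
 * <pre>{@code
 * for (int hp = maxHp; --hp >= 0;) {
 *     final int h = INCS[hp];
 *     for (int i = lo + h; i <= hi; i++) {
 *         final int v = fmap[i];
 *         int j = i;
 *         while (j - h >= lo && gtU(fmap[j - h] + d, v + d)) {
 *             fmap[j] = fmap[j - h]; // shift larger suffix up
 *             j -= h;
 *         }
 *         fmap[j] = v; // insert v at its slot
 *     }
 * }
 * }</pre>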

*/ private boolean mainSimpleSort(final BZip2CompressorOutputStream.Data dataShadow, final int lo, final int hi, final int d, final int lastShadow) { final int bigN = hi - lo + 1; if (bigN < 2) { return this.firstAttempt && (this.workDone > this.workLimit); } int hp = 0; while (INCS[hp] < bigN) { hp++; } final int[] fmap = dataShadow.fmap; final char[] quadrant = this.quadrant; final byte[] block = dataShadow.block; final int lastPlus1 = lastShadow + 1; final boolean firstAttemptShadow = this.firstAttempt; final int workLimitShadow = this.workLimit; int workDoneShadow = this.workDone; // Following block contains unrolled code which could be shortened by // coding it in additional loops. HP: while (--hp >= 0) { final int h = INCS[hp]; final int mj = lo + h - 1; for (int i = lo + h; i <= hi;) { // copy for (int k = 3; (i <= hi) && (--k >= 0); i++) { final int v = fmap[i]; final int vd = v + d; int j = i; // for (int a; // (j > mj) && mainGtU((a = fmap[j - h]) + d, vd, // block, quadrant, lastShadow); // j -= h) { // fmap[j] = a; // } // // unrolled version: // start inline mainGTU boolean onceRunned = false; int a = 0; HAMMER: while (true) { if (onceRunned) { fmap[j] = a; if ((j -= h) <= mj) { break HAMMER; } } else { onceRunned = true; } a = fmap[j - h]; int i1 = a + d; int i2 = vd; // following could be done in a loop, but // unrolled it for performance: if (block[i1 + 1] == block[i2 + 1]) { if (block[i1 + 2] == block[i2 + 2]) { if (block[i1 + 3] == block[i2 + 3]) { if (block[i1 + 4] == block[i2 + 4]) { if (block[i1 + 5] == block[i2 + 5]) { if (block[(i1 += 6)] == block[(i2 += 6)]) { int x = lastShadow; X: while (x > 0) { x -= 4; if (block[i1 + 1] == block[i2 + 1]) { if (quadrant[i1] == quadrant[i2]) { if (block[i1 + 2] == block[i2 + 2]) { if (quadrant[i1 + 1] == quadrant[i2 + 1]) { if (block[i1 + 3] == block[i2 + 3]) { if (quadrant[i1 + 2] == quadrant[i2 + 2]) { if (block[i1 + 4] == block[i2 + 4]) { if (quadrant[i1 + 3] == quadrant[i2 + 3]) { if ((i1 += 4) >= lastPlus1) { i1 -= lastPlus1; } if ((i2 += 4) >= lastPlus1) { i2 -= lastPlus1; } workDoneShadow++; continue X; } else if ((quadrant[i1 + 3] > quadrant[i2 + 3])) { continue HAMMER; } else { break HAMMER; } } else if ((block[i1 + 4] & 0xff) > (block[i2 + 4] & 0xff)) { continue HAMMER; } else { break HAMMER; } } else if ((quadrant[i1 + 2] > quadrant[i2 + 2])) { continue HAMMER; } else { break HAMMER; } } else if ((block[i1 + 3] & 0xff) > (block[i2 + 3] & 0xff)) { continue HAMMER; } else { break HAMMER; } } else if ((quadrant[i1 + 1] > quadrant[i2 + 1])) { continue HAMMER; } else { break HAMMER; } } else if ((block[i1 + 2] & 0xff) > (block[i2 + 2] & 0xff)) { continue HAMMER; } else { break HAMMER; } } else if ((quadrant[i1] > quadrant[i2])) { continue HAMMER; } else { break HAMMER; } } else if ((block[i1 + 1] & 0xff) > (block[i2 + 1] & 0xff)) { continue HAMMER; } else { break HAMMER; } } break HAMMER; } // while x > 0 else { if ((block[i1] & 0xff) > (block[i2] & 0xff)) { continue HAMMER; } else { break HAMMER; } } } else if ((block[i1 + 5] & 0xff) > (block[i2 + 5] & 0xff)) { continue HAMMER; } else { break HAMMER; } } else if ((block[i1 + 4] & 0xff) > (block[i2 + 4] & 0xff)) { continue HAMMER; } else { break HAMMER; } } else if ((block[i1 + 3] & 0xff) > (block[i2 + 3] & 0xff)) { continue HAMMER; } else { break HAMMER; } } else if ((block[i1 + 2] & 0xff) > (block[i2 + 2] & 0xff)) { continue HAMMER; } else { break HAMMER; } } else if ((block[i1 + 1] & 0xff) > (block[i2 + 1] & 0xff)) { continue HAMMER; } else { break HAMMER; } } // HAMMER 
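// falling out of HAMMER means the final insertion slot j for v has been found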
// end inline mainGTU fmap[j] = v; } if (firstAttemptShadow && (i <= hi) && (workDoneShadow > workLimitShadow)) { break HP; } } } this.workDone = workDoneShadow; return firstAttemptShadow && (workDoneShadow > workLimitShadow); } /*-- LBZ2: The following is an implementation of an elegant 3-way quicksort for strings, described in a paper "Fast Algorithms for Sorting and Searching Strings", by Robert Sedgewick and Jon L. Bentley. --*/ private static void vswap(int[] fmap, int p1, int p2, int n) { n += p1; while (p1 < n) { int t = fmap[p1]; fmap[p1++] = fmap[p2]; fmap[p2++] = t; } } private static byte med3(byte a, byte b, byte c) { return (a < b) ? (b < c ? b : a < c ? c : a) : (b > c ? b : a > c ? c : a); } private static final int SMALL_THRESH = 20; private static final int DEPTH_THRESH = 10; private static final int WORK_FACTOR = 30; /** * Method "mainQSort3", file "blocksort.c", BZip2 1.0.2 */ private void mainQSort3(final BZip2CompressorOutputStream.Data dataShadow, final int loSt, final int hiSt, final int dSt, final int last) { final int[] stack_ll = this.stack_ll; final int[] stack_hh = this.stack_hh; final int[] stack_dd = this.stack_dd; final int[] fmap = dataShadow.fmap; final byte[] block = dataShadow.block; stack_ll[0] = loSt; stack_hh[0] = hiSt; stack_dd[0] = dSt; for (int sp = 1; --sp >= 0;) { final int lo = stack_ll[sp]; final int hi = stack_hh[sp]; final int d = stack_dd[sp]; if ((hi - lo < SMALL_THRESH) || (d > DEPTH_THRESH)) { if (mainSimpleSort(dataShadow, lo, hi, d, last)) { return; } } else { final int d1 = d + 1; final int med = med3(block[fmap[lo] + d1], block[fmap[hi] + d1], block[fmap[(lo + hi) >>> 1] + d1]) & 0xff; int unLo = lo; int unHi = hi; int ltLo = lo; int gtHi = hi; while (true) { while (unLo <= unHi) { final int n = (block[fmap[unLo] + d1] & 0xff) - med; if (n == 0) { final int temp = fmap[unLo]; fmap[unLo++] = fmap[ltLo]; fmap[ltLo++] = temp; } else if (n < 0) { unLo++; } else { break; } } while (unLo <= unHi) { final int n = (block[fmap[unHi] + d1] & 0xff) - med; if (n == 0) { final int temp = fmap[unHi]; fmap[unHi--] = fmap[gtHi]; fmap[gtHi--] = temp; } else if (n > 0) { unHi--; } else { break; } } if (unLo <= unHi) { final int temp = fmap[unLo]; fmap[unLo++] = fmap[unHi]; fmap[unHi--] = temp; } else { break; } } if (gtHi < ltLo) { stack_ll[sp] = lo; stack_hh[sp] = hi; stack_dd[sp] = d1; sp++; } else { int n = ((ltLo - lo) < (unLo - ltLo)) ? (ltLo - lo) : (unLo - ltLo); vswap(fmap, lo, unLo - n, n); int m = ((hi - gtHi) < (gtHi - unHi)) ? 
(hi - gtHi) : (gtHi - unHi); vswap(fmap, unLo, hi - m + 1, m); n = lo + unLo - ltLo - 1; m = hi - (gtHi - unHi) + 1; stack_ll[sp] = lo; stack_hh[sp] = n; stack_dd[sp] = d; sp++; stack_ll[sp] = n + 1; stack_hh[sp] = m - 1; stack_dd[sp] = d1; sp++; stack_ll[sp] = m; stack_hh[sp] = hi; stack_dd[sp] = d; sp++; } } } } private static final int SETMASK = (1 << 21); private static final int CLEARMASK = (~SETMASK); final void mainSort(final BZip2CompressorOutputStream.Data dataShadow, final int lastShadow) { final int[] runningOrder = this.mainSort_runningOrder; final int[] copy = this.mainSort_copy; final boolean[] bigDone = this.mainSort_bigDone; final int[] ftab = this.ftab; final byte[] block = dataShadow.block; final int[] fmap = dataShadow.fmap; final char[] quadrant = this.quadrant; final int workLimitShadow = this.workLimit; final boolean firstAttemptShadow = this.firstAttempt; // LBZ2: Set up the 2-byte frequency table for (int i = 65537; --i >= 0;) { ftab[i] = 0; } /* * In the various block-sized structures, live data runs from 0 to * last+NUM_OVERSHOOT_BYTES inclusive. First, set up the overshoot area * for block. */ for (int i = 0; i < BZip2Constants.NUM_OVERSHOOT_BYTES; i++) { block[lastShadow + i + 2] = block[(i % (lastShadow + 1)) + 1]; } for (int i = lastShadow + BZip2Constants.NUM_OVERSHOOT_BYTES +1; --i >= 0;) { quadrant[i] = 0; } block[0] = block[lastShadow + 1]; // LBZ2: Complete the initial radix sort: int c1 = block[0] & 0xff; for (int i = 0; i <= lastShadow; i++) { final int c2 = block[i + 1] & 0xff; ftab[(c1 << 8) + c2]++; c1 = c2; } for (int i = 1; i <= 65536; i++) { ftab[i] += ftab[i - 1]; } c1 = block[1] & 0xff; for (int i = 0; i < lastShadow; i++) { final int c2 = block[i + 2] & 0xff; fmap[--ftab[(c1 << 8) + c2]] = i; c1 = c2; } fmap[--ftab[((block[lastShadow + 1] & 0xff) << 8) + (block[1] & 0xff)]] = lastShadow; /* * LBZ2: Now ftab contains the first loc of every small bucket. Calculate the * running order, from smallest to largest big bucket. */ for (int i = 256; --i >= 0;) { bigDone[i] = false; runningOrder[i] = i; } for (int h = 364; h != 1;) { h /= 3; for (int i = h; i <= 255; i++) { final int vv = runningOrder[i]; final int a = ftab[(vv + 1) << 8] - ftab[vv << 8]; final int b = h - 1; int j = i; for (int ro = runningOrder[j - h]; (ftab[(ro + 1) << 8] - ftab[ro << 8]) > a; ro = runningOrder[j - h]) { runningOrder[j] = ro; j -= h; if (j <= b) { break; } } runningOrder[j] = vv; } } /* * LBZ2: The main sorting loop. */ for (int i = 0; i <= 255; i++) { /* * LBZ2: Process big buckets, starting with the least full. */ final int ss = runningOrder[i]; // Step 1: /* * LBZ2: Complete the big bucket [ss] by quicksorting any unsorted small * buckets [ss, j]. Hopefully previous pointer-scanning phases have * already completed many of the small buckets [ss, j], so we don't * have to sort them at all. */ for (int j = 0; j <= 255; j++) { final int sb = (ss << 8) + j; final int ftab_sb = ftab[sb]; if ((ftab_sb & SETMASK) != SETMASK) { final int lo = ftab_sb & CLEARMASK; final int hi = (ftab[sb + 1] & CLEARMASK) - 1; if (hi > lo) { mainQSort3(dataShadow, lo, hi, 2, lastShadow); if (firstAttemptShadow && (this.workDone > workLimitShadow)) { return; } } ftab[sb] = ftab_sb | SETMASK; } } // Step 2: // LBZ2: Now scan this big bucket so as to synthesise the // sorted order for small buckets [t, ss] for all t != ss. 
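// (i.e. each already-sorted suffix in bucket ss yields its predecessor suffix, which is dropped into the next free slot of the small bucket [first-byte-of-predecessor, ss] - no further comparison needed)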
for (int j = 0; j <= 255; j++) { copy[j] = ftab[(j << 8) + ss] & CLEARMASK; } for (int j = ftab[ss << 8] & CLEARMASK, hj = (ftab[(ss + 1) << 8] & CLEARMASK); j < hj; j++) { final int fmap_j = fmap[j]; c1 = block[fmap_j] & 0xff; if (!bigDone[c1]) { fmap[copy[c1]] = (fmap_j == 0) ? lastShadow : (fmap_j - 1); copy[c1]++; } } for (int j = 256; --j >= 0;) { ftab[(j << 8) + ss] |= SETMASK; } // Step 3: /* * LBZ2: The ss big bucket is now done. Record this fact, and update the * quadrant descriptors. Remember to update quadrants in the * overshoot area too, if necessary. The "if (i < 255)" test merely * skips this updating for the last bucket processed, since updating * for the last bucket is pointless. */ bigDone[ss] = true; if (i < 255) { final int bbStart = ftab[ss << 8] & CLEARMASK; final int bbSize = (ftab[(ss + 1) << 8] & CLEARMASK) - bbStart; int shifts = 0; while ((bbSize >> shifts) > 65534) { shifts++; } for (int j = 0; j < bbSize; j++) { final int a2update = fmap[bbStart + j]; final char qVal = (char) (j >> shifts); quadrant[a2update] = qVal; if (a2update < BZip2Constants.NUM_OVERSHOOT_BYTES) { quadrant[a2update + lastShadow + 1] = qVal; } } } } } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ /* * This package is based on the work done by Keiron Liddle, Aftex Software * to whom the Ant project is very grateful for his * great code. */ package org.apache.commons.compress.compressors.bzip2; import java.io.IOException; import java.io.InputStream; import org.apache.commons.compress.compressors.CompressorInputStream; /** * An input stream that decompresses from the BZip2 format to be read as any other stream. * * @NotThreadSafe */ public class BZip2CompressorInputStream extends CompressorInputStream implements BZip2Constants { /** * Index of the last char in the block, so the block size == last + 1. */ private int last; /** * Index in zptr[] of original string after sorting. */ private int origPtr; /** * always: in the range 0 .. 9. The current block size is 100000 * this * number. 
*/ private int blockSize100k; private boolean blockRandomised; private int bsBuff; private int bsLive; private final CRC crc = new CRC(); private int nInUse; private InputStream in; private final boolean decompressConcatenated; private static final int EOF = 0; private static final int START_BLOCK_STATE = 1; private static final int RAND_PART_A_STATE = 2; private static final int RAND_PART_B_STATE = 3; private static final int RAND_PART_C_STATE = 4; private static final int NO_RAND_PART_A_STATE = 5; private static final int NO_RAND_PART_B_STATE = 6; private static final int NO_RAND_PART_C_STATE = 7; private int currentState = START_BLOCK_STATE; private int storedBlockCRC, storedCombinedCRC; private int computedBlockCRC, computedCombinedCRC; // Variables used by setup* methods exclusively private int su_count; private int su_ch2; private int su_chPrev; private int su_i2; private int su_j2; private int su_rNToGo; private int su_rTPos; private int su_tPos; private char su_z; /** * All memory intensive stuff. This field is initialized by initBlock(). */ private BZip2CompressorInputStream.Data data; /** * Constructs a new BZip2CompressorInputStream which decompresses bytes * read from the specified stream. This doesn't support decompressing * concatenated .bz2 files. * * @param in the InputStream from which this object should be created * @throws IOException * if the stream content is malformed or an I/O error occurs. * @throws NullPointerException * if {@code in == null} */ public BZip2CompressorInputStream(final InputStream in) throws IOException { this(in, false); } /** * Constructs a new BZip2CompressorInputStream which decompresses bytes * read from the specified stream. * * @param in the InputStream from which this object should be created * @param decompressConcatenated * if true, decompress until the end of the input; * if false, stop after the first .bz2 stream and * leave the input position to point to the next * byte after the .bz2 stream * * @throws IOException * if the stream content is malformed or an I/O error occurs. * @throws NullPointerException * if {@code in == null} */ public BZip2CompressorInputStream(final InputStream in, final boolean decompressConcatenated) throws IOException { this.in = in; this.decompressConcatenated = decompressConcatenated; init(true); initBlock(); } @Override public int read() throws IOException { if (this.in != null) { int r = read0(); count(r < 0 ? -1 : 1); return r; } else { throw new IOException("stream closed"); } } /* * (non-Javadoc) * * @see java.io.InputStream#read(byte[], int, int) */ @Override public int read(final byte[] dest, final int offs, final int len) throws IOException { if (offs < 0) { throw new IndexOutOfBoundsException("offs(" + offs + ") < 0."); } if (len < 0) { throw new IndexOutOfBoundsException("len(" + len + ") < 0."); } if (offs + len > dest.length) { throw new IndexOutOfBoundsException("offs(" + offs + ") + len(" + len + ") > dest.length(" + dest.length + ")."); } if (this.in == null) { throw new IOException("stream closed"); } if (len == 0) { return 0; } final int hi = offs + len; int destOffs = offs; int b; while (destOffs < hi && ((b = read0()) >= 0)) { dest[destOffs++] = (byte) b; count(1); } int c = (destOffs == offs) ? -1 : (destOffs - offs); return c; } private void makeMaps() { final boolean[] inUse = this.data.inUse; final byte[] seqToUnseq = this.data.seqToUnseq; int nInUseShadow = 0; for (int i = 0; i < 256; i++) { if (inUse[i]) { seqToUnseq[nInUseShadow++] = (byte) i; } } this.nInUse = nInUseShadow; } private int read0() throws IOException { switch (currentState) { case EOF: return -1; case START_BLOCK_STATE: return setupBlock(); case RAND_PART_A_STATE: throw new IllegalStateException(); case RAND_PART_B_STATE: return setupRandPartB(); case RAND_PART_C_STATE: return setupRandPartC(); case NO_RAND_PART_A_STATE: throw new IllegalStateException(); case NO_RAND_PART_B_STATE: return setupNoRandPartB(); case NO_RAND_PART_C_STATE: return setupNoRandPartC(); default: throw new IllegalStateException(); } } private boolean init(boolean isFirstStream) throws IOException { if (null == in) { throw new IOException("No InputStream"); } int magic0 = this.in.read(); if (magic0 == -1 && !isFirstStream) { return false; } int magic1 = this.in.read(); int magic2 = this.in.read(); if (magic0 != 'B' || magic1 != 'Z' || magic2 != 'h') { throw new IOException(isFirstStream ? "Stream is not in the BZip2 format" : "Garbage after a valid BZip2 stream"); } int blockSize = this.in.read(); if ((blockSize < '1') || (blockSize > '9')) { throw new IOException("BZip2 block size is invalid"); } this.blockSize100k = blockSize - '0'; this.bsLive = 0; this.computedCombinedCRC = 0; return true; } private void initBlock() throws IOException { char magic0; char magic1; char magic2; char magic3; char magic4; char magic5; while (true) { // Get the block magic bytes. magic0 = bsGetUByte(); magic1 = bsGetUByte(); magic2 = bsGetUByte(); magic3 = bsGetUByte(); magic4 = bsGetUByte(); magic5 = bsGetUByte(); // If it isn't the end-of-stream magic, break out of the loop. if (magic0 != 0x17 || magic1 != 0x72 || magic2 != 0x45 || magic3 != 0x38 || magic4 != 0x50 || magic5 != 0x90) { break; } // End of stream was reached. Check the combined CRC and // advance to the next .bz2 stream if decoding concatenated // streams. if (complete()) { return; } } if (magic0 != 0x31 || // '1' magic1 != 0x41 || // 'A' magic2 != 0x59 || // 'Y' magic3 != 0x26 || // '&' magic4 != 0x53 || // 'S' magic5 != 0x59 // 'Y' ) { this.currentState = EOF; throw new IOException("bad block header"); } else { this.storedBlockCRC = bsGetInt(); this.blockRandomised = bsR(1) == 1; /** * Allocate data here instead of in the constructor, so we do not allocate * it if the input file is empty. */ if (this.data == null) { this.data = new Data(this.blockSize100k); } // currBlockNo++; getAndMoveToFrontDecode(); this.crc.initialiseCRC(); this.currentState = START_BLOCK_STATE; } } private void endBlock() throws IOException { this.computedBlockCRC = this.crc.getFinalCRC(); // A bad CRC is considered a fatal error.
if (this.storedBlockCRC != this.computedBlockCRC) { // make next blocks readable without error // (repair feature, not yet documented, not tested) this.computedCombinedCRC = (this.storedCombinedCRC << 1) | (this.storedCombinedCRC >>> 31); this.computedCombinedCRC ^= this.storedBlockCRC; throw new IOException("BZip2 CRC error"); } this.computedCombinedCRC = (this.computedCombinedCRC << 1) | (this.computedCombinedCRC >>> 31); this.computedCombinedCRC ^= this.computedBlockCRC; } private boolean complete() throws IOException { this.storedCombinedCRC = bsGetInt(); this.currentState = EOF; this.data = null; if (this.storedCombinedCRC != this.computedCombinedCRC) { throw new IOException("BZip2 CRC error"); } // Look for the next .bz2 stream if decompressing // concatenated files. return !decompressConcatenated || !init(false); } @Override public void close() throws IOException { InputStream inShadow = this.in; if (inShadow != null) { try { if (inShadow != System.in) { inShadow.close(); } } finally { this.data = null; this.in = null; } } } private int bsR(final int n) throws IOException { int bsLiveShadow = this.bsLive; int bsBuffShadow = this.bsBuff; if (bsLiveShadow < n) { final InputStream inShadow = this.in; do { int thech = inShadow.read(); if (thech < 0) { throw new IOException("unexpected end of stream"); } bsBuffShadow = (bsBuffShadow << 8) | thech; bsLiveShadow += 8; } while (bsLiveShadow < n); this.bsBuff = bsBuffShadow; } this.bsLive = bsLiveShadow - n; return (bsBuffShadow >> (bsLiveShadow - n)) & ((1 << n) - 1); } private boolean bsGetBit() throws IOException { int bsLiveShadow = this.bsLive; int bsBuffShadow = this.bsBuff; if (bsLiveShadow < 1) { int thech = this.in.read(); if (thech < 0) { throw new IOException("unexpected end of stream"); } bsBuffShadow = (bsBuffShadow << 8) | thech; bsLiveShadow += 8; this.bsBuff = bsBuffShadow; } this.bsLive = bsLiveShadow - 1; return ((bsBuffShadow >> (bsLiveShadow - 1)) & 1) != 0; } private char bsGetUByte() throws IOException { return (char) bsR(8); } private int bsGetInt() throws IOException { return (((((bsR(8) << 8) | bsR(8)) << 8) | bsR(8)) << 8) | bsR(8); } /** * Called by createHuffmanDecodingTables() exclusively. 
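 * Builds the tables for canonical Huffman decoding: {@code perm} lists
 * the symbols ordered by ascending code length, {@code limit[i]} holds
 * the largest code value of length {@code i}, and {@code base} holds
 * the per-length offsets used to locate a code's symbol in {@code perm}.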
*/ private static void hbCreateDecodeTables(final int[] limit, final int[] base, final int[] perm, final char[] length, final int minLen, final int maxLen, final int alphaSize) { for (int i = minLen, pp = 0; i <= maxLen; i++) { for (int j = 0; j < alphaSize; j++) { if (length[j] == i) { perm[pp++] = j; } } } for (int i = MAX_CODE_LEN; --i > 0;) { base[i] = 0; limit[i] = 0; } for (int i = 0; i < alphaSize; i++) { base[length[i] + 1]++; } for (int i = 1, b = base[0]; i < MAX_CODE_LEN; i++) { b += base[i]; base[i] = b; } for (int i = minLen, vec = 0, b = base[i]; i <= maxLen; i++) { final int nb = base[i + 1]; vec += nb - b; b = nb; limit[i] = vec - 1; vec <<= 1; } for (int i = minLen + 1; i <= maxLen; i++) { base[i] = ((limit[i - 1] + 1) << 1) - base[i]; } } private void recvDecodingTables() throws IOException { final Data dataShadow = this.data; final boolean[] inUse = dataShadow.inUse; final byte[] pos = dataShadow.recvDecodingTables_pos; final byte[] selector = dataShadow.selector; final byte[] selectorMtf = dataShadow.selectorMtf; int inUse16 = 0; /* Receive the mapping table */ for (int i = 0; i < 16; i++) { if (bsGetBit()) { inUse16 |= 1 << i; } } for (int i = 256; --i >= 0;) { inUse[i] = false; } for (int i = 0; i < 16; i++) { if ((inUse16 & (1 << i)) != 0) { final int i16 = i << 4; for (int j = 0; j < 16; j++) { if (bsGetBit()) { inUse[i16 + j] = true; } } } } makeMaps(); final int alphaSize = this.nInUse + 2; /* Now the selectors */ final int nGroups = bsR(3); final int nSelectors = bsR(15); for (int i = 0; i < nSelectors; i++) { int j = 0; while (bsGetBit()) { j++; } selectorMtf[i] = (byte) j; } /* Undo the MTF values for the selectors. */ for (int v = nGroups; --v >= 0;) { pos[v] = (byte) v; } for (int i = 0; i < nSelectors; i++) { int v = selectorMtf[i] & 0xff; final byte tmp = pos[v]; while (v > 0) { // nearly all times v is zero, 4 in most other cases pos[v] = pos[v - 1]; v--; } pos[0] = tmp; selector[i] = tmp; } final char[][] len = dataShadow.temp_charArray2d; /* Now the coding tables */ for (int t = 0; t < nGroups; t++) { int curr = bsR(5); final char[] len_t = len[t]; for (int i = 0; i < alphaSize; i++) { while (bsGetBit()) { curr += bsGetBit() ? -1 : 1; } len_t[i] = (char) curr; } } // finally create the Huffman tables createHuffmanDecodingTables(alphaSize, nGroups); } /** * Called by recvDecodingTables() exclusively. 
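 * Determines the minimum and maximum code lengths used by each group's
 * table and lets {@code hbCreateDecodeTables} build that group's
 * limit/base/perm structures from them.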
*/ private void createHuffmanDecodingTables(final int alphaSize, final int nGroups) { final Data dataShadow = this.data; final char[][] len = dataShadow.temp_charArray2d; final int[] minLens = dataShadow.minLens; final int[][] limit = dataShadow.limit; final int[][] base = dataShadow.base; final int[][] perm = dataShadow.perm; for (int t = 0; t < nGroups; t++) { int minLen = 32; int maxLen = 0; final char[] len_t = len[t]; for (int i = alphaSize; --i >= 0;) { final char lent = len_t[i]; if (lent > maxLen) { maxLen = lent; } if (lent < minLen) { minLen = lent; } } hbCreateDecodeTables(limit[t], base[t], perm[t], len[t], minLen, maxLen, alphaSize); minLens[t] = minLen; } } private void getAndMoveToFrontDecode() throws IOException { this.origPtr = bsR(24); recvDecodingTables(); final InputStream inShadow = this.in; final Data dataShadow = this.data; final byte[] ll8 = dataShadow.ll8; final int[] unzftab = dataShadow.unzftab; final byte[] selector = dataShadow.selector; final byte[] seqToUnseq = dataShadow.seqToUnseq; final char[] yy = dataShadow.getAndMoveToFrontDecode_yy; final int[] minLens = dataShadow.minLens; final int[][] limit = dataShadow.limit; final int[][] base = dataShadow.base; final int[][] perm = dataShadow.perm; final int limitLast = this.blockSize100k * 100000; /* * Setting up the unzftab entries here is not strictly necessary, but it * does save having to do it later in a separate pass, and so saves a * block's worth of cache misses. */ for (int i = 256; --i >= 0;) { yy[i] = (char) i; unzftab[i] = 0; } int groupNo = 0; int groupPos = G_SIZE - 1; final int eob = this.nInUse + 1; int nextSym = getAndMoveToFrontDecode0(0); int bsBuffShadow = this.bsBuff; int bsLiveShadow = this.bsLive; int lastShadow = -1; int zt = selector[groupNo] & 0xff; int[] base_zt = base[zt]; int[] limit_zt = limit[zt]; int[] perm_zt = perm[zt]; int minLens_zt = minLens[zt]; while (nextSym != eob) { if ((nextSym == RUNA) || (nextSym == RUNB)) { int s = -1; for (int n = 1; true; n <<= 1) { if (nextSym == RUNA) { s += n; } else if (nextSym == RUNB) { s += n << 1; } else { break; } if (groupPos == 0) { groupPos = G_SIZE - 1; zt = selector[++groupNo] & 0xff; base_zt = base[zt]; limit_zt = limit[zt]; perm_zt = perm[zt]; minLens_zt = minLens[zt]; } else { groupPos--; } int zn = minLens_zt; // Inlined: // int zvec = bsR(zn); while (bsLiveShadow < zn) { final int thech = inShadow.read(); if (thech >= 0) { bsBuffShadow = (bsBuffShadow << 8) | thech; bsLiveShadow += 8; continue; } else { throw new IOException("unexpected end of stream"); } } int zvec = (bsBuffShadow >> (bsLiveShadow - zn)) & ((1 << zn) - 1); bsLiveShadow -= zn; while (zvec > limit_zt[zn]) { zn++; while (bsLiveShadow < 1) { final int thech = inShadow.read(); if (thech >= 0) { bsBuffShadow = (bsBuffShadow << 8) | thech; bsLiveShadow += 8; continue; } else { throw new IOException( "unexpected end of stream"); } } bsLiveShadow--; zvec = (zvec << 1) | ((bsBuffShadow >> bsLiveShadow) & 1); } nextSym = perm_zt[zvec - base_zt[zn]]; } final byte ch = seqToUnseq[yy[0]]; unzftab[ch & 0xff] += s + 1; while (s-- >= 0) { ll8[++lastShadow] = ch; } if (lastShadow >= limitLast) { throw new IOException("block overrun"); } } else { if (++lastShadow >= limitLast) { throw new IOException("block overrun"); } final char tmp = yy[nextSym - 1]; unzftab[seqToUnseq[tmp] & 0xff]++; ll8[lastShadow] = seqToUnseq[tmp]; /* * This loop is hammered during decompression, hence avoid * native method call overhead of System.arraycopy for very * small ranges to copy. 
*/ if (nextSym <= 16) { for (int j = nextSym - 1; j > 0;) { yy[j] = yy[--j]; } } else { System.arraycopy(yy, 0, yy, 1, nextSym - 1); } yy[0] = tmp; if (groupPos == 0) { groupPos = G_SIZE - 1; zt = selector[++groupNo] & 0xff; base_zt = base[zt]; limit_zt = limit[zt]; perm_zt = perm[zt]; minLens_zt = minLens[zt]; } else { groupPos--; } int zn = minLens_zt; // Inlined: // int zvec = bsR(zn); while (bsLiveShadow < zn) { final int thech = inShadow.read(); if (thech >= 0) { bsBuffShadow = (bsBuffShadow << 8) | thech; bsLiveShadow += 8; continue; } else { throw new IOException("unexpected end of stream"); } } int zvec = (bsBuffShadow >> (bsLiveShadow - zn)) & ((1 << zn) - 1); bsLiveShadow -= zn; while (zvec > limit_zt[zn]) { zn++; while (bsLiveShadow < 1) { final int thech = inShadow.read(); if (thech >= 0) { bsBuffShadow = (bsBuffShadow << 8) | thech; bsLiveShadow += 8; continue; } else { throw new IOException("unexpected end of stream"); } } bsLiveShadow--; zvec = (zvec << 1) | ((bsBuffShadow >> bsLiveShadow) & 1); } nextSym = perm_zt[zvec - base_zt[zn]]; } } this.last = lastShadow; this.bsLive = bsLiveShadow; this.bsBuff = bsBuffShadow; } private int getAndMoveToFrontDecode0(final int groupNo) throws IOException { final InputStream inShadow = this.in; final Data dataShadow = this.data; final int zt = dataShadow.selector[groupNo] & 0xff; final int[] limit_zt = dataShadow.limit[zt]; int zn = dataShadow.minLens[zt]; int zvec = bsR(zn); int bsLiveShadow = this.bsLive; int bsBuffShadow = this.bsBuff; while (zvec > limit_zt[zn]) { zn++; while (bsLiveShadow < 1) { final int thech = inShadow.read(); if (thech >= 0) { bsBuffShadow = (bsBuffShadow << 8) | thech; bsLiveShadow += 8; continue; } else { throw new IOException("unexpected end of stream"); } } bsLiveShadow--; zvec = (zvec << 1) | ((bsBuffShadow >> bsLiveShadow) & 1); } this.bsLive = bsLiveShadow; this.bsBuff = bsBuffShadow; return dataShadow.perm[zt][zvec - dataShadow.base[zt][zn]]; } private int setupBlock() throws IOException { if (currentState == EOF || this.data == null) { return -1; } final int[] cftab = this.data.cftab; final int[] tt = this.data.initTT(this.last + 1); final byte[] ll8 = this.data.ll8; cftab[0] = 0; System.arraycopy(this.data.unzftab, 0, cftab, 1, 256); for (int i = 1, c = cftab[0]; i <= 256; i++) { c += cftab[i]; cftab[i] = c; } for (int i = 0, lastShadow = this.last; i <= lastShadow; i++) { tt[cftab[ll8[i] & 0xff]++] = i; } if ((this.origPtr < 0) || (this.origPtr >= tt.length)) { throw new IOException("stream corrupted"); } this.su_tPos = tt[this.origPtr]; this.su_count = 0; this.su_i2 = 0; this.su_ch2 = 256; /* not a char and not EOF */ if (this.blockRandomised) { this.su_rNToGo = 0; this.su_rTPos = 0; return setupRandPartA(); } return setupNoRandPartA(); } private int setupRandPartA() throws IOException { if (this.su_i2 <= this.last) { this.su_chPrev = this.su_ch2; int su_ch2Shadow = this.data.ll8[this.su_tPos] & 0xff; this.su_tPos = this.data.tt[this.su_tPos]; if (this.su_rNToGo == 0) { this.su_rNToGo = Rand.rNums(this.su_rTPos) - 1; if (++this.su_rTPos == 512) { this.su_rTPos = 0; } } else { this.su_rNToGo--; } this.su_ch2 = su_ch2Shadow ^= (this.su_rNToGo == 1) ? 
1 : 0; this.su_i2++; this.currentState = RAND_PART_B_STATE; this.crc.updateCRC(su_ch2Shadow); return su_ch2Shadow; } else { endBlock(); initBlock(); return setupBlock(); } } private int setupNoRandPartA() throws IOException { if (this.su_i2 <= this.last) { this.su_chPrev = this.su_ch2; int su_ch2Shadow = this.data.ll8[this.su_tPos] & 0xff; this.su_ch2 = su_ch2Shadow; this.su_tPos = this.data.tt[this.su_tPos]; this.su_i2++; this.currentState = NO_RAND_PART_B_STATE; this.crc.updateCRC(su_ch2Shadow); return su_ch2Shadow; } else { this.currentState = NO_RAND_PART_A_STATE; endBlock(); initBlock(); return setupBlock(); } } private int setupRandPartB() throws IOException { if (this.su_ch2 != this.su_chPrev) { this.currentState = RAND_PART_A_STATE; this.su_count = 1; return setupRandPartA(); } else if (++this.su_count >= 4) { this.su_z = (char) (this.data.ll8[this.su_tPos] & 0xff); this.su_tPos = this.data.tt[this.su_tPos]; if (this.su_rNToGo == 0) { this.su_rNToGo = Rand.rNums(this.su_rTPos) - 1; if (++this.su_rTPos == 512) { this.su_rTPos = 0; } } else { this.su_rNToGo--; } this.su_j2 = 0; this.currentState = RAND_PART_C_STATE; if (this.su_rNToGo == 1) { this.su_z ^= 1; } return setupRandPartC(); } else { this.currentState = RAND_PART_A_STATE; return setupRandPartA(); } } private int setupRandPartC() throws IOException { if (this.su_j2 < this.su_z) { this.crc.updateCRC(this.su_ch2); this.su_j2++; return this.su_ch2; } else { this.currentState = RAND_PART_A_STATE; this.su_i2++; this.su_count = 0; return setupRandPartA(); } } private int setupNoRandPartB() throws IOException { if (this.su_ch2 != this.su_chPrev) { this.su_count = 1; return setupNoRandPartA(); } else if (++this.su_count >= 4) { this.su_z = (char) (this.data.ll8[this.su_tPos] & 0xff); this.su_tPos = this.data.tt[this.su_tPos]; this.su_j2 = 0; return setupNoRandPartC(); } else { return setupNoRandPartA(); } } private int setupNoRandPartC() throws IOException { if (this.su_j2 < this.su_z) { int su_ch2Shadow = this.su_ch2; this.crc.updateCRC(su_ch2Shadow); this.su_j2++; this.currentState = NO_RAND_PART_C_STATE; return su_ch2Shadow; } else { this.su_i2++; this.su_count = 0; return setupNoRandPartA(); } } private static final class Data { // (with blockSize 900k) final boolean[] inUse = new boolean[256]; // 256 byte final byte[] seqToUnseq = new byte[256]; // 256 byte final byte[] selector = new byte[MAX_SELECTORS]; // 18002 byte final byte[] selectorMtf = new byte[MAX_SELECTORS]; // 18002 byte /** * Freq table collected to save a pass over the data during * decompression. */ final int[] unzftab = new int[256]; // 1024 byte final int[][] limit = new int[N_GROUPS][MAX_ALPHA_SIZE]; // 6192 byte final int[][] base = new int[N_GROUPS][MAX_ALPHA_SIZE]; // 6192 byte final int[][] perm = new int[N_GROUPS][MAX_ALPHA_SIZE]; // 6192 byte final int[] minLens = new int[N_GROUPS]; // 24 byte final int[] cftab = new int[257]; // 1028 byte final char[] getAndMoveToFrontDecode_yy = new char[256]; // 512 byte final char[][] temp_charArray2d = new char[N_GROUPS][MAX_ALPHA_SIZE]; // 3096 // byte final byte[] recvDecodingTables_pos = new byte[N_GROUPS]; // 6 byte // --------------- // 60798 byte int[] tt; // 3600000 byte byte[] ll8; // 900000 byte // --------------- // 4560782 byte // =============== Data(int blockSize100k) { this.ll8 = new byte[blockSize100k * BZip2Constants.BASEBLOCKSIZE]; } /** * Initializes the {@link #tt} array. * * This method is called when the required length of the array is known. 
* I don't initialize it at construction time to avoid unnecessary * memory allocation when compressing small files. */ int[] initTT(int length) { int[] ttShadow = this.tt; // tt.length should always be >= length, but theoretically // it can happen if the compressor mixed small and large // blocks. Normally only the last block will be smaller // than others. if ((ttShadow == null) || (ttShadow.length < length)) { this.tt = ttShadow = new int[length]; } return ttShadow; } } /** * Checks if the signature matches what is expected for a bzip2 file. * * @param signature * the bytes to check * @param length * the number of bytes to check * @return true, if this stream is a bzip2 compressed stream, false otherwise * * @since 1.1 */ public static boolean matches(byte[] signature, int length) { if (length < 3) { return false; } if (signature[0] != 'B') { return false; } if (signature[1] != 'Z') { return false; } if (signature[2] != 'h') { return false; } return true; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ /* * This package is based on the work done by Keiron Liddle, Aftex Software * to whom the Ant project is very grateful for his * great code. */ package org.apache.commons.compress.compressors.bzip2; import java.io.IOException; import java.io.OutputStream; import org.apache.commons.compress.compressors.CompressorOutputStream; /** * An output stream that compresses into the BZip2 format and writes to another stream. *

 * <p>The compression requires large amounts of memory. Thus you should
 * call the {@link #close() close()} method as soon as possible, to force
 * {@code BZip2CompressorOutputStream} to release the allocated memory.</p>
 *

 * <p>You can shrink the amount of allocated memory and maybe raise
 * the compression speed by choosing a lower blocksize, which in turn
 * may cause a lower compression ratio. You can avoid unnecessary
 * memory allocation by avoiding using a blocksize which is bigger
 * than the size of the input.</p>
 *

 * <p>You can compute the memory usage for compressing by the
 * following formula:</p>
 *
 * <pre>
 * <code>400k + (9 * blocksize)</code>.
 * </pre>
 *
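 * <p>For example (matching the table below): the maximum 900k blocksize
 * needs 400k + 9 * 900k = 8500k of memory for compression.</p>
 *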

 * <p>To get the memory required for decompression by {@link
 * BZip2CompressorInputStream} use</p>
 *
 * <pre>
 * <code>65k + (5 * blocksize)</code>.
 * </pre>
 *
 * <table border="1">
 * <caption>Memory usage by blocksize</caption>
 * <tr><th>Blocksize</th><th>Compression<br>memory usage</th><th>Decompression<br>memory usage</th></tr>
 * <tr><td>100k</td><td>1300k</td><td>565k</td></tr>
 * <tr><td>200k</td><td>2200k</td><td>1065k</td></tr>
 * <tr><td>300k</td><td>3100k</td><td>1565k</td></tr>
 * <tr><td>400k</td><td>4000k</td><td>2065k</td></tr>
 * <tr><td>500k</td><td>4900k</td><td>2565k</td></tr>
 * <tr><td>600k</td><td>5800k</td><td>3065k</td></tr>
 * <tr><td>700k</td><td>6700k</td><td>3565k</td></tr>
 * <tr><td>800k</td><td>7600k</td><td>4065k</td></tr>
 * <tr><td>900k</td><td>8500k</td><td>4565k</td></tr>
 * </table>
 *

 * <p>For decompression {@code BZip2CompressorInputStream} allocates less
 * memory if the bzipped input is smaller than one block.</p>
 *

 * <p>Instances of this class are not threadsafe.</p>
 *

 * <p>TODO: Update to BZip2 1.0.1</p>
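 *
 * <p>A minimal usage sketch (illustration only; the buffer and data
 * shown are hypothetical):</p>
 *
 * <pre>{@code
 * byte[] data = "example data".getBytes(java.nio.charset.StandardCharsets.UTF_8);
 * java.io.ByteArrayOutputStream bos = new java.io.ByteArrayOutputStream();
 * try (BZip2CompressorOutputStream bzOut = new BZip2CompressorOutputStream(bos)) {
 *     bzOut.write(data, 0, data.length);
 * } // close() releases the large internal buffers
 * byte[] compressed = bos.toByteArray();
 * }</pre>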

* @NotThreadSafe */ public class BZip2CompressorOutputStream extends CompressorOutputStream implements BZip2Constants { /** * The minimum supported blocksize {@code == 1}. */ public static final int MIN_BLOCKSIZE = 1; /** * The maximum supported blocksize {@code == 9}. */ public static final int MAX_BLOCKSIZE = 9; private static final int GREATER_ICOST = 15; private static final int LESSER_ICOST = 0; private static void hbMakeCodeLengths(final byte[] len, final int[] freq, final Data dat, final int alphaSize, final int maxLen) { /* * Nodes and heap entries run from 1. Entry 0 for both the heap and * nodes is a sentinel. */ final int[] heap = dat.heap; final int[] weight = dat.weight; final int[] parent = dat.parent; for (int i = alphaSize; --i >= 0;) { weight[i + 1] = (freq[i] == 0 ? 1 : freq[i]) << 8; } for (boolean tooLong = true; tooLong;) { tooLong = false; int nNodes = alphaSize; int nHeap = 0; heap[0] = 0; weight[0] = 0; parent[0] = -2; for (int i = 1; i <= alphaSize; i++) { parent[i] = -1; nHeap++; heap[nHeap] = i; int zz = nHeap; int tmp = heap[zz]; while (weight[tmp] < weight[heap[zz >> 1]]) { heap[zz] = heap[zz >> 1]; zz >>= 1; } heap[zz] = tmp; } while (nHeap > 1) { int n1 = heap[1]; heap[1] = heap[nHeap]; nHeap--; int yy = 0; int zz = 1; int tmp = heap[1]; while (true) { yy = zz << 1; if (yy > nHeap) { break; } if ((yy < nHeap) && (weight[heap[yy + 1]] < weight[heap[yy]])) { yy++; } if (weight[tmp] < weight[heap[yy]]) { break; } heap[zz] = heap[yy]; zz = yy; } heap[zz] = tmp; int n2 = heap[1]; heap[1] = heap[nHeap]; nHeap--; yy = 0; zz = 1; tmp = heap[1]; while (true) { yy = zz << 1; if (yy > nHeap) { break; } if ((yy < nHeap) && (weight[heap[yy + 1]] < weight[heap[yy]])) { yy++; } if (weight[tmp] < weight[heap[yy]]) { break; } heap[zz] = heap[yy]; zz = yy; } heap[zz] = tmp; nNodes++; parent[n1] = parent[n2] = nNodes; final int weight_n1 = weight[n1]; final int weight_n2 = weight[n2]; weight[nNodes] = ((weight_n1 & 0xffffff00) + (weight_n2 & 0xffffff00)) | (1 + (((weight_n1 & 0x000000ff) > (weight_n2 & 0x000000ff)) ? (weight_n1 & 0x000000ff) : (weight_n2 & 0x000000ff))); parent[nNodes] = -1; nHeap++; heap[nHeap] = nNodes; tmp = 0; zz = nHeap; tmp = heap[zz]; final int weight_tmp = weight[tmp]; while (weight_tmp < weight[heap[zz >> 1]]) { heap[zz] = heap[zz >> 1]; zz >>= 1; } heap[zz] = tmp; } for (int i = 1; i <= alphaSize; i++) { int j = 0; int k = i; for (int parent_k; (parent_k = parent[k]) >= 0;) { k = parent_k; j++; } len[i - 1] = (byte) j; if (j > maxLen) { tooLong = true; } } if (tooLong) { for (int i = 1; i < alphaSize; i++) { int j = weight[i] >> 8; j = 1 + (j >> 1); weight[i] = j << 8; } } } } /** * Index of the last char in the block, so the block size == last + 1. */ private int last; /** * Always: in the range 0 .. 9. The current block size is 100000 * this * number. */ private final int blockSize100k; private int bsBuff; private int bsLive; private final CRC crc = new CRC(); private int nInUse; private int nMTF; private int currentChar = -1; private int runLength = 0; private int blockCRC; private int combinedCRC; private final int allowableBlockSize; /** * All memory intensive stuff. */ private Data data; private BlockSort blockSorter; private OutputStream out; /** * Chooses a blocksize based on the given length of the data to compress. * * @return The blocksize, between {@link #MIN_BLOCKSIZE} and * {@link #MAX_BLOCKSIZE} both inclusive. For a negative * {@code inputLength} this method returns {@code MAX_BLOCKSIZE} * always. 
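 * <p>For example (illustrative arithmetic): an {@code inputLength} of
 * 500,000 bytes gives {@code 500000 / 132000 + 1 == 4}, i.e. a 400k
 * blocksize.</p>
 *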
* * @param inputLength * The length of the data which will be compressed by * {@code BZip2CompressorOutputStream}. */ public static int chooseBlockSize(long inputLength) { return (inputLength > 0) ? (int) Math .min((inputLength / 132000) + 1, 9) : MAX_BLOCKSIZE; } /** * Constructs a new {@code BZip2CompressorOutputStream} with a blocksize of 900k. * * @param out * the destination stream. * * @throws IOException * if an I/O error occurs in the specified stream. * @throws NullPointerException * if out == null. */ public BZip2CompressorOutputStream(final OutputStream out) throws IOException { this(out, MAX_BLOCKSIZE); } /** * Constructs a new {@code BZip2CompressorOutputStream} with specified blocksize. * * @param out * the destination stream. * @param blockSize * the blockSize as 100k units. * * @throws IOException * if an I/O error occurs in the specified stream. * @throws IllegalArgumentException * if (blockSize < 1) || (blockSize > 9). * @throws NullPointerException * if out == null. * * @see #MIN_BLOCKSIZE * @see #MAX_BLOCKSIZE */ public BZip2CompressorOutputStream(final OutputStream out, final int blockSize) throws IOException { if (blockSize < 1) { throw new IllegalArgumentException("blockSize(" + blockSize + ") < 1"); } if (blockSize > 9) { throw new IllegalArgumentException("blockSize(" + blockSize + ") > 9"); } this.blockSize100k = blockSize; this.out = out; /* 20 is just a paranoia constant */ this.allowableBlockSize = (this.blockSize100k * BZip2Constants.BASEBLOCKSIZE) - 20; init(); } @Override public void write(final int b) throws IOException { if (this.out != null) { write0(b); } else { throw new IOException("closed"); } } /** * Writes the current byte to the buffer, run-length encoding it * if it has been repeated at least four times (the first step * RLEs sequences of four identical bytes). * *
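 * <p>Worked example (illustration only): a run of six identical 'A'
 * bytes is stored as the four literals 'A' 'A' 'A' 'A' followed by a
 * count byte holding 2, the run length minus four.</p>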

 * <p>Flushes the current block before writing data if it is full.</p>
 *
 * <p>"Write to the buffer" means adding to data.buffer starting
 * two steps "after" this.last - initially starting at index 1
 * (not 0) - and updating this.last to point to the last index
 * written minus 1.</p>

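 * * <p>Worked example (an illustrative note, not part of the original Javadoc): a run of seven identical bytes reaches this method with runLength == 7; the default branch below stores the four literal bytes followed by a count byte of 7 - 4 == 3.</p>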
*/ private void writeRun() throws IOException { final int lastShadow = this.last; if (lastShadow < this.allowableBlockSize) { final int currentCharShadow = this.currentChar; final Data dataShadow = this.data; dataShadow.inUse[currentCharShadow] = true; final byte ch = (byte) currentCharShadow; int runLengthShadow = this.runLength; this.crc.updateCRC(currentCharShadow, runLengthShadow); switch (runLengthShadow) { case 1: dataShadow.block[lastShadow + 2] = ch; this.last = lastShadow + 1; break; case 2: dataShadow.block[lastShadow + 2] = ch; dataShadow.block[lastShadow + 3] = ch; this.last = lastShadow + 2; break; case 3: { final byte[] block = dataShadow.block; block[lastShadow + 2] = ch; block[lastShadow + 3] = ch; block[lastShadow + 4] = ch; this.last = lastShadow + 3; } break; default: { runLengthShadow -= 4; dataShadow.inUse[runLengthShadow] = true; final byte[] block = dataShadow.block; block[lastShadow + 2] = ch; block[lastShadow + 3] = ch; block[lastShadow + 4] = ch; block[lastShadow + 5] = ch; block[lastShadow + 6] = (byte) runLengthShadow; this.last = lastShadow + 5; } break; } } else { endBlock(); initBlock(); writeRun(); } } /** * Overridden to close the stream. */ @Override protected void finalize() throws Throwable { finish(); super.finalize(); } public void finish() throws IOException { if (out != null) { try { if (this.runLength > 0) { writeRun(); } this.currentChar = -1; endBlock(); endCompression(); } finally { this.out = null; this.data = null; this.blockSorter = null; } } } @Override public void close() throws IOException { if (out != null) { OutputStream outShadow = this.out; finish(); outShadow.close(); } } @Override public void flush() throws IOException { OutputStream outShadow = this.out; if (outShadow != null) { outShadow.flush(); } } /** * Writes magic bytes like BZ at the first position of the stream * and bytes indicating the file-format, which is * huffmanised, followed by a digit indicating blockSize100k. * @throws IOException if the magic bytes could not be written */ private void init() throws IOException { bsPutUByte('B'); bsPutUByte('Z'); this.data = new Data(this.blockSize100k); this.blockSorter = new BlockSort(this.data); // huffmanised magic bytes bsPutUByte('h'); bsPutUByte('0' + this.blockSize100k); this.combinedCRC = 0; initBlock(); } private void initBlock() { // blockNo++; this.crc.initialiseCRC(); this.last = -1; // ch = 0; boolean[] inUse = this.data.inUse; for (int i = 256; --i >= 0;) { inUse[i] = false; } } private void endBlock() throws IOException { this.blockCRC = this.crc.getFinalCRC(); this.combinedCRC = (this.combinedCRC << 1) | (this.combinedCRC >>> 31); this.combinedCRC ^= this.blockCRC; // empty block at end of file if (this.last == -1) { return; } /* sort the block and establish posn of original string */ blockSort(); /* * A 6-byte block header, the value chosen arbitrarily as 0x314159265359 * :-). A 32 bit value does not really give a strong enough guarantee * that the value will not appear by chance in the compressed * datastream. Worst-case probability of this event, for a 900k block, * is about 2.0e-3 for 32 bits, 1.0e-5 for 40 bits and 4.0e-8 for 48 * bits. For a compressed file of size 100Gb -- about 100000 blocks -- * only a 48-bit marker will do. NB: normal compression/decompression * do not rely on these statistical properties. They are only important * when trying to recover blocks from damaged files.
*/ bsPutUByte(0x31); bsPutUByte(0x41); bsPutUByte(0x59); bsPutUByte(0x26); bsPutUByte(0x53); bsPutUByte(0x59); /* Now the block's CRC, so it is in a known place. */ bsPutInt(this.blockCRC); /* Now a single bit indicating no randomisation. */ bsW(1, 0); /* Finally, block's contents proper. */ moveToFrontCodeAndSend(); } private void endCompression() throws IOException { /* * Now another magic 48-bit number, 0x177245385090, to indicate the end * of the last block. (sqrt(pi), if you want to know. I did want to use * e, but it contains too much repetition -- 27 18 28 18 28 46 -- for me * to feel statistically comfortable. Call me paranoid.) */ bsPutUByte(0x17); bsPutUByte(0x72); bsPutUByte(0x45); bsPutUByte(0x38); bsPutUByte(0x50); bsPutUByte(0x90); bsPutInt(this.combinedCRC); bsFinishedWithStream(); } /** * Returns the blocksize parameter specified at construction time. */ public final int getBlockSize() { return this.blockSize100k; } @Override public void write(final byte[] buf, int offs, final int len) throws IOException { if (offs < 0) { throw new IndexOutOfBoundsException("offs(" + offs + ") < 0."); } if (len < 0) { throw new IndexOutOfBoundsException("len(" + len + ") < 0."); } if (offs + len > buf.length) { throw new IndexOutOfBoundsException("offs(" + offs + ") + len(" + len + ") > buf.length(" + buf.length + ")."); } if (this.out == null) { throw new IOException("stream closed"); } for (int hi = offs + len; offs < hi;) { write0(buf[offs++]); } } /** * Keeps track of the last bytes written and implicitly performs * run-length encoding as the first step of the bzip2 algorithm. */ private void write0(int b) throws IOException { if (this.currentChar != -1) { b &= 0xff; if (this.currentChar == b) { if (++this.runLength > 254) { writeRun(); this.currentChar = -1; this.runLength = 0; } // else nothing to do } else { writeRun(); this.runLength = 1; this.currentChar = b; } } else { this.currentChar = b & 0xff; this.runLength++; } } private static void hbAssignCodes(final int[] code, final byte[] length, final int minLen, final int maxLen, final int alphaSize) { int vec = 0; for (int n = minLen; n <= maxLen; n++) { for (int i = 0; i < alphaSize; i++) { if ((length[i] & 0xff) == n) { code[i] = vec; vec++; } } vec <<= 1; } } private void bsFinishedWithStream() throws IOException { while (this.bsLive > 0) { int ch = this.bsBuff >> 24; this.out.write(ch); // write 8-bit this.bsBuff <<= 8; this.bsLive -= 8; } } private void bsW(final int n, final int v) throws IOException { final OutputStream outShadow = this.out; int bsLiveShadow = this.bsLive; int bsBuffShadow = this.bsBuff; while (bsLiveShadow >= 8) { outShadow.write(bsBuffShadow >> 24); // write 8-bit bsBuffShadow <<= 8; bsLiveShadow -= 8; } this.bsBuff = bsBuffShadow | (v << (32 - bsLiveShadow - n)); this.bsLive = bsLiveShadow + n; } private void bsPutUByte(final int c) throws IOException { bsW(8, c); } private void bsPutInt(final int u) throws IOException { bsW(8, (u >> 24) & 0xff); bsW(8, (u >> 16) & 0xff); bsW(8, (u >> 8) & 0xff); bsW(8, u & 0xff); } private void sendMTFValues() throws IOException { final byte[][] len = this.data.sendMTFValues_len; final int alphaSize = this.nInUse + 2; for (int t = N_GROUPS; --t >= 0;) { byte[] len_t = len[t]; for (int v = alphaSize; --v >= 0;) { len_t[v] = GREATER_ICOST; } } /* Decide how many coding tables to use */ // assert (this.nMTF > 0) : this.nMTF; final int nGroups = (this.nMTF < 200) ? 2 : (this.nMTF < 600) ? 3 : (this.nMTF < 1200) ? 4 : (this.nMTF < 2400) ? 
5 : 6; /* Generate an initial set of coding tables */ sendMTFValues0(nGroups, alphaSize); /* * Iterate up to N_ITERS times to improve the tables. */ final int nSelectors = sendMTFValues1(nGroups, alphaSize); /* Compute MTF values for the selectors. */ sendMTFValues2(nGroups, nSelectors); /* Assign actual codes for the tables. */ sendMTFValues3(nGroups, alphaSize); /* Transmit the mapping table. */ sendMTFValues4(); /* Now the selectors. */ sendMTFValues5(nGroups, nSelectors); /* Now the coding tables. */ sendMTFValues6(nGroups, alphaSize); /* And finally, the block data proper */ sendMTFValues7(); } private void sendMTFValues0(final int nGroups, final int alphaSize) { final byte[][] len = this.data.sendMTFValues_len; final int[] mtfFreq = this.data.mtfFreq; int remF = this.nMTF; int gs = 0; for (int nPart = nGroups; nPart > 0; nPart--) { final int tFreq = remF / nPart; int ge = gs - 1; int aFreq = 0; for (final int a = alphaSize - 1; (aFreq < tFreq) && (ge < a);) { aFreq += mtfFreq[++ge]; } if ((ge > gs) && (nPart != nGroups) && (nPart != 1) && (((nGroups - nPart) & 1) != 0)) { aFreq -= mtfFreq[ge--]; } final byte[] len_np = len[nPart - 1]; for (int v = alphaSize; --v >= 0;) { if ((v >= gs) && (v <= ge)) { len_np[v] = LESSER_ICOST; } else { len_np[v] = GREATER_ICOST; } } gs = ge + 1; remF -= aFreq; } } private int sendMTFValues1(final int nGroups, final int alphaSize) { final Data dataShadow = this.data; final int[][] rfreq = dataShadow.sendMTFValues_rfreq; final int[] fave = dataShadow.sendMTFValues_fave; final short[] cost = dataShadow.sendMTFValues_cost; final char[] sfmap = dataShadow.sfmap; final byte[] selector = dataShadow.selector; final byte[][] len = dataShadow.sendMTFValues_len; final byte[] len_0 = len[0]; final byte[] len_1 = len[1]; final byte[] len_2 = len[2]; final byte[] len_3 = len[3]; final byte[] len_4 = len[4]; final byte[] len_5 = len[5]; final int nMTFShadow = this.nMTF; int nSelectors = 0; for (int iter = 0; iter < N_ITERS; iter++) { for (int t = nGroups; --t >= 0;) { fave[t] = 0; int[] rfreqt = rfreq[t]; for (int i = alphaSize; --i >= 0;) { rfreqt[i] = 0; } } nSelectors = 0; for (int gs = 0; gs < this.nMTF;) { /* Set group start & end marks. */ /* * Calculate the cost of this group as coded by each of the * coding tables. */ final int ge = Math.min(gs + G_SIZE - 1, nMTFShadow - 1); if (nGroups == N_GROUPS) { // unrolled version of the else-block short cost0 = 0; short cost1 = 0; short cost2 = 0; short cost3 = 0; short cost4 = 0; short cost5 = 0; for (int i = gs; i <= ge; i++) { final int icv = sfmap[i]; cost0 += len_0[icv] & 0xff; cost1 += len_1[icv] & 0xff; cost2 += len_2[icv] & 0xff; cost3 += len_3[icv] & 0xff; cost4 += len_4[icv] & 0xff; cost5 += len_5[icv] & 0xff; } cost[0] = cost0; cost[1] = cost1; cost[2] = cost2; cost[3] = cost3; cost[4] = cost4; cost[5] = cost5; } else { for (int t = nGroups; --t >= 0;) { cost[t] = 0; } for (int i = gs; i <= ge; i++) { final int icv = sfmap[i]; for (int t = nGroups; --t >= 0;) { cost[t] += len[t][icv] & 0xff; } } } /* * Find the coding table which is best for this group, and * record its identity in the selector table. */ int bt = -1; for (int t = nGroups, bc = 999999999; --t >= 0;) { final int cost_t = cost[t]; if (cost_t < bc) { bc = cost_t; bt = t; } } fave[bt]++; selector[nSelectors] = (byte) bt; nSelectors++; /* * Increment the symbol frequencies for the selected table. 
*/ final int[] rfreq_bt = rfreq[bt]; for (int i = gs; i <= ge; i++) { rfreq_bt[sfmap[i]]++; } gs = ge + 1; } /* * Recompute the tables based on the accumulated frequencies. */ for (int t = 0; t < nGroups; t++) { hbMakeCodeLengths(len[t], rfreq[t], this.data, alphaSize, 20); } } return nSelectors; } private void sendMTFValues2(final int nGroups, final int nSelectors) { // assert (nGroups < 8) : nGroups; final Data dataShadow = this.data; byte[] pos = dataShadow.sendMTFValues2_pos; for (int i = nGroups; --i >= 0;) { pos[i] = (byte) i; } for (int i = 0; i < nSelectors; i++) { final byte ll_i = dataShadow.selector[i]; byte tmp = pos[0]; int j = 0; while (ll_i != tmp) { j++; byte tmp2 = tmp; tmp = pos[j]; pos[j] = tmp2; } pos[0] = tmp; dataShadow.selectorMtf[i] = (byte) j; } } private void sendMTFValues3(final int nGroups, final int alphaSize) { int[][] code = this.data.sendMTFValues_code; byte[][] len = this.data.sendMTFValues_len; for (int t = 0; t < nGroups; t++) { int minLen = 32; int maxLen = 0; final byte[] len_t = len[t]; for (int i = alphaSize; --i >= 0;) { final int l = len_t[i] & 0xff; if (l > maxLen) { maxLen = l; } if (l < minLen) { minLen = l; } } // assert (maxLen <= 20) : maxLen; // assert (minLen >= 1) : minLen; hbAssignCodes(code[t], len[t], minLen, maxLen, alphaSize); } } private void sendMTFValues4() throws IOException { final boolean[] inUse = this.data.inUse; final boolean[] inUse16 = this.data.sentMTFValues4_inUse16; for (int i = 16; --i >= 0;) { inUse16[i] = false; final int i16 = i * 16; for (int j = 16; --j >= 0;) { if (inUse[i16 + j]) { inUse16[i] = true; } } } for (int i = 0; i < 16; i++) { bsW(1, inUse16[i] ? 1 : 0); } final OutputStream outShadow = this.out; int bsLiveShadow = this.bsLive; int bsBuffShadow = this.bsBuff; for (int i = 0; i < 16; i++) { if (inUse16[i]) { final int i16 = i * 16; for (int j = 0; j < 16; j++) { // inlined: bsW(1, inUse[i16 + j] ? 
1 : 0); while (bsLiveShadow >= 8) { outShadow.write(bsBuffShadow >> 24); // write 8-bit bsBuffShadow <<= 8; bsLiveShadow -= 8; } if (inUse[i16 + j]) { bsBuffShadow |= 1 << (32 - bsLiveShadow - 1); } bsLiveShadow++; } } } this.bsBuff = bsBuffShadow; this.bsLive = bsLiveShadow; } private void sendMTFValues5(final int nGroups, final int nSelectors) throws IOException { bsW(3, nGroups); bsW(15, nSelectors); final OutputStream outShadow = this.out; final byte[] selectorMtf = this.data.selectorMtf; int bsLiveShadow = this.bsLive; int bsBuffShadow = this.bsBuff; for (int i = 0; i < nSelectors; i++) { for (int j = 0, hj = selectorMtf[i] & 0xff; j < hj; j++) { // inlined: bsW(1, 1); while (bsLiveShadow >= 8) { outShadow.write(bsBuffShadow >> 24); bsBuffShadow <<= 8; bsLiveShadow -= 8; } bsBuffShadow |= 1 << (32 - bsLiveShadow - 1); bsLiveShadow++; } // inlined: bsW(1, 0); while (bsLiveShadow >= 8) { outShadow.write(bsBuffShadow >> 24); bsBuffShadow <<= 8; bsLiveShadow -= 8; } // bsBuffShadow |= 0 << (32 - bsLiveShadow - 1); bsLiveShadow++; } this.bsBuff = bsBuffShadow; this.bsLive = bsLiveShadow; } private void sendMTFValues6(final int nGroups, final int alphaSize) throws IOException { final byte[][] len = this.data.sendMTFValues_len; final OutputStream outShadow = this.out; int bsLiveShadow = this.bsLive; int bsBuffShadow = this.bsBuff; for (int t = 0; t < nGroups; t++) { byte[] len_t = len[t]; int curr = len_t[0] & 0xff; // inlined: bsW(5, curr); while (bsLiveShadow >= 8) { outShadow.write(bsBuffShadow >> 24); // write 8-bit bsBuffShadow <<= 8; bsLiveShadow -= 8; } bsBuffShadow |= curr << (32 - bsLiveShadow - 5); bsLiveShadow += 5; for (int i = 0; i < alphaSize; i++) { int lti = len_t[i] & 0xff; while (curr < lti) { // inlined: bsW(2, 2); while (bsLiveShadow >= 8) { outShadow.write(bsBuffShadow >> 24); // write 8-bit bsBuffShadow <<= 8; bsLiveShadow -= 8; } bsBuffShadow |= 2 << (32 - bsLiveShadow - 2); bsLiveShadow += 2; curr++; /* 10 */ } while (curr > lti) { // inlined: bsW(2, 3); while (bsLiveShadow >= 8) { outShadow.write(bsBuffShadow >> 24); // write 8-bit bsBuffShadow <<= 8; bsLiveShadow -= 8; } bsBuffShadow |= 3 << (32 - bsLiveShadow - 2); bsLiveShadow += 2; curr--; /* 11 */ } // inlined: bsW(1, 0); while (bsLiveShadow >= 8) { outShadow.write(bsBuffShadow >> 24); // write 8-bit bsBuffShadow <<= 8; bsLiveShadow -= 8; } // bsBuffShadow |= 0 << (32 - bsLiveShadow - 1); bsLiveShadow++; } } this.bsBuff = bsBuffShadow; this.bsLive = bsLiveShadow; } private void sendMTFValues7() throws IOException { final Data dataShadow = this.data; final byte[][] len = dataShadow.sendMTFValues_len; final int[][] code = dataShadow.sendMTFValues_code; final OutputStream outShadow = this.out; final byte[] selector = dataShadow.selector; final char[] sfmap = dataShadow.sfmap; final int nMTFShadow = this.nMTF; int selCtr = 0; int bsLiveShadow = this.bsLive; int bsBuffShadow = this.bsBuff; for (int gs = 0; gs < nMTFShadow;) { final int ge = Math.min(gs + G_SIZE - 1, nMTFShadow - 1); final int selector_selCtr = selector[selCtr] & 0xff; final int[] code_selCtr = code[selector_selCtr]; final byte[] len_selCtr = len[selector_selCtr]; while (gs <= ge) { final int sfmap_i = sfmap[gs]; // // inlined: bsW(len_selCtr[sfmap_i] & 0xff, // code_selCtr[sfmap_i]); // while (bsLiveShadow >= 8) { outShadow.write(bsBuffShadow >> 24); bsBuffShadow <<= 8; bsLiveShadow -= 8; } final int n = len_selCtr[sfmap_i] & 0xFF; bsBuffShadow |= code_selCtr[sfmap_i] << (32 - bsLiveShadow - n); bsLiveShadow += n; gs++; } gs = ge + 1; selCtr++; } 
this.bsBuff = bsBuffShadow; this.bsLive = bsLiveShadow; } private void moveToFrontCodeAndSend() throws IOException { bsW(24, this.data.origPtr); generateMTFValues(); sendMTFValues(); } private void blockSort() { blockSorter.blockSort(data, last); } /* * Performs Move-To-Front on the Burrows-Wheeler transformed * buffer, storing the MTFed data in data.sfmap in RUNA/RUNB * run-length-encoded form. * *
<p>Keeps track of byte frequencies in data.mtfFreq at the same time.</p>
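 * * <p>Worked example (an illustrative note, not part of the original comment): runs of the zero symbol are coded in bijective base-2 with RUNA and RUNB as the digits, least significant digit first, so run lengths 1, 2, 3 and 4 come out as RUNA; RUNB; RUNA,RUNA; RUNB,RUNA.</p>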
*/ private void generateMTFValues() { final int lastShadow = this.last; final Data dataShadow = this.data; final boolean[] inUse = dataShadow.inUse; final byte[] block = dataShadow.block; final int[] fmap = dataShadow.fmap; final char[] sfmap = dataShadow.sfmap; final int[] mtfFreq = dataShadow.mtfFreq; final byte[] unseqToSeq = dataShadow.unseqToSeq; final byte[] yy = dataShadow.generateMTFValues_yy; // make maps int nInUseShadow = 0; for (int i = 0; i < 256; i++) { if (inUse[i]) { unseqToSeq[i] = (byte) nInUseShadow; nInUseShadow++; } } this.nInUse = nInUseShadow; final int eob = nInUseShadow + 1; for (int i = eob; i >= 0; i--) { mtfFreq[i] = 0; } for (int i = nInUseShadow; --i >= 0;) { yy[i] = (byte) i; } int wr = 0; int zPend = 0; for (int i = 0; i <= lastShadow; i++) { final byte ll_i = unseqToSeq[block[fmap[i]] & 0xff]; byte tmp = yy[0]; int j = 0; while (ll_i != tmp) { j++; byte tmp2 = tmp; tmp = yy[j]; yy[j] = tmp2; } yy[0] = tmp; if (j == 0) { zPend++; } else { if (zPend > 0) { zPend--; while (true) { if ((zPend & 1) == 0) { sfmap[wr] = RUNA; wr++; mtfFreq[RUNA]++; } else { sfmap[wr] = RUNB; wr++; mtfFreq[RUNB]++; } if (zPend >= 2) { zPend = (zPend - 2) >> 1; } else { break; } } zPend = 0; } sfmap[wr] = (char) (j + 1); wr++; mtfFreq[j + 1]++; } } if (zPend > 0) { zPend--; while (true) { if ((zPend & 1) == 0) { sfmap[wr] = RUNA; wr++; mtfFreq[RUNA]++; } else { sfmap[wr] = RUNB; wr++; mtfFreq[RUNB]++; } if (zPend >= 2) { zPend = (zPend - 2) >> 1; } else { break; } } } sfmap[wr] = (char) eob; mtfFreq[eob]++; this.nMTF = wr + 1; } static final class Data { // with blockSize 900k /* maps unsigned byte => "does it occur in block" */ final boolean[] inUse = new boolean[256]; // 256 byte final byte[] unseqToSeq = new byte[256]; // 256 byte final int[] mtfFreq = new int[MAX_ALPHA_SIZE]; // 1032 byte final byte[] selector = new byte[MAX_SELECTORS]; // 18002 byte final byte[] selectorMtf = new byte[MAX_SELECTORS]; // 18002 byte final byte[] generateMTFValues_yy = new byte[256]; // 256 byte final byte[][] sendMTFValues_len = new byte[N_GROUPS][MAX_ALPHA_SIZE]; // 1548 // byte final int[][] sendMTFValues_rfreq = new int[N_GROUPS][MAX_ALPHA_SIZE]; // 6192 // byte final int[] sendMTFValues_fave = new int[N_GROUPS]; // 24 byte final short[] sendMTFValues_cost = new short[N_GROUPS]; // 12 byte final int[][] sendMTFValues_code = new int[N_GROUPS][MAX_ALPHA_SIZE]; // 6192 // byte final byte[] sendMTFValues2_pos = new byte[N_GROUPS]; // 6 byte final boolean[] sentMTFValues4_inUse16 = new boolean[16]; // 16 byte final int[] heap = new int[MAX_ALPHA_SIZE + 2]; // 1040 byte final int[] weight = new int[MAX_ALPHA_SIZE * 2]; // 2064 byte final int[] parent = new int[MAX_ALPHA_SIZE * 2]; // 2064 byte // ------------ // 333408 byte /* holds the RLEd block of original data starting at index 1. * After sorting the last byte added to the buffer is at index * 0. */ final byte[] block; // 900021 byte /* maps index in Burrows-Wheeler transformed block => index of * byte in original block */ final int[] fmap; // 3600000 byte final char[] sfmap; // 3600000 byte // ------------ // 8433529 byte // ============ /** * Index of original line in Burrows-Wheeler table. * *
<p>This is the index in fmap that points to the last byte * of the original data.</p>
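 * * <p>Illustrative note (not part of the original comment): this value is transmitted at the start of the block data as a 24-bit integer, see the bsW(24, origPtr) call in moveToFrontCodeAndSend().</p>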
*/ int origPtr; Data(int blockSize100k) { final int n = blockSize100k * BZip2Constants.BASEBLOCKSIZE; this.block = new byte[(n + 1 + NUM_OVERSHOOT_BYTES)]; this.fmap = new int[n]; this.sfmap = new char[2 * n]; } } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.bzip2; /** * Constants for both the compress and decompress BZip2 classes. */ interface BZip2Constants { int BASEBLOCKSIZE = 100000; int MAX_ALPHA_SIZE = 258; int MAX_CODE_LEN = 23; int RUNA = 0; int RUNB = 1; int N_GROUPS = 6; int G_SIZE = 50; int N_ITERS = 4; int MAX_SELECTORS = (2 + (900000 / G_SIZE)); int NUM_OVERSHOOT_BYTES = 20; }/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.bzip2; import java.util.LinkedHashMap; import java.util.Map; import org.apache.commons.compress.compressors.FileNameUtil; /** * Utility code for the BZip2 compression format. * @ThreadSafe * @since 1.1 */ public abstract class BZip2Utils { private static final FileNameUtil fileNameUtil; static { Map<String, String> uncompressSuffix = new LinkedHashMap<String, String>(); // backwards compatibility: BZip2Utils never created the short // tbz form, so .tar.bz2 has to be added explicitly uncompressSuffix.put(".tar.bz2", ".tar"); uncompressSuffix.put(".tbz2", ".tar"); uncompressSuffix.put(".tbz", ".tar"); uncompressSuffix.put(".bz2", ""); uncompressSuffix.put(".bz", ""); fileNameUtil = new FileNameUtil(uncompressSuffix, ".bz2"); } /** Private constructor to prevent instantiation of this utility class. */ private BZip2Utils() { } /** * Detects common bzip2 suffixes in the given filename. * * @param filename name of a file * @return {@code true} if the filename has a common bzip2 suffix, * {@code false} otherwise */ public static boolean isCompressedFilename(String filename) { return fileNameUtil.isCompressedFilename(filename); } /** * Maps the given name of a bzip2-compressed file to the name that the * file should have after uncompression. 
Commonly used file type specific * suffixes like ".tbz" or ".tbz2" are automatically detected and * correctly mapped. For example the name "package.tbz2" is mapped to * "package.tar". And any filenames with the generic ".bz2" suffix * (or any other generic bzip2 suffix) are mapped to a name without that * suffix. If no bzip2 suffix is detected, then the filename is returned * unmapped. * * @param filename name of a file * @return name of the corresponding uncompressed file */ public static String getUncompressedFilename(String filename) { return fileNameUtil.getUncompressedFilename(filename); } /** * Maps the given filename to the name that the file should have after * compression with bzip2. Currently this method simply appends the suffix * ".bz2" to the filename based on the standard behaviour of the "bzip2" * program, but a future version may implement a more complex mapping if * a new widely used naming pattern emerges. * * @param filename name of a file * @return name of the corresponding compressed file */ public static String getCompressedFilename(String filename) { return fileNameUtil.getCompressedFilename(filename); } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors; /** * Compressor related exception */ public class CompressorException extends Exception { /** Serial */ private static final long serialVersionUID = -2932901310255908814L; /** * Constructs a new exception with the specified detail message. The cause * is not initialized. * * @param message * the detail message */ public CompressorException(String message) { super(message); } /** * Constructs a new exception with the specified detail message and cause. * * @param message * the detail message * @param cause * the cause */ public CompressorException(String message, Throwable cause) { super(message, cause); } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
*/ package org.apache.commons.compress.compressors; import java.io.InputStream; public abstract class CompressorInputStream extends InputStream { private long bytesRead = 0; /** * Increments the counter of already read bytes. * Doesn't increment if the EOF has been hit (read == -1) * * @param read the number of bytes read * * @since 1.1 */ protected void count(int read) { count((long) read); } /** * Increments the counter of already read bytes. * Doesn't increment if the EOF has been hit (read == -1) * * @param read the number of bytes read */ protected void count(long read) { if (read != -1) { bytesRead = bytesRead + read; } } /** * Decrements the counter of already read bytes. * * @param pushedBack the number of bytes pushed back. * @since 1.7 */ protected void pushedBackBytes(long pushedBack) { bytesRead -= pushedBack; } /** * Returns the current number of bytes read from this stream. * @return the number of read bytes * @deprecated this method may yield wrong results for large * archives, use #getBytesRead instead */ @Deprecated public int getCount() { return (int) bytesRead; } /** * Returns the current number of bytes read from this stream. * @return the number of read bytes * * @since 1.1 */ public long getBytesRead() { return bytesRead; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors; import java.io.OutputStream; public abstract class CompressorOutputStream extends OutputStream { // TODO } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
*/ package org.apache.commons.compress.compressors; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream; import org.apache.commons.compress.compressors.bzip2.BZip2CompressorOutputStream; import org.apache.commons.compress.compressors.deflate.DeflateCompressorInputStream; import org.apache.commons.compress.compressors.deflate.DeflateCompressorOutputStream; import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream; import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream; import org.apache.commons.compress.compressors.lzma.LZMACompressorInputStream; import org.apache.commons.compress.compressors.lzma.LZMAUtils; import org.apache.commons.compress.compressors.xz.XZCompressorInputStream; import org.apache.commons.compress.compressors.xz.XZCompressorOutputStream; import org.apache.commons.compress.compressors.xz.XZUtils; import org.apache.commons.compress.compressors.pack200.Pack200CompressorInputStream; import org.apache.commons.compress.compressors.pack200.Pack200CompressorOutputStream; import org.apache.commons.compress.compressors.snappy.FramedSnappyCompressorInputStream; import org.apache.commons.compress.compressors.snappy.SnappyCompressorInputStream; import org.apache.commons.compress.compressors.z.ZCompressorInputStream; import org.apache.commons.compress.utils.IOUtils; /** *
<p>Factory to create Compressor[In|Out]putStreams from names. To add other * implementations you should extend CompressorStreamFactory and override the * appropriate methods (and call their implementation from super of course).</p>
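 * * <p>A subclassing sketch (illustrative, not part of the original Javadoc; "MyFactory" is a hypothetical name):</p>
 * <pre>
 * class MyFactory extends CompressorStreamFactory {
 *     // overrides the name-based variant, then delegates to the default behaviour
 *     public CompressorInputStream createCompressorInputStream(String name, InputStream in)
 *             throws CompressorException {
 *         return super.createCompressorInputStream(name, in);
 *     }
 * }
 * </pre>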
 * * Example (Compressing a file): * * <pre>
 * final OutputStream out = new FileOutputStream(output); 
 * CompressorOutputStream cos = 
 *      new CompressorStreamFactory().createCompressorOutputStream(CompressorStreamFactory.BZIP2, out);
 * IOUtils.copy(new FileInputStream(input), cos);
 * cos.close();
 * </pre>
 * * Example (Decompressing a file): * <pre>
 * final InputStream is = new FileInputStream(input); 
 * CompressorInputStream in = 
 *      new CompressorStreamFactory().createCompressorInputStream(CompressorStreamFactory.BZIP2, is);
 * IOUtils.copy(in, new FileOutputStream(output));
 * in.close();
 * </pre>
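 * * Example (Decompressing a file while autodetecting the format; an illustrative sketch, not part of the original Javadoc - autodetection needs a stream that supports mark(), hence the BufferedInputStream):
 * <pre>
 * final InputStream is = new BufferedInputStream(new FileInputStream(input));
 * CompressorInputStream in = new CompressorStreamFactory().createCompressorInputStream(is);
 * IOUtils.copy(in, new FileOutputStream(output));
 * in.close();
 * </pre>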
 * @Immutable provided that the deprecated method setDecompressConcatenated is not used. * @ThreadSafe even if the deprecated method setDecompressConcatenated is used */ public class CompressorStreamFactory { /** * Constant (value {@value}) used to identify the BZIP2 compression algorithm. * @since 1.1 */ public static final String BZIP2 = "bzip2"; /** * Constant (value {@value}) used to identify the GZIP compression algorithm. * @since 1.1 */ public static final String GZIP = "gz"; /** * Constant (value {@value}) used to identify the PACK200 compression algorithm. * @since 1.3 */ public static final String PACK200 = "pack200"; /** * Constant (value {@value}) used to identify the XZ compression method. * @since 1.4 */ public static final String XZ = "xz"; /** * Constant (value {@value}) used to identify the LZMA compression method. * Not supported as an output stream type. * @since 1.6 */ public static final String LZMA = "lzma"; /** * Constant (value {@value}) used to identify the "framed" Snappy compression method. * Not supported as an output stream type. * @since 1.7 */ public static final String SNAPPY_FRAMED = "snappy-framed"; /** * Constant (value {@value}) used to identify the "raw" Snappy compression method. * Not supported as an output stream type. * @since 1.7 */ public static final String SNAPPY_RAW = "snappy-raw"; /** * Constant (value {@value}) used to identify the traditional Unix compress method. * Not supported as an output stream type. * @since 1.7 */ public static final String Z = "z"; /** * Constant (value {@value}) used to identify the Deflate compress method. * @since 1.9 */ public static final String DEFLATE = "deflate"; /** * If true, decompress until the end of the input. * If false, stop after the first stream and leave the * input position to point to the next byte after the stream */ private final Boolean decompressUntilEOF; // This is Boolean so setDecompressConcatenated can determine whether it has been set by the ctor // once the setDecompressConcatenated method has been removed, it can revert to boolean /** * If true, decompress until the end of the input. * If false, stop after the first stream and leave the * input position to point to the next byte after the stream */ private volatile boolean decompressConcatenated = false; /** * Create an instance with the decompress Concatenated option set to false. */ public CompressorStreamFactory() { this.decompressUntilEOF = null; } /** * Create an instance with the provided decompress Concatenated option. * @param decompressUntilEOF * if true, decompress until the end of the * input; if false, stop after the first * stream and leave the input position to point * to the next byte after the stream. * This setting applies to the gzip, bzip2 and xz formats only. * @since 1.10 */ public CompressorStreamFactory(boolean decompressUntilEOF) { this.decompressUntilEOF = Boolean.valueOf(decompressUntilEOF); // Also copy to existing variable so can continue to use that as the current value this.decompressConcatenated = decompressUntilEOF; } /** * Whether to decompress the full input or only the first stream * in formats supporting multiple concatenated input streams. * *
<p>This setting applies to the gzip, bzip2 and xz formats only.</p>
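 * * <p>Usage sketch (illustrative, not part of the original Javadoc): {@code new CompressorStreamFactory(true)} creates a factory whose input streams read all concatenated gzip/bzip2/xz streams instead of stopping after the first one; prefer that constructor over this deprecated setter.</p>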
 * * @param decompressConcatenated * if true, decompress until the end of the * input; if false, stop after the first * stream and leave the input position to point * to the next byte after the stream * @since 1.5 * @deprecated 1.10 use the {@link #CompressorStreamFactory(boolean)} constructor instead * @throws IllegalStateException if the constructor {@link #CompressorStreamFactory(boolean)} * was used to create the factory */ @Deprecated public void setDecompressConcatenated(boolean decompressConcatenated) { if (this.decompressUntilEOF != null) { throw new IllegalStateException("Cannot override the setting defined by the constructor"); } this.decompressConcatenated = decompressConcatenated; } /** * Create a compressor input stream from an input stream, autodetecting * the compressor type from the first few bytes of the stream. The InputStream * must support marks, like BufferedInputStream. * * @param in the input stream * @return the compressor input stream * @throws CompressorException if no compressor is found for the stream signature * @throws IllegalArgumentException if the stream is null or does not support mark * @since 1.1 */ public CompressorInputStream createCompressorInputStream(final InputStream in) throws CompressorException { if (in == null) { throw new IllegalArgumentException("Stream must not be null."); } if (!in.markSupported()) { throw new IllegalArgumentException("Mark is not supported."); } final byte[] signature = new byte[12]; in.mark(signature.length); try { int signatureLength = IOUtils.readFully(in, signature); in.reset(); if (BZip2CompressorInputStream.matches(signature, signatureLength)) { return new BZip2CompressorInputStream(in, decompressConcatenated); } if (GzipCompressorInputStream.matches(signature, signatureLength)) { return new GzipCompressorInputStream(in, decompressConcatenated); } if (Pack200CompressorInputStream.matches(signature, signatureLength)) { return new Pack200CompressorInputStream(in); } if (FramedSnappyCompressorInputStream.matches(signature, signatureLength)) { return new FramedSnappyCompressorInputStream(in); } if (ZCompressorInputStream.matches(signature, signatureLength)) { return new ZCompressorInputStream(in); } if (DeflateCompressorInputStream.matches(signature, signatureLength)) { return new DeflateCompressorInputStream(in); } if (XZUtils.matches(signature, signatureLength) && XZUtils.isXZCompressionAvailable()) { return new XZCompressorInputStream(in, decompressConcatenated); } if (LZMAUtils.matches(signature, signatureLength) && LZMAUtils.isLZMACompressionAvailable()) { return new LZMACompressorInputStream(in); } } catch (IOException e) { throw new CompressorException("Failed to detect Compressor from InputStream.", e); } throw new CompressorException("No Compressor found for the stream signature."); } /** * Create a compressor input stream from a compressor name and an input stream. * * @param name the name of the compressor, * i.e. 
{@value #GZIP}, {@value #BZIP2}, {@value #XZ}, {@value #LZMA}, * {@value #PACK200}, {@value #SNAPPY_RAW}, {@value #SNAPPY_FRAMED}, * {@value #Z} or {@value #DEFLATE} * @param in the input stream * @return compressor input stream * @throws CompressorException if the compressor name is not known * @throws IllegalArgumentException if the name or input stream is null */ public CompressorInputStream createCompressorInputStream(final String name, final InputStream in) throws CompressorException { if (name == null || in == null) { throw new IllegalArgumentException( "Compressor name and stream must not be null."); } try { if (GZIP.equalsIgnoreCase(name)) { return new GzipCompressorInputStream(in, decompressConcatenated); } if (BZIP2.equalsIgnoreCase(name)) { return new BZip2CompressorInputStream(in, decompressConcatenated); } if (XZ.equalsIgnoreCase(name)) { return new XZCompressorInputStream(in, decompressConcatenated); } if (LZMA.equalsIgnoreCase(name)) { return new LZMACompressorInputStream(in); } if (PACK200.equalsIgnoreCase(name)) { return new Pack200CompressorInputStream(in); } if (SNAPPY_RAW.equalsIgnoreCase(name)) { return new SnappyCompressorInputStream(in); } if (SNAPPY_FRAMED.equalsIgnoreCase(name)) { return new FramedSnappyCompressorInputStream(in); } if (Z.equalsIgnoreCase(name)) { return new ZCompressorInputStream(in); } if (DEFLATE.equalsIgnoreCase(name)) { return new DeflateCompressorInputStream(in); } } catch (IOException e) { throw new CompressorException( "Could not create CompressorInputStream.", e); } throw new CompressorException("Compressor: " + name + " not found."); } /** * Create a compressor output stream from a compressor name and an output stream. * * @param name the compressor name, * i.e. {@value #GZIP}, {@value #BZIP2}, {@value #XZ}, * {@value #PACK200} or {@value #DEFLATE} * @param out the output stream * @return the compressor output stream * @throws CompressorException if the compressor name is not known * @throws IllegalArgumentException if the compressor name or stream is null */ public CompressorOutputStream createCompressorOutputStream( final String name, final OutputStream out) throws CompressorException { if (name == null || out == null) { throw new IllegalArgumentException( "Compressor name and stream must not be null."); } try { if (GZIP.equalsIgnoreCase(name)) { return new GzipCompressorOutputStream(out); } if (BZIP2.equalsIgnoreCase(name)) { return new BZip2CompressorOutputStream(out); } if (XZ.equalsIgnoreCase(name)) { return new XZCompressorOutputStream(out); } if (PACK200.equalsIgnoreCase(name)) { return new Pack200CompressorOutputStream(out); } if (DEFLATE.equalsIgnoreCase(name)) { return new DeflateCompressorOutputStream(out); } } catch (IOException e) { throw new CompressorException( "Could not create CompressorOutputStream", e); } throw new CompressorException("Compressor: " + name + " not found."); } // For Unit tests boolean getDecompressConcatenated() { return decompressConcatenated; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.bzip2; /** * A simple class that holds and calculates the CRC for sanity checking of the * data. * @NotThreadSafe */ class CRC { private static final int crc32Table[] = { 0x00000000, 0x04c11db7, 0x09823b6e, 0x0d4326d9, 0x130476dc, 0x17c56b6b, 0x1a864db2, 0x1e475005, 0x2608edb8, 0x22c9f00f, 0x2f8ad6d6, 0x2b4bcb61, 0x350c9b64, 0x31cd86d3, 0x3c8ea00a, 0x384fbdbd, 0x4c11db70, 0x48d0c6c7, 0x4593e01e, 0x4152fda9, 0x5f15adac, 0x5bd4b01b, 0x569796c2, 0x52568b75, 0x6a1936c8, 0x6ed82b7f, 0x639b0da6, 0x675a1011, 0x791d4014, 0x7ddc5da3, 0x709f7b7a, 0x745e66cd, 0x9823b6e0, 0x9ce2ab57, 0x91a18d8e, 0x95609039, 0x8b27c03c, 0x8fe6dd8b, 0x82a5fb52, 0x8664e6e5, 0xbe2b5b58, 0xbaea46ef, 0xb7a96036, 0xb3687d81, 0xad2f2d84, 0xa9ee3033, 0xa4ad16ea, 0xa06c0b5d, 0xd4326d90, 0xd0f37027, 0xddb056fe, 0xd9714b49, 0xc7361b4c, 0xc3f706fb, 0xceb42022, 0xca753d95, 0xf23a8028, 0xf6fb9d9f, 0xfbb8bb46, 0xff79a6f1, 0xe13ef6f4, 0xe5ffeb43, 0xe8bccd9a, 0xec7dd02d, 0x34867077, 0x30476dc0, 0x3d044b19, 0x39c556ae, 0x278206ab, 0x23431b1c, 0x2e003dc5, 0x2ac12072, 0x128e9dcf, 0x164f8078, 0x1b0ca6a1, 0x1fcdbb16, 0x018aeb13, 0x054bf6a4, 0x0808d07d, 0x0cc9cdca, 0x7897ab07, 0x7c56b6b0, 0x71159069, 0x75d48dde, 0x6b93dddb, 0x6f52c06c, 0x6211e6b5, 0x66d0fb02, 0x5e9f46bf, 0x5a5e5b08, 0x571d7dd1, 0x53dc6066, 0x4d9b3063, 0x495a2dd4, 0x44190b0d, 0x40d816ba, 0xaca5c697, 0xa864db20, 0xa527fdf9, 0xa1e6e04e, 0xbfa1b04b, 0xbb60adfc, 0xb6238b25, 0xb2e29692, 0x8aad2b2f, 0x8e6c3698, 0x832f1041, 0x87ee0df6, 0x99a95df3, 0x9d684044, 0x902b669d, 0x94ea7b2a, 0xe0b41de7, 0xe4750050, 0xe9362689, 0xedf73b3e, 0xf3b06b3b, 0xf771768c, 0xfa325055, 0xfef34de2, 0xc6bcf05f, 0xc27dede8, 0xcf3ecb31, 0xcbffd686, 0xd5b88683, 0xd1799b34, 0xdc3abded, 0xd8fba05a, 0x690ce0ee, 0x6dcdfd59, 0x608edb80, 0x644fc637, 0x7a089632, 0x7ec98b85, 0x738aad5c, 0x774bb0eb, 0x4f040d56, 0x4bc510e1, 0x46863638, 0x42472b8f, 0x5c007b8a, 0x58c1663d, 0x558240e4, 0x51435d53, 0x251d3b9e, 0x21dc2629, 0x2c9f00f0, 0x285e1d47, 0x36194d42, 0x32d850f5, 0x3f9b762c, 0x3b5a6b9b, 0x0315d626, 0x07d4cb91, 0x0a97ed48, 0x0e56f0ff, 0x1011a0fa, 0x14d0bd4d, 0x19939b94, 0x1d528623, 0xf12f560e, 0xf5ee4bb9, 0xf8ad6d60, 0xfc6c70d7, 0xe22b20d2, 0xe6ea3d65, 0xeba91bbc, 0xef68060b, 0xd727bbb6, 0xd3e6a601, 0xdea580d8, 0xda649d6f, 0xc423cd6a, 0xc0e2d0dd, 0xcda1f604, 0xc960ebb3, 0xbd3e8d7e, 0xb9ff90c9, 0xb4bcb610, 0xb07daba7, 0xae3afba2, 0xaafbe615, 0xa7b8c0cc, 0xa379dd7b, 0x9b3660c6, 0x9ff77d71, 0x92b45ba8, 0x9675461f, 0x8832161a, 0x8cf30bad, 0x81b02d74, 0x857130c3, 0x5d8a9099, 0x594b8d2e, 0x5408abf7, 0x50c9b640, 0x4e8ee645, 0x4a4ffbf2, 0x470cdd2b, 0x43cdc09c, 0x7b827d21, 0x7f436096, 0x7200464f, 0x76c15bf8, 0x68860bfd, 0x6c47164a, 0x61043093, 0x65c52d24, 0x119b4be9, 0x155a565e, 0x18197087, 0x1cd86d30, 0x029f3d35, 0x065e2082, 0x0b1d065b, 0x0fdc1bec, 0x3793a651, 0x3352bbe6, 0x3e119d3f, 0x3ad08088, 0x2497d08d, 0x2056cd3a, 0x2d15ebe3, 0x29d4f654, 0xc5a92679, 0xc1683bce, 0xcc2b1d17, 0xc8ea00a0, 0xd6ad50a5, 0xd26c4d12, 0xdf2f6bcb, 0xdbee767c, 0xe3a1cbc1, 0xe760d676, 0xea23f0af, 0xeee2ed18, 0xf0a5bd1d, 0xf464a0aa, 0xf9278673, 0xfde69bc4, 0x89b8fd09, 0x8d79e0be, 0x803ac667, 0x84fbdbd0, 
0x9abc8bd5, 0x9e7d9662, 0x933eb0bb, 0x97ffad0c, 0xafb010b1, 0xab710d06, 0xa6322bdf, 0xa2f33668, 0xbcb4666d, 0xb8757bda, 0xb5365d03, 0xb1f740b4 }; CRC() { initialiseCRC(); } void initialiseCRC() { globalCrc = 0xffffffff; } int getFinalCRC() { return ~globalCrc; } int getGlobalCRC() { return globalCrc; } void setGlobalCRC(int newCrc) { globalCrc = newCrc; } void updateCRC(int inCh) { int temp = (globalCrc >> 24) ^ inCh; if (temp < 0) { temp = 256 + temp; } globalCrc = (globalCrc << 8) ^ CRC.crc32Table[temp]; } void updateCRC(int inCh, int repeat) { int globalCrcShadow = this.globalCrc; while (repeat-- > 0) { int temp = (globalCrcShadow >> 24) ^ inCh; globalCrcShadow = (globalCrcShadow << 8) ^ crc32Table[(temp >= 0) ? temp : (temp + 256)]; } this.globalCrc = globalCrcShadow; } private int globalCrc; }/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.deflate; import java.io.IOException; import java.io.InputStream; import java.util.zip.Inflater; import java.util.zip.InflaterInputStream; import org.apache.commons.compress.compressors.CompressorInputStream; /** * Deflate decompressor. * @since 1.9 */ public class DeflateCompressorInputStream extends CompressorInputStream { private static final int MAGIC_1 = 0x78; private static final int MAGIC_2a = 0x01; private static final int MAGIC_2b = 0x5e; private static final int MAGIC_2c = 0x9c; private static final int MAGIC_2d = 0xda; private final InputStream in; /** * Creates a new input stream that decompresses Deflate-compressed data * from the specified input stream. * * @param inputStream where to read the compressed data * */ public DeflateCompressorInputStream(InputStream inputStream) { this(inputStream, new DeflateParameters()); } /** * Creates a new input stream that decompresses Deflate-compressed data * from the specified input stream. * * @param inputStream where to read the compressed data * @param parameters parameters */ public DeflateCompressorInputStream(InputStream inputStream, DeflateParameters parameters) { in = new InflaterInputStream(inputStream, new Inflater(!parameters.withZlibHeader())); } /** {@inheritDoc} */ @Override public int read() throws IOException { int ret = in.read(); count(ret == -1 ? 0 : 1); return ret; } /** {@inheritDoc} */ @Override public int read(byte[] buf, int off, int len) throws IOException { int ret = in.read(buf, off, len); count(ret); return ret; } /** {@inheritDoc} */ @Override public long skip(long n) throws IOException { return in.skip(n); } /** {@inheritDoc} */ @Override public int available() throws IOException { return in.available(); } /** {@inheritDoc} */ @Override public void close() throws IOException { in.close(); } /** * Checks if the signature matches what is expected for a zlib / deflated file * with the zlib header. 
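 * * <p>Illustrative note (not part of the original Javadoc): 0x78 is the usual zlib CMF byte (deflate with a 32k window), and the accepted second bytes all make the two-byte header a multiple of 31 as required by RFC 1950, e.g. (0x78 * 256 + 0xda) % 31 == 0.</p>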
* * @param signature * the bytes to check * @param length * the number of bytes to check * @return true, if this stream is zlib / deflate compressed with a header * stream, false otherwise * * @since 1.10 */ public static boolean matches(byte[] signature, int length) { return length > 3 && signature[0] == MAGIC_1 && ( signature[1] == (byte) MAGIC_2a || signature[1] == (byte) MAGIC_2b || signature[1] == (byte) MAGIC_2c || signature[1] == (byte) MAGIC_2d); } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.deflate; import java.io.IOException; import java.io.OutputStream; import java.util.zip.Deflater; import java.util.zip.DeflaterOutputStream; import org.apache.commons.compress.compressors.CompressorOutputStream; /** * Deflate compressor. * @since 1.9 */ public class DeflateCompressorOutputStream extends CompressorOutputStream { private final DeflaterOutputStream out; /** * Creates a Deflate compressed output stream with the default parameters. * @param outputStream the stream to wrap * @throws IOException on error */ public DeflateCompressorOutputStream(OutputStream outputStream) throws IOException { this(outputStream, new DeflateParameters()); } /** * Creates a Deflate compressed output stream with the specified parameters. * @param outputStream the stream to wrap * @param parameters the deflate parameters to apply * @throws IOException on error */ public DeflateCompressorOutputStream(OutputStream outputStream, DeflateParameters parameters) throws IOException { this.out = new DeflaterOutputStream(outputStream, new Deflater(parameters.getCompressionLevel(), !parameters.withZlibHeader())); } @Override public void write(int b) throws IOException { out.write(b); } @Override public void write(byte[] buf, int off, int len) throws IOException { out.write(buf, off, len); } /** * Flushes the encoder and calls outputStream.flush(). * All buffered pending data will then be decompressible from * the output stream. Calling this function very often may increase * the compressed file size a lot. */ @Override public void flush() throws IOException { out.flush(); } /** * Finishes compression without closing the underlying stream. * No more data can be written to this stream after finishing. */ public void finish() throws IOException { out.finish(); } @Override public void close() throws IOException { out.close(); } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.deflate; import java.util.zip.Deflater; /** * Parameters for the Deflate compressor. * @since 1.9 */ public class DeflateParameters { private boolean zlibHeader = true; private int compressionLevel = Deflater.DEFAULT_COMPRESSION; /** * Whether or not the zlib header shall be written (when * compressing) or expected (when decompressing). */ public boolean withZlibHeader() { return zlibHeader; } /** * Sets the zlib header presence parameter. * *
<p>This affects whether or not the zlib header will be written * (when compressing) or expected (when decompressing).</p>
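 * * <p>Usage sketch (illustrative, not part of the original Javadoc; "sink" stands for any OutputStream):</p>
 * <pre>
 * DeflateParameters p = new DeflateParameters();
 * p.setWithZlibHeader(false); // raw deflate, no zlib wrapper
 * OutputStream os = new DeflateCompressorOutputStream(sink, p);
 * </pre>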
 * * @param zlibHeader whether the zlib header shall be written (when compressing) or expected (when decompressing) */ public void setWithZlibHeader(boolean zlibHeader) { this.zlibHeader = zlibHeader; } /** * The compression level. * @see #setCompressionLevel */ public int getCompressionLevel() { return compressionLevel; } /** * Sets the compression level. * * @param compressionLevel the compression level (between 0 and 9, or -1 for the default level) * @see Deflater#NO_COMPRESSION * @see Deflater#BEST_SPEED * @see Deflater#DEFAULT_COMPRESSION * @see Deflater#BEST_COMPRESSION */ public void setCompressionLevel(int compressionLevel) { if (compressionLevel < -1 || compressionLevel > 9) { throw new IllegalArgumentException("Invalid Deflate compression level: " + compressionLevel); } this.compressionLevel = compressionLevel; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors; import java.util.Collections; import java.util.HashMap; import java.util.Locale; import java.util.Map; /** * File name mapping code for the compression formats. * @ThreadSafe * @since 1.4 */ public class FileNameUtil { /** * Map from common filename suffixes to the suffixes that identify compressed * versions of those file types. For example: from ".tar" to ".tgz". */ private final Map<String, String> compressSuffix = new HashMap<String, String>(); /** * Map from common filename suffixes of compressed files to the * corresponding suffixes of uncompressed files. For example: from * ".tgz" to ".tar". *
<p>
 * This map also contains format-specific suffixes like ".gz" and "-z". * These suffixes are mapped to the empty string, as they should simply * be removed from the filename when the file is uncompressed. */ private final Map<String, String> uncompressSuffix; /** * Length of the longest compressed suffix. */ private final int longestCompressedSuffix; /** * Length of the shortest compressed suffix. */ private final int shortestCompressedSuffix; /** * Length of the longest uncompressed suffix. */ private final int longestUncompressedSuffix; /** * Length of the shortest uncompressed suffix longer than the * empty string. */ private final int shortestUncompressedSuffix; /** * The format's default extension. */ private final String defaultExtension; /** * Sets up the utility with a map of known compressed to * uncompressed suffix mappings and the default extension of the * format. * * @param uncompressSuffix Map from common filename suffixes of * compressed files to the corresponding suffixes of uncompressed * files. For example: from ".tgz" to ".tar". This map also * contains format-specific suffixes like ".gz" and "-z". These * suffixes are mapped to the empty string, as they should simply * be removed from the filename when the file is uncompressed. * * @param defaultExtension the format's default extension like ".gz" */ public FileNameUtil(Map<String, String> uncompressSuffix, String defaultExtension) { this.uncompressSuffix = Collections.unmodifiableMap(uncompressSuffix); int lc = Integer.MIN_VALUE, sc = Integer.MAX_VALUE; int lu = Integer.MIN_VALUE, su = Integer.MAX_VALUE; for (Map.Entry<String, String> ent : uncompressSuffix.entrySet()) { int cl = ent.getKey().length(); if (cl > lc) { lc = cl; } if (cl < sc) { sc = cl; } String u = ent.getValue(); int ul = u.length(); if (ul > 0) { if (!compressSuffix.containsKey(u)) { compressSuffix.put(u, ent.getKey()); } if (ul > lu) { lu = ul; } if (ul < su) { su = ul; } } } longestCompressedSuffix = lc; longestUncompressedSuffix = lu; shortestCompressedSuffix = sc; shortestUncompressedSuffix = su; this.defaultExtension = defaultExtension; } /** * Detects common format suffixes in the given filename. * * @param filename name of a file * @return {@code true} if the filename has a common format suffix, * {@code false} otherwise */ public boolean isCompressedFilename(String filename) { final String lower = filename.toLowerCase(Locale.ENGLISH); final int n = lower.length(); for (int i = shortestCompressedSuffix; i <= longestCompressedSuffix && i < n; i++) { if (uncompressSuffix.containsKey(lower.substring(n - i))) { return true; } } return false; } /** * Maps the given name of a compressed file to the name that the * file should have after uncompression. Commonly used file type specific * suffixes like ".tgz" or ".svgz" are automatically detected and * correctly mapped. For example the name "package.tgz" is mapped to * "package.tar". And any filenames with the generic ".gz" suffix * (or any other generic gzip suffix) are mapped to a name without that * suffix. If no format suffix is detected, then the filename is returned * unmapped. 
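 * * <p>A construction sketch (illustrative, not part of the original Javadoc):</p>
 * <pre>
 * Map&lt;String, String&gt; m = new LinkedHashMap&lt;String, String&gt;();
 * m.put(".tgz", ".tar");
 * m.put(".gz", "");
 * FileNameUtil util = new FileNameUtil(m, ".gz");
 * util.getUncompressedFilename("package.tgz"); // yields "package.tar"
 * </pre>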
* * @param filename name of a file * @return name of the corresponding uncompressed file */ public String getUncompressedFilename(String filename) { final String lower = filename.toLowerCase(Locale.ENGLISH); final int n = lower.length(); for (int i = shortestCompressedSuffix; i <= longestCompressedSuffix && i < n; i++) { String suffix = uncompressSuffix.get(lower.substring(n - i)); if (suffix != null) { return filename.substring(0, n - i) + suffix; } } return filename; } /** * Maps the given filename to the name that the file should have after * compression. Common file types with custom suffixes for * compressed versions are automatically detected and correctly mapped. * For example the name "package.tar" is mapped to "package.tgz". If no * custom mapping is applicable, then the default ".gz" suffix is appended * to the filename. * * @param filename name of a file * @return name of the corresponding compressed file */ public String getCompressedFilename(String filename) { final String lower = filename.toLowerCase(Locale.ENGLISH); final int n = lower.length(); for (int i = shortestUncompressedSuffix; i <= longestUncompressedSuffix && i < n; i++) { String suffix = compressSuffix.get(lower.substring(n - i)); if (suffix != null) { return filename.substring(0, n - i) + suffix; } } // No custom suffix found, just append the default return filename + defaultExtension; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.snappy; import java.io.IOException; import java.io.InputStream; import java.io.PushbackInputStream; import java.util.Arrays; import org.apache.commons.compress.compressors.CompressorInputStream; import org.apache.commons.compress.utils.BoundedInputStream; import org.apache.commons.compress.utils.IOUtils; /** * CompressorInputStream for the framing Snappy format. * *

Based on the "spec" in the version "Last revised: 2013-10-25"

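 * <p>A usage sketch for decompressing a framed Snappy stream (the file
 * name is hypothetical):</p>
 *
 * <pre>{@code
 * InputStream raw = new FileInputStream("data.sz");
 * FramedSnappyCompressorInputStream in =
 *     new FramedSnappyCompressorInputStream(raw);
 * byte[] buffer = new byte[8192];
 * int n;
 * while ((n = in.read(buffer)) != -1) {
 *     // consume buffer[0..n)
 * }
 * in.close();
 * }</pre>
 *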
* * @see Snappy framing format description * @since 1.7 */ public class FramedSnappyCompressorInputStream extends CompressorInputStream { /** * package private for tests only. */ static final long MASK_OFFSET = 0xa282ead8L; private static final int STREAM_IDENTIFIER_TYPE = 0xff; private static final int COMPRESSED_CHUNK_TYPE = 0; private static final int UNCOMPRESSED_CHUNK_TYPE = 1; private static final int PADDING_CHUNK_TYPE = 0xfe; private static final int MIN_UNSKIPPABLE_TYPE = 2; private static final int MAX_UNSKIPPABLE_TYPE = 0x7f; private static final int MAX_SKIPPABLE_TYPE = 0xfd; private static final byte[] SZ_SIGNATURE = new byte[] { (byte) STREAM_IDENTIFIER_TYPE, // tag 6, 0, 0, // length 's', 'N', 'a', 'P', 'p', 'Y' }; /** The underlying stream to read compressed data from */ private final PushbackInputStream in; private SnappyCompressorInputStream currentCompressedChunk; // used in no-arg read method private final byte[] oneByte = new byte[1]; private boolean endReached, inUncompressedChunk; private int uncompressedBytesRemaining; private long expectedChecksum = -1; private final PureJavaCrc32C checksum = new PureJavaCrc32C(); /** * Constructs a new input stream that decompresses snappy-framed-compressed data * from the specified input stream. * @param in the InputStream from which to read the compressed data */ public FramedSnappyCompressorInputStream(InputStream in) throws IOException { this.in = new PushbackInputStream(in, 1); readStreamIdentifier(); } /** {@inheritDoc} */ @Override public int read() throws IOException { return read(oneByte, 0, 1) == -1 ? -1 : oneByte[0] & 0xFF; } /** {@inheritDoc} */ @Override public void close() throws IOException { if (currentCompressedChunk != null) { currentCompressedChunk.close(); currentCompressedChunk = null; } in.close(); } /** {@inheritDoc} */ @Override public int read(byte[] b, int off, int len) throws IOException { int read = readOnce(b, off, len); if (read == -1) { readNextBlock(); if (endReached) { return -1; } read = readOnce(b, off, len); } return read; } /** {@inheritDoc} */ @Override public int available() throws IOException { if (inUncompressedChunk) { return Math.min(uncompressedBytesRemaining, in.available()); } else if (currentCompressedChunk != null) { return currentCompressedChunk.available(); } return 0; } /** * Read from the current chunk into the given array. * * @return -1 if there is no current chunk or the number of bytes * read from the current chunk (which may be -1 if the end of the * chunk is reached). 
*/ private int readOnce(byte[] b, int off, int len) throws IOException { int read = -1; if (inUncompressedChunk) { int amount = Math.min(uncompressedBytesRemaining, len); if (amount == 0) { return -1; } read = in.read(b, off, amount); if (read != -1) { uncompressedBytesRemaining -= read; count(read); } } else if (currentCompressedChunk != null) { long before = currentCompressedChunk.getBytesRead(); read = currentCompressedChunk.read(b, off, len); if (read == -1) { currentCompressedChunk.close(); currentCompressedChunk = null; } else { count(currentCompressedChunk.getBytesRead() - before); } } if (read > 0) { checksum.update(b, off, read); } return read; } private void readNextBlock() throws IOException { verifyLastChecksumAndReset(); inUncompressedChunk = false; int type = readOneByte(); if (type == -1) { endReached = true; } else if (type == STREAM_IDENTIFIER_TYPE) { in.unread(type); pushedBackBytes(1); readStreamIdentifier(); readNextBlock(); } else if (type == PADDING_CHUNK_TYPE || (type > MAX_UNSKIPPABLE_TYPE && type <= MAX_SKIPPABLE_TYPE)) { skipBlock(); readNextBlock(); } else if (type >= MIN_UNSKIPPABLE_TYPE && type <= MAX_UNSKIPPABLE_TYPE) { throw new IOException("unskippable chunk with type " + type + " (hex " + Integer.toHexString(type) + ")" + " detected."); } else if (type == UNCOMPRESSED_CHUNK_TYPE) { inUncompressedChunk = true; uncompressedBytesRemaining = readSize() - 4 /* CRC */; expectedChecksum = unmask(readCrc()); } else if (type == COMPRESSED_CHUNK_TYPE) { long size = readSize() - 4 /* CRC */; expectedChecksum = unmask(readCrc()); currentCompressedChunk = new SnappyCompressorInputStream(new BoundedInputStream(in, size)); // constructor reads uncompressed size count(currentCompressedChunk.getBytesRead()); } else { // impossible as all potential byte values have been covered throw new IOException("unknown chunk type " + type + " detected."); } } private long readCrc() throws IOException { byte[] b = new byte[4]; int read = IOUtils.readFully(in, b); count(read); if (read != 4) { throw new IOException("premature end of stream"); } long crc = 0; for (int i = 0; i < 4; i++) { crc |= (b[i] & 0xFFL) << (8 * i); } return crc; } static long unmask(long x) { // ugly, maybe we should just have used ints and deal with the // overflow x -= MASK_OFFSET; x &= 0xffffFFFFL; return ((x >> 17) | (x << 15)) & 0xffffFFFFL; } private int readSize() throws IOException { int b = 0; int sz = 0; for (int i = 0; i < 3; i++) { b = readOneByte(); if (b == -1) { throw new IOException("premature end of stream"); } sz |= (b << (i * 8)); } return sz; } private void skipBlock() throws IOException { int size = readSize(); long read = IOUtils.skip(in, size); count(read); if (read != size) { throw new IOException("premature end of stream"); } } private void readStreamIdentifier() throws IOException { byte[] b = new byte[10]; int read = IOUtils.readFully(in, b); count(read); if (10 != read || !matches(b, 10)) { throw new IOException("Not a framed Snappy stream"); } } private int readOneByte() throws IOException { int b = in.read(); if (b != -1) { count(1); return b & 0xFF; } return -1; } private void verifyLastChecksumAndReset() throws IOException { if (expectedChecksum >= 0 && expectedChecksum != checksum.getValue()) { throw new IOException("Checksum verification failed"); } expectedChecksum = -1; checksum.reset(); } /** * Checks if the signature matches what is expected for a .sz file. * *

     * <p>.sz files start with a chunk with tag 0xff and content sNaPpY.</p>

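     *
     * <p>For example, probing the first bytes read from a stream (a sketch;
     * {@code rawInput} is a hypothetical source, and callers that want to go
     * on reading must push the probed bytes back):</p>
     * <pre>{@code
     * byte[] probe = new byte[10];
     * int n = rawInput.read(probe);
     * boolean isSz = FramedSnappyCompressorInputStream.matches(probe, n);
     * }</pre>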
* * @param signature the bytes to check * @param length the number of bytes to check * @return true if this is a .sz stream, false otherwise */ public static boolean matches(byte[] signature, int length) { if (length < SZ_SIGNATURE.length) { return false; } byte[] shortenedSig = signature; if (signature.length > SZ_SIGNATURE.length) { shortenedSig = new byte[SZ_SIGNATURE.length]; System.arraycopy(signature, 0, shortenedSig, 0, SZ_SIGNATURE.length); } return Arrays.equals(shortenedSig, SZ_SIGNATURE); } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.gzip; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.EOFException; import java.io.InputStream; import java.io.DataInputStream; import java.io.BufferedInputStream; import java.util.zip.DataFormatException; import java.util.zip.Deflater; import java.util.zip.Inflater; import java.util.zip.CRC32; import org.apache.commons.compress.compressors.CompressorInputStream; import org.apache.commons.compress.utils.CharsetNames; /** * Input stream that decompresses .gz files. * This supports decompressing concatenated .gz files which is important * when decompressing standalone .gz files. *

 * {@link java.util.zip.GZIPInputStream} doesn't decompress concatenated .gz
 * files: it stops after the first member and silently ignores the rest.
 * It doesn't leave the read position pointing at the beginning of the next
 * member, which makes it difficult to work around the lack of concatenation
 * support.
 *

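 * <p>This class, in contrast, can read all members when
 * decompressConcatenated is set. A sketch (the file name is
 * hypothetical):</p>
 *
 * <pre>{@code
 * InputStream raw = new FileInputStream("logs.gz");
 * GzipCompressorInputStream in =
 *     new GzipCompressorInputStream(raw, true); // true: read all members
 * byte[] buffer = new byte[8192];
 * int n;
 * while ((n = in.read(buffer)) != -1) {
 *     // consume buffer[0..n)
 * }
 * in.close();
 * }</pre>
 *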
* Instead of using GZIPInputStream, this class has its own .gz * container format decoder. The actual decompression is done with * {@link java.util.zip.Inflater}. */ public class GzipCompressorInputStream extends CompressorInputStream { // Header flags // private static final int FTEXT = 0x01; // Uninteresting for us private static final int FHCRC = 0x02; private static final int FEXTRA = 0x04; private static final int FNAME = 0x08; private static final int FCOMMENT = 0x10; private static final int FRESERVED = 0xE0; // Compressed input stream, possibly wrapped in a BufferedInputStream private final InputStream in; // True if decompressing multimember streams. private final boolean decompressConcatenated; // Buffer to hold the input data private final byte[] buf = new byte[8192]; // Amount of data in buf. private int bufUsed = 0; // Decompressor private Inflater inf = new Inflater(true); // CRC32 from uncompressed data private final CRC32 crc = new CRC32(); // True once everything has been decompressed private boolean endReached = false; // used in no-arg read method private final byte[] oneByte = new byte[1]; private final GzipParameters parameters = new GzipParameters(); /** * Constructs a new input stream that decompresses gzip-compressed data * from the specified input stream. *

     * This is equivalent to
     * {@code GzipCompressorInputStream(inputStream, false)} and thus
     * will not decompress concatenated .gz files.
     *
     * @param inputStream the InputStream from which this object should
     * be created
     *
     * @throws IOException if the stream could not be created
     */
    public GzipCompressorInputStream(InputStream inputStream)
            throws IOException {
        this(inputStream, false);
    }

    /**
     * Constructs a new input stream that decompresses gzip-compressed data
     * from the specified input stream.
     *

     * If decompressConcatenated is {@code false}:
     * This decompressor might read more input than it will actually use.
     * If inputStream supports mark and
     * reset, then the input position will be adjusted
     * so that it is right after the last byte of the compressed stream.
     * If mark isn't supported, the input position will be
     * undefined.
     *
     * @param inputStream the InputStream from which this object should
     * be created
     * @param decompressConcatenated
     *                     if true, decompress until the end of the input;
     *                     if false, stop after the first .gz member
     *
     * @throws IOException if the stream could not be created
     */
    public GzipCompressorInputStream(InputStream inputStream,
                                     boolean decompressConcatenated)
            throws IOException {
        // Mark support is strictly needed for concatenated files only,
        // but it's simpler if it is always available.
        if (inputStream.markSupported()) {
            in = inputStream;
        } else {
            in = new BufferedInputStream(inputStream);
        }

        this.decompressConcatenated = decompressConcatenated;
        init(true);
    }

    /**
     * Provides the stream's meta data - may change with each stream
     * when decompressing concatenated streams.
     * @return the stream's meta data
     * @since 1.8
     */
    public GzipParameters getMetaData() {
        return parameters;
    }

    private boolean init(boolean isFirstMember) throws IOException {
        assert isFirstMember || decompressConcatenated;

        // Check the magic bytes without a possibility of EOFException.
        int magic0 = in.read();
        int magic1 = in.read();

        // If end of input was reached after decompressing at least
        // one .gz member, we have reached the end of the file successfully.
        if (magic0 == -1 && !isFirstMember) {
            return false;
        }

        if (magic0 != 31 || magic1 != 139) {
            throw new IOException(isFirstMember
                                  ? "Input is not in the .gz format"
                                  : "Garbage after a valid .gz stream");
        }

        // Parsing the rest of the header may throw EOFException.
        DataInputStream inData = new DataInputStream(in);
        int method = inData.readUnsignedByte();
        if (method != Deflater.DEFLATED) {
            throw new IOException("Unsupported compression method "
                                  + method + " in the .gz header");
        }

        int flg = inData.readUnsignedByte();
        if ((flg & FRESERVED) != 0) {
            throw new IOException(
                    "Reserved flags are set in the .gz header");
        }

        parameters.setModificationTime(readLittleEndianInt(inData) * 1000);
        switch (inData.readUnsignedByte()) { // extra flags
        case 2:
            parameters.setCompressionLevel(Deflater.BEST_COMPRESSION);
            break;
        case 4:
            parameters.setCompressionLevel(Deflater.BEST_SPEED);
            break;
        default:
            // ignored for now
            break;
        }
        parameters.setOperatingSystem(inData.readUnsignedByte());

        // Extra field, ignored
        if ((flg & FEXTRA) != 0) {
            int xlen = inData.readUnsignedByte();
            xlen |= inData.readUnsignedByte() << 8;

            // This isn't as efficient as calling in.skip would be,
            // but it's lazier to handle unexpected end of input this way.
            // Most files don't have an extra field anyway.
            while (xlen-- > 0) {
                inData.readUnsignedByte();
            }
        }

        // Original file name
        if ((flg & FNAME) != 0) {
            parameters.setFilename(new String(readToNull(inData),
                                              CharsetNames.ISO_8859_1));
        }

        // Comment
        if ((flg & FCOMMENT) != 0) {
            parameters.setComment(new String(readToNull(inData),
                                             CharsetNames.ISO_8859_1));
        }

        // Header "CRC16" which is actually a truncated CRC32 (which isn't
        // as good as real CRC16). I don't know if any encoder implementation
        // sets this, so it's not worth trying to verify it. GNU gzip 1.4
        // doesn't support this field, but zlib seems to be able to at least
        // skip over it.
        if ((flg & FHCRC) != 0) {
            inData.readShort();
        }

        // Reset
        inf.reset();
        crc.reset();

        return true;
    }

    private byte[] readToNull(DataInputStream inData) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        int b = 0;
        while ((b = inData.readUnsignedByte()) != 0x00) { // NOPMD
            bos.write(b);
        }
        return bos.toByteArray();
    }

    private long readLittleEndianInt(DataInputStream inData) throws IOException {
        return inData.readUnsignedByte()
            | (inData.readUnsignedByte() << 8)
            | (inData.readUnsignedByte() << 16)
            | (((long) inData.readUnsignedByte()) << 24);
    }

    @Override
    public int read() throws IOException {
        return read(oneByte, 0, 1) == -1 ? -1 : oneByte[0] & 0xFF;
    }

    /**
     * {@inheritDoc}
     *
     * @since 1.1
     */
    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (endReached) {
            return -1;
        }

        int size = 0;

        while (len > 0) {
            if (inf.needsInput()) {
                // Remember the current position because we may need to
                // rewind after reading too much input.
                in.mark(buf.length);

                bufUsed = in.read(buf);
                if (bufUsed == -1) {
                    throw new EOFException();
                }

                inf.setInput(buf, 0, bufUsed);
            }

            int ret;
            try {
                ret = inf.inflate(b, off, len);
            } catch (DataFormatException e) {
                throw new IOException("Gzip-compressed data is corrupt");
            }

            crc.update(b, off, ret);
            off += ret;
            len -= ret;
            size += ret;
            count(ret);

            if (inf.finished()) {
                // We may have read too many bytes. Rewind the read
                // position to match the actual amount used.
                //
                // NOTE: The "if" is there just in case. Since we used
                // in.mark earlier, it should always skip enough.
                in.reset();

                int skipAmount = bufUsed - inf.getRemaining();
                if (in.skip(skipAmount) != skipAmount) {
                    throw new IOException();
                }

                bufUsed = 0;

                DataInputStream inData = new DataInputStream(in);

                // CRC32
                long crcStored = readLittleEndianInt(inData);
                if (crcStored != crc.getValue()) {
                    throw new IOException("Gzip-compressed data is corrupt "
                                          + "(CRC32 error)");
                }

                // Uncompressed size modulo 2^32 (ISIZE in the spec)
                long isize = readLittleEndianInt(inData);
                if (isize != (inf.getBytesWritten() & 0xffffffffL)) {
                    throw new IOException("Gzip-compressed data is corrupt "
                                          + "(uncompressed size mismatch)");
                }

                // See if this is the end of the file.
                if (!decompressConcatenated || !init(false)) {
                    inf.end();
                    inf = null;
                    endReached = true;
                    return size == 0 ? -1 : size;
                }
            }
        }

        return size;
    }

    /**
     * Checks if the signature matches what is expected for a .gz file.
     *
     * @param signature the bytes to check
     * @param length the number of bytes to check
     * @return true if this is a .gz stream, false otherwise
     *
     * @since 1.1
     */
    public static boolean matches(byte[] signature, int length) {
        if (length < 2) {
            return false;
        }
        if (signature[0] != 31) {
            return false;
        }
        if (signature[1] != -117) {
            return false;
        }
        return true;
    }

    /**
     * Closes the input stream (unless it is System.in).
     *
     * @since 1.2
     */
    @Override
    public void close() throws IOException {
        if (inf != null) {
            inf.end();
            inf = null;
        }

        if (this.in != System.in) {
            this.in.close();
        }
    }
}
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.gzip; import java.io.IOException; import java.io.OutputStream; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.util.zip.CRC32; import java.util.zip.Deflater; import java.util.zip.GZIPInputStream; import java.util.zip.GZIPOutputStream; import org.apache.commons.compress.compressors.CompressorOutputStream; import org.apache.commons.compress.utils.CharsetNames; /** * Compressed output stream using the gzip format. This implementation improves * over the standard {@link GZIPOutputStream} class by allowing * the configuration of the compression level and the header metadata (filename, * comment, modification time, operating system and extra flags). * * @see GZIP File Format Specification */ public class GzipCompressorOutputStream extends CompressorOutputStream { /** Header flag indicating a file name follows the header */ private static final int FNAME = 1 << 3; /** Header flag indicating a comment follows the header */ private static final int FCOMMENT = 1 << 4; /** The underlying stream */ private final OutputStream out; /** Deflater used to compress the data */ private final Deflater deflater; /** The buffer receiving the compressed data from the deflater */ private final byte[] deflateBuffer = new byte[512]; /** Indicates if the stream has been closed */ private boolean closed; /** The checksum of the uncompressed data */ private final CRC32 crc = new CRC32(); /** * Creates a gzip compressed output stream with the default parameters. */ public GzipCompressorOutputStream(OutputStream out) throws IOException { this(out, new GzipParameters()); } /** * Creates a gzip compressed output stream with the specified parameters. * * @since 1.7 */ public GzipCompressorOutputStream(OutputStream out, GzipParameters parameters) throws IOException { this.out = out; this.deflater = new Deflater(parameters.getCompressionLevel(), true); writeHeader(parameters); } private void writeHeader(GzipParameters parameters) throws IOException { String filename = parameters.getFilename(); String comment = parameters.getComment(); ByteBuffer buffer = ByteBuffer.allocate(10); buffer.order(ByteOrder.LITTLE_ENDIAN); buffer.putShort((short) GZIPInputStream.GZIP_MAGIC); buffer.put((byte) Deflater.DEFLATED); // compression method (8: deflate) buffer.put((byte) ((filename != null ? FNAME : 0) | (comment != null ? 
FCOMMENT : 0))); // flags buffer.putInt((int) (parameters.getModificationTime() / 1000)); // extra flags int compressionLevel = parameters.getCompressionLevel(); if (compressionLevel == Deflater.BEST_COMPRESSION) { buffer.put((byte) 2); } else if (compressionLevel == Deflater.BEST_SPEED) { buffer.put((byte) 4); } else { buffer.put((byte) 0); } buffer.put((byte) parameters.getOperatingSystem()); out.write(buffer.array()); if (filename != null) { out.write(filename.getBytes(CharsetNames.ISO_8859_1)); out.write(0); } if (comment != null) { out.write(comment.getBytes(CharsetNames.ISO_8859_1)); out.write(0); } } private void writeTrailer() throws IOException { ByteBuffer buffer = ByteBuffer.allocate(8); buffer.order(ByteOrder.LITTLE_ENDIAN); buffer.putInt((int) crc.getValue()); buffer.putInt(deflater.getTotalIn()); out.write(buffer.array()); } @Override public void write(int b) throws IOException { write(new byte[]{(byte) (b & 0xff)}, 0, 1); } /** * {@inheritDoc} * * @since 1.1 */ @Override public void write(byte[] buffer) throws IOException { write(buffer, 0, buffer.length); } /** * {@inheritDoc} * * @since 1.1 */ @Override public void write(byte[] buffer, int offset, int length) throws IOException { if (deflater.finished()) { throw new IOException("Cannot write more data, the end of the compressed data stream has been reached"); } else if (length > 0) { deflater.setInput(buffer, offset, length); while (!deflater.needsInput()) { deflate(); } crc.update(buffer, offset, length); } } private void deflate() throws IOException { int length = deflater.deflate(deflateBuffer, 0, deflateBuffer.length); if (length > 0) { out.write(deflateBuffer, 0, length); } } /** * Finishes writing compressed data to the underlying stream without closing it. * * @since 1.7 */ public void finish() throws IOException { if (!deflater.finished()) { deflater.finish(); while (!deflater.finished()) { deflate(); } writeTrailer(); } } /** * {@inheritDoc} * * @since 1.7 */ @Override public void flush() throws IOException { out.flush(); } @Override public void close() throws IOException { if (!closed) { finish(); deflater.end(); out.close(); closed = true; } } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.gzip; import java.util.zip.Deflater; /** * Parameters for the GZIP compressor. * * @since 1.7 */ public class GzipParameters { private int compressionLevel = Deflater.DEFAULT_COMPRESSION; private long modificationTime; private String filename; private String comment; private int operatingSystem = 255; // Unknown OS by default public int getCompressionLevel() { return compressionLevel; } /** * Sets the compression level. 
* * @param compressionLevel the compression level (between 0 and 9) * @see Deflater#NO_COMPRESSION * @see Deflater#BEST_SPEED * @see Deflater#DEFAULT_COMPRESSION * @see Deflater#BEST_COMPRESSION */ public void setCompressionLevel(int compressionLevel) { if (compressionLevel < -1 || compressionLevel > 9) { throw new IllegalArgumentException("Invalid gzip compression level: " + compressionLevel); } this.compressionLevel = compressionLevel; } public long getModificationTime() { return modificationTime; } /** * Sets the modification time of the compressed file. * * @param modificationTime the modification time, in milliseconds */ public void setModificationTime(long modificationTime) { this.modificationTime = modificationTime; } public String getFilename() { return filename; } /** * Sets the name of the compressed file. * * @param filename the name of the file without the directory path */ public void setFilename(String filename) { this.filename = filename; } public String getComment() { return comment; } public void setComment(String comment) { this.comment = comment; } public int getOperatingSystem() { return operatingSystem; } /** * Sets the operating system on which the compression took place. * The defined values are: *

     * <ul>
     *   <li>0: FAT filesystem (MS-DOS, OS/2, NT/Win32)</li>
     *   <li>1: Amiga</li>
     *   <li>2: VMS (or OpenVMS)</li>
     *   <li>3: Unix</li>
     *   <li>4: VM/CMS</li>
     *   <li>5: Atari TOS</li>
     *   <li>6: HPFS filesystem (OS/2, NT)</li>
     *   <li>7: Macintosh</li>
     *   <li>8: Z-System</li>
     *   <li>9: CP/M</li>
     *   <li>10: TOPS-20</li>
     *   <li>11: NTFS filesystem (NT)</li>
     *   <li>12: QDOS</li>
     *   <li>13: Acorn RISCOS</li>
     *   <li>255: Unknown</li>
     * </ul>
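     *
     * <p>For example, per the list above (a sketch):</p>
     * <pre>{@code
     * GzipParameters parameters = new GzipParameters();
     * parameters.setOperatingSystem(3); // the input was compressed on Unix
     * }</pre>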
     *
     * @param operatingSystem the code of the operating system
     */
    public void setOperatingSystem(int operatingSystem) {
        this.operatingSystem = operatingSystem;
    }
}
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.apache.commons.compress.compressors.gzip;

import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.commons.compress.compressors.FileNameUtil;

/**
 * Utility code for the gzip compression format.
 * @ThreadSafe
 */
public class GzipUtils {

    private static final FileNameUtil fileNameUtil;

    static {
        // using LinkedHashMap so .tgz is preferred over .taz as
        // compressed extension of .tar as FileNameUtil will use the
        // first one found
        Map<String, String> uncompressSuffix =
            new LinkedHashMap<String, String>();
        uncompressSuffix.put(".tgz", ".tar");
        uncompressSuffix.put(".taz", ".tar");
        uncompressSuffix.put(".svgz", ".svg");
        uncompressSuffix.put(".cpgz", ".cpio");
        uncompressSuffix.put(".wmz", ".wmf");
        uncompressSuffix.put(".emz", ".emf");
        uncompressSuffix.put(".gz", "");
        uncompressSuffix.put(".z", "");
        uncompressSuffix.put("-gz", "");
        uncompressSuffix.put("-z", "");
        uncompressSuffix.put("_z", "");
        fileNameUtil = new FileNameUtil(uncompressSuffix, ".gz");
    }

    /** Private constructor to prevent instantiation of this utility class. */
    private GzipUtils() {
    }

    /**
     * Detects common gzip suffixes in the given filename.
     *
     * @param filename name of a file
     * @return {@code true} if the filename has a common gzip suffix,
     * {@code false} otherwise
     */
    public static boolean isCompressedFilename(String filename) {
        return fileNameUtil.isCompressedFilename(filename);
    }

    /**
     * Maps the given name of a gzip-compressed file to the name that the
     * file should have after uncompression. Commonly used file type specific
     * suffixes like ".tgz" or ".svgz" are automatically detected and
     * correctly mapped. For example the name "package.tgz" is mapped to
     * "package.tar". Any filename with the generic ".gz" suffix
     * (or any other generic gzip suffix) is mapped to a name without that
     * suffix. If no gzip suffix is detected, then the filename is returned
     * unmapped.
     *
     * @param filename name of a file
     * @return name of the corresponding uncompressed file
     */
    public static String getUncompressedFilename(String filename) {
        return fileNameUtil.getUncompressedFilename(filename);
    }

    /**
     * Maps the given filename to the name that the file should have after
     * compression with gzip. Common file types with custom suffixes for
     * compressed versions are automatically detected and correctly mapped.
     * For example the name "package.tar" is mapped to "package.tgz". If no
     * custom mapping is applicable, then the default ".gz" suffix is appended
     * to the filename.
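     *
     * <p>For example (hypothetical names):</p>
     * <pre>{@code
     * GzipUtils.getCompressedFilename("package.tar"); // "package.tgz"
     * GzipUtils.getCompressedFilename("notes.txt");   // "notes.txt.gz"
     * }</pre>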
* * @param filename name of a file * @return name of the corresponding compressed file */ public static String getCompressedFilename(String filename) { return fileNameUtil.getCompressedFilename(filename); } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.pack200; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.InputStream; /** * StreamSwitcher that caches all data written to the output side in * memory. * @since 1.3 */ class InMemoryCachingStreamBridge extends StreamBridge { InMemoryCachingStreamBridge() { super(new ByteArrayOutputStream()); } @Override InputStream getInputView() throws IOException { return new ByteArrayInputStream(((ByteArrayOutputStream) out) .toByteArray()); } }/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.lzma; import java.io.IOException; import java.io.InputStream; import org.tukaani.xz.LZMAInputStream; import org.apache.commons.compress.compressors.CompressorInputStream; /** * LZMA decompressor. * @since 1.6 */ public class LZMACompressorInputStream extends CompressorInputStream { private final InputStream in; /** * Creates a new input stream that decompresses LZMA-compressed data * from the specified input stream. * * @param inputStream where to read the compressed data * * @throws IOException if the input is not in the .lzma format, * the input is corrupt or truncated, the .lzma * headers specify sizes that are not supported * by this implementation, or the underlying * inputStream throws an exception */ public LZMACompressorInputStream(InputStream inputStream) throws IOException { in = new LZMAInputStream(inputStream); } /** {@inheritDoc} */ @Override public int read() throws IOException { int ret = in.read(); count(ret == -1 ? 
0 : 1); return ret; } /** {@inheritDoc} */ @Override public int read(byte[] buf, int off, int len) throws IOException { int ret = in.read(buf, off, len); count(ret); return ret; } /** {@inheritDoc} */ @Override public long skip(long n) throws IOException { return in.skip(n); } /** {@inheritDoc} */ @Override public int available() throws IOException { return in.available(); } /** {@inheritDoc} */ @Override public void close() throws IOException { in.close(); } /** * Checks if the signature matches what is expected for an lzma file. * * @param signature * the bytes to check * @param length * the number of bytes to check * @return true, if this stream is an lzma compressed stream, false otherwise * * @since 1.10 */ public static boolean matches(byte[] signature, int length) { if (signature == null || length < 3) { return false; } if (signature[0] != 0x5d) { return false; } if (signature[1] != 0) { return false; } if (signature[2] != 0) { return false; } return true; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.lzma; import java.util.HashMap; import java.util.Map; import org.apache.commons.compress.compressors.FileNameUtil; /** * Utility code for the lzma compression format. * @ThreadSafe * @since 1.10 */ public class LZMAUtils { private static final FileNameUtil fileNameUtil; /** * LZMA Header Magic Bytes begin a LZMA file. */ private static final byte[] HEADER_MAGIC = { (byte) 0x5D, 0, 0 }; static enum CachedAvailability { DONT_CACHE, CACHED_AVAILABLE, CACHED_UNAVAILABLE } private static volatile CachedAvailability cachedLZMAAvailability; static { Map uncompressSuffix = new HashMap(); uncompressSuffix.put(".lzma", ""); uncompressSuffix.put("-lzma", ""); fileNameUtil = new FileNameUtil(uncompressSuffix, ".lzma"); cachedLZMAAvailability = CachedAvailability.DONT_CACHE; try { Class.forName("org.osgi.framework.BundleEvent"); } catch (Exception ex) { setCacheLZMAAvailablity(true); } } /** Private constructor to prevent instantiation of this utility class. */ private LZMAUtils() { } /** * Checks if the signature matches what is expected for a .lzma file. * * @param signature the bytes to check * @param length the number of bytes to check * @return true if signature matches the .lzma magic bytes, false otherwise */ public static boolean matches(byte[] signature, int length) { if (length < HEADER_MAGIC.length) { return false; } for (int i = 0; i < HEADER_MAGIC.length; ++i) { if (signature[i] != HEADER_MAGIC[i]) { return false; } } return true; } /** * Are the classes required to support LZMA compression available? 
     */
    public static boolean isLZMACompressionAvailable() {
        final CachedAvailability cachedResult = cachedLZMAAvailability;
        if (cachedResult != CachedAvailability.DONT_CACHE) {
            return cachedResult == CachedAvailability.CACHED_AVAILABLE;
        }
        return internalIsLZMACompressionAvailable();
    }

    private static boolean internalIsLZMACompressionAvailable() {
        try {
            LZMACompressorInputStream.matches(null, 0);
            return true;
        } catch (NoClassDefFoundError error) {
            return false;
        }
    }

    /**
     * Detects common lzma suffixes in the given filename.
     *
     * @param filename name of a file
     * @return {@code true} if the filename has a common lzma suffix,
     * {@code false} otherwise
     */
    public static boolean isCompressedFilename(String filename) {
        return fileNameUtil.isCompressedFilename(filename);
    }

    /**
     * Maps the given name of an lzma-compressed file to the name that
     * the file should have after uncompression. Any filename with
     * the generic ".lzma" suffix (or any other generic lzma suffix)
     * is mapped to a name without that suffix. If no lzma suffix is
     * detected, then the filename is returned unmapped.
     *
     * @param filename name of a file
     * @return name of the corresponding uncompressed file
     */
    public static String getUncompressedFilename(String filename) {
        return fileNameUtil.getUncompressedFilename(filename);
    }

    /**
     * Maps the given filename to the name that the file should have after
     * compression with lzma.
     *
     * @param filename name of a file
     * @return name of the corresponding compressed file
     */
    public static String getCompressedFilename(String filename) {
        return fileNameUtil.getCompressedFilename(filename);
    }

    /**
     * Whether to cache the result of the LZMA check.
     *

     * <p>This defaults to {@code false} in an OSGi environment and {@code true} otherwise.</p>

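     * <p>A typical guard built on this check (a sketch; {@code rawInput}
     * is a hypothetical source stream):</p>
     * <pre>{@code
     * LZMAUtils.setCacheLZMAAvailablity(true);
     * if (LZMAUtils.isLZMACompressionAvailable()) {
     *     InputStream in = new LZMACompressorInputStream(rawInput);
     *     // read the decompressed data from in
     * }
     * }</pre>
     *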
* @param doCache whether to cache the result */ public static void setCacheLZMAAvailablity(boolean doCache) { if (!doCache) { cachedLZMAAvailability = CachedAvailability.DONT_CACHE; } else if (cachedLZMAAvailability == CachedAvailability.DONT_CACHE) { final boolean hasLzma = internalIsLZMACompressionAvailable(); cachedLZMAAvailability = hasLzma ? CachedAvailability.CACHED_AVAILABLE : CachedAvailability.CACHED_UNAVAILABLE; } } // only exists to support unit tests static CachedAvailability getCachedLZMAAvailability() { return cachedLZMAAvailability; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.lzw; import java.io.IOException; import java.io.InputStream; import java.nio.ByteOrder; import org.apache.commons.compress.compressors.CompressorInputStream; import org.apache.commons.compress.utils.BitInputStream; /** *

 * <p>Generic LZW implementation. It is used internally for
 * the Z decompressor and the Unshrinking Zip file compression method,
 * but may be useful for third-party projects in implementing their own
 * LZW variations.</p>

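 * <p>A skeletal subclass sketch, showing only how the protected hooks fit
 * together; a real implementation (such as the Z decompressor) must also
 * handle clear codes, code-size growth and the "code not yet in the table"
 * corner case. All names below are illustrative:</p>
 *
 * <pre>{@code
 * class SimpleLZWInputStream extends LZWInputStream {
 *     private static final int MAX_CODE_SIZE = 16;
 *
 *     SimpleLZWInputStream(InputStream in) {
 *         super(in, ByteOrder.LITTLE_ENDIAN);
 *         initializeTables(MAX_CODE_SIZE);
 *         setTableSize(1 << 8); // codes 0-255 represent the single bytes
 *     }
 *
 *     @Override
 *     protected int addEntry(int previousCode, byte character)
 *         throws IOException {
 *         return addEntry(previousCode, character, 1 << MAX_CODE_SIZE);
 *     }
 *
 *     @Override
 *     protected int decompressNextSymbol() throws IOException {
 *         int code = readNextCode();
 *         return code < 0 ? -1 : expandCodeToOutputStack(code, false);
 *     }
 * }
 * }</pre>
 *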
* * @NotThreadSafe * @since 1.10 */ public abstract class LZWInputStream extends CompressorInputStream { protected static final int DEFAULT_CODE_SIZE = 9; protected static final int UNUSED_PREFIX = -1; private final byte[] oneByte = new byte[1]; protected final BitInputStream in; private int clearCode = -1; private int codeSize = DEFAULT_CODE_SIZE; private byte previousCodeFirstChar; private int previousCode = UNUSED_PREFIX; private int tableSize; private int[] prefixes; private byte[] characters; private byte[] outputStack; private int outputStackLocation; protected LZWInputStream(final InputStream inputStream, final ByteOrder byteOrder) { this.in = new BitInputStream(inputStream, byteOrder); } @Override public void close() throws IOException { in.close(); } @Override public int read() throws IOException { int ret = read(oneByte); if (ret < 0) { return ret; } return 0xff & oneByte[0]; } @Override public int read(byte[] b, int off, int len) throws IOException { int bytesRead = readFromStack(b, off, len); while (len - bytesRead > 0) { int result = decompressNextSymbol(); if (result < 0) { if (bytesRead > 0) { count(bytesRead); return bytesRead; } return result; } bytesRead += readFromStack(b, off + bytesRead, len - bytesRead); } count(bytesRead); return bytesRead; } /** * Read the next code and expand it. */ protected abstract int decompressNextSymbol() throws IOException; /** * Add a new entry to the dictionary. */ protected abstract int addEntry(int previousCode, byte character) throws IOException; /** * Sets the clear code based on the code size. */ protected void setClearCode(int codeSize) { clearCode = (1 << (codeSize - 1)); } /** * Initializes the arrays based on the maximum code size. */ protected void initializeTables(int maxCodeSize) { final int maxTableSize = 1 << maxCodeSize; prefixes = new int[maxTableSize]; characters = new byte[maxTableSize]; outputStack = new byte[maxTableSize]; outputStackLocation = maxTableSize; final int max = 1 << 8; for (int i = 0; i < max; i++) { prefixes[i] = -1; characters[i] = (byte) i; } } /** * Reads the next code from the stream. */ protected int readNextCode() throws IOException { if (codeSize > 31) { throw new IllegalArgumentException("code size must not be bigger than 31"); } return (int) in.readBits(codeSize); } /** * Adds a new entry if the maximum table size hasn't been exceeded * and returns the new index. */ protected int addEntry(int previousCode, byte character, int maxTableSize) { if (tableSize < maxTableSize) { prefixes[tableSize] = previousCode; characters[tableSize] = character; return tableSize++; } return -1; } /** * Add entry for repeat of previousCode we haven't added, yet. 
*/ protected int addRepeatOfPreviousCode() throws IOException { if (previousCode == -1) { // can't have a repeat for the very first code throw new IOException("The first code can't be a reference to its preceding code"); } return addEntry(previousCode, previousCodeFirstChar); } /** * Expands the entry with index code to the output stack and may * create a new entry */ protected int expandCodeToOutputStack(int code, boolean addedUnfinishedEntry) throws IOException { for (int entry = code; entry >= 0; entry = prefixes[entry]) { outputStack[--outputStackLocation] = characters[entry]; } if (previousCode != -1 && !addedUnfinishedEntry) { addEntry(previousCode, outputStack[outputStackLocation]); } previousCode = code; previousCodeFirstChar = outputStack[outputStackLocation]; return outputStackLocation; } private int readFromStack(byte[] b, int off, int len) { int remainingInStack = outputStack.length - outputStackLocation; if (remainingInStack > 0) { int maxLength = Math.min(remainingInStack, len); System.arraycopy(outputStack, outputStackLocation, b, off, maxLength); outputStackLocation += maxLength; return maxLength; } return 0; } protected int getCodeSize() { return codeSize; } protected void resetCodeSize() { setCodeSize(DEFAULT_CODE_SIZE); } protected void setCodeSize(int cs) { this.codeSize = cs; } protected void incrementCodeSize() { codeSize++; } protected void resetPreviousCode() { this.previousCode = -1; } protected int getPrefix(int offset) { return prefixes[offset]; } protected void setPrefix(int offset, int value) { prefixes[offset] = value; } protected int getPrefixesLength() { return prefixes.length; } protected int getClearCode() { return clearCode; } protected int getTableSize() { return tableSize; } protected void setTableSize(int newSize) { tableSize = newSize; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.pack200; import java.io.File; import java.io.FilterInputStream; import java.io.IOException; import java.io.InputStream; import java.util.Map; import java.util.jar.JarOutputStream; import java.util.jar.Pack200; import org.apache.commons.compress.compressors.CompressorInputStream; /** * An input stream that decompresses from the Pack200 format to be read * as any other stream. * *

 * <p>The {@link CompressorInputStream#getCount getCount} and {@link
 * CompressorInputStream#getBytesRead getBytesRead} methods always
 * return 0.</p>

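 * <p>Usage sketch (the file name is hypothetical):</p>
 *
 * <pre>{@code
 * InputStream in = new Pack200CompressorInputStream(new File("app.pack"));
 * // in now yields the bytes of the unpacked JAR,
 * // e.g. ready to be wrapped in a JarInputStream
 * }</pre>
 *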
* * @NotThreadSafe * @since 1.3 */ public class Pack200CompressorInputStream extends CompressorInputStream { private final InputStream originalInput; private final StreamBridge streamBridge; /** * Decompresses the given stream, caching the decompressed data in * memory. * *

     * <p>When reading from a file the File-arg constructor may
     * provide better performance.</p>

* * @param in the InputStream from which this object should be created */ public Pack200CompressorInputStream(final InputStream in) throws IOException { this(in, Pack200Strategy.IN_MEMORY); } /** * Decompresses the given stream using the given strategy to cache * the results. * *

     * <p>When reading from a file the File-arg constructor may
     * provide better performance.</p>

* * @param in the InputStream from which this object should be created * @param mode the strategy to use */ public Pack200CompressorInputStream(final InputStream in, final Pack200Strategy mode) throws IOException { this(in, null, mode, null); } /** * Decompresses the given stream, caching the decompressed data in * memory and using the given properties. * *

     * <p>When reading from a file the File-arg constructor may
     * provide better performance.</p>

* * @param in the InputStream from which this object should be created * @param props Pack200 properties to use */ public Pack200CompressorInputStream(final InputStream in, final Map props) throws IOException { this(in, Pack200Strategy.IN_MEMORY, props); } /** * Decompresses the given stream using the given strategy to cache * the results and the given properties. * *

     * <p>When reading from a file the File-arg constructor may
     * provide better performance.</p>

* * @param in the InputStream from which this object should be created * @param mode the strategy to use * @param props Pack200 properties to use */ public Pack200CompressorInputStream(final InputStream in, final Pack200Strategy mode, final Map props) throws IOException { this(in, null, mode, props); } /** * Decompresses the given file, caching the decompressed data in * memory. * * @param f the file to decompress */ public Pack200CompressorInputStream(final File f) throws IOException { this(f, Pack200Strategy.IN_MEMORY); } /** * Decompresses the given file using the given strategy to cache * the results. * * @param f the file to decompress * @param mode the strategy to use */ public Pack200CompressorInputStream(final File f, final Pack200Strategy mode) throws IOException { this(null, f, mode, null); } /** * Decompresses the given file, caching the decompressed data in * memory and using the given properties. * * @param f the file to decompress * @param props Pack200 properties to use */ public Pack200CompressorInputStream(final File f, final Map props) throws IOException { this(f, Pack200Strategy.IN_MEMORY, props); } /** * Decompresses the given file using the given strategy to cache * the results and the given properties. * * @param f the file to decompress * @param mode the strategy to use * @param props Pack200 properties to use */ public Pack200CompressorInputStream(final File f, final Pack200Strategy mode, final Map props) throws IOException { this(null, f, mode, props); } private Pack200CompressorInputStream(final InputStream in, final File f, final Pack200Strategy mode, final Map props) throws IOException { originalInput = in; streamBridge = mode.newStreamBridge(); JarOutputStream jarOut = new JarOutputStream(streamBridge); Pack200.Unpacker u = Pack200.newUnpacker(); if (props != null) { u.properties().putAll(props); } if (f == null) { u.unpack(new FilterInputStream(in) { @Override public void close() { // unpack would close this stream but we // want to give the user code more control } }, jarOut); } else { u.unpack(f, jarOut); } jarOut.close(); } @Override public int read() throws IOException { return streamBridge.getInput().read(); } @Override public int read(byte[] b) throws IOException { return streamBridge.getInput().read(b); } @Override public int read(byte[] b, int off, int count) throws IOException { return streamBridge.getInput().read(b, off, count); } @Override public int available() throws IOException { return streamBridge.getInput().available(); } @Override public boolean markSupported() { try { return streamBridge.getInput().markSupported(); } catch (IOException ex) { return false; } } @Override public void mark(int limit) { try { streamBridge.getInput().mark(limit); } catch (IOException ex) { throw new RuntimeException(ex); } } @Override public void reset() throws IOException { streamBridge.getInput().reset(); } @Override public long skip(long count) throws IOException { return streamBridge.getInput().skip(count); } @Override public void close() throws IOException { try { streamBridge.stop(); } finally { if (originalInput != null) { originalInput.close(); } } } private static final byte[] CAFE_DOOD = new byte[] { (byte) 0xCA, (byte) 0xFE, (byte) 0xD0, (byte) 0x0D }; private static final int SIG_LENGTH = CAFE_DOOD.length; /** * Checks if the signature matches what is expected for a pack200 * file (0xCAFED00D). 
* * @param signature * the bytes to check * @param length * the number of bytes to check * @return true, if this stream is a pack200 compressed stream, * false otherwise */ public static boolean matches(byte[] signature, int length) { if (length < SIG_LENGTH) { return false; } for (int i = 0; i < SIG_LENGTH; i++) { if (signature[i] != CAFE_DOOD[i]) { return false; } } return true; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.pack200; import java.io.IOException; import java.io.OutputStream; import java.util.Map; import java.util.jar.JarInputStream; import java.util.jar.Pack200; import org.apache.commons.compress.compressors.CompressorOutputStream; import org.apache.commons.compress.utils.IOUtils; /** * An output stream that compresses using the Pack200 format. * * @NotThreadSafe * @since 1.3 */ public class Pack200CompressorOutputStream extends CompressorOutputStream { private boolean finished = false; private final OutputStream originalOutput; private final StreamBridge streamBridge; private final Map properties; /** * Compresses the given stream, caching the compressed data in * memory. * * @param out the stream to write to */ public Pack200CompressorOutputStream(final OutputStream out) throws IOException { this(out, Pack200Strategy.IN_MEMORY); } /** * Compresses the given stream using the given strategy to cache * the results. * * @param out the stream to write to * @param mode the strategy to use */ public Pack200CompressorOutputStream(final OutputStream out, final Pack200Strategy mode) throws IOException { this(out, mode, null); } /** * Compresses the given stream, caching the compressed data in * memory and using the given properties. * * @param out the stream to write to * @param props Pack200 properties to use */ public Pack200CompressorOutputStream(final OutputStream out, final Map props) throws IOException { this(out, Pack200Strategy.IN_MEMORY, props); } /** * Compresses the given stream using the given strategy to cache * the results and the given properties. 
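     *
     * <p>A sketch with hypothetical names; note the JAR is only packed once
     * the stream is finished or closed:</p>
     * <pre>{@code
     * OutputStream fos = new FileOutputStream("app.pack");
     * Pack200CompressorOutputStream pos =
     *     new Pack200CompressorOutputStream(fos, Pack200Strategy.TEMP_FILE);
     * // write the JAR's bytes to pos, then:
     * pos.close(); // runs the packer and writes the Pack200 output
     * }</pre>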
* * @param out the stream to write to * @param mode the strategy to use * @param props Pack200 properties to use */ public Pack200CompressorOutputStream(final OutputStream out, final Pack200Strategy mode, final Map props) throws IOException { originalOutput = out; streamBridge = mode.newStreamBridge(); properties = props; } @Override public void write(int b) throws IOException { streamBridge.write(b); } @Override public void write(byte[] b) throws IOException { streamBridge.write(b); } @Override public void write(byte[] b, int from, int length) throws IOException { streamBridge.write(b, from, length); } @Override public void close() throws IOException { finish(); try { streamBridge.stop(); } finally { originalOutput.close(); } } public void finish() throws IOException { if (!finished) { finished = true; Pack200.Packer p = Pack200.newPacker(); if (properties != null) { p.properties().putAll(properties); } JarInputStream ji = null; boolean success = false; try { p.pack(ji = new JarInputStream(streamBridge.getInput()), originalOutput); success = true; } finally { if (!success) { IOUtils.closeQuietly(ji); } } } } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.pack200; import java.io.IOException; /** * The different modes the Pack200 streams can use to wrap input and * output. * @since 1.3 */ public enum Pack200Strategy { /** Cache output in memory */ IN_MEMORY() { @Override StreamBridge newStreamBridge() { return new InMemoryCachingStreamBridge(); } }, /** Cache output in a temporary file */ TEMP_FILE() { @Override StreamBridge newStreamBridge() throws IOException { return new TempFileCachingStreamBridge(); } }; abstract StreamBridge newStreamBridge() throws IOException; }/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
*/ package org.apache.commons.compress.compressors.pack200; import java.io.File; import java.io.FileOutputStream; import java.io.IOException; import java.io.OutputStream; import java.util.HashMap; import java.util.Map; import java.util.jar.JarFile; import java.util.jar.JarOutputStream; import java.util.jar.Pack200; /** * Utility methods for Pack200. * * @ThreadSafe * @since 1.3 */ public class Pack200Utils { private Pack200Utils() { } /** * Normalizes a JAR archive in-place so it can be safely signed * and packed. * *

As stated in Pack200.Packer's * javadocs applying a Pack200 compression to a JAR archive will * in general make its signatures invalid. In order to prepare a * JAR for signing it should be "normalized" by packing and * unpacking it. This is what this method does. * * Note this method implicitly sets the segment length to * -1.

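A minimal sketch of calling this method (the file name is a placeholder, not from the original source):

import java.io.File;
import java.io.IOException;
import org.apache.commons.compress.compressors.pack200.Pack200Utils;

public class NormalizeExample {
    public static void main(String[] args) throws IOException {
        // Rewrites the archive in place with one pack/unpack round trip so
        // that a later signature survives Pack200 compression.
        Pack200Utils.normalize(new File("app.jar"));
    }
}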
* * @param jar the JAR archive to normalize */ public static void normalize(File jar) throws IOException { normalize(jar, jar, null); } /** * Normalizes a JAR archive in-place so it can be safely signed * and packed. * *

As stated in Pack200.Packer's * javadocs applying a Pack200 compression to a JAR archive will * in general make its signatures invalid. In order to prepare a * JAR for signing it should be "normalized" by packing and * unpacking it. This is what this method does.

* * @param jar the JAR archive to normalize * @param props properties to set for the pack operation. This * method will implicitly set the segment limit to -1. */ public static void normalize(File jar, Map props) throws IOException { normalize(jar, jar, props); } /** * Normalizes a JAR archive so it can be safely signed and packed. * *

As stated in Pack200.Packer's * javadocs applying a Pack200 compression to a JAR archive will * in general make its signatures invalid. In order to prepare a * JAR for signing it should be "normalized" by packing and * unpacking it. This is what this method does. * * This method does not replace the existing archive but creates * a new one. * * Note this method implicitly sets the segment length to * -1.

* * @param from the JAR archive to normalize * @param to the normalized archive */ public static void normalize(File from, File to) throws IOException { normalize(from, to, null); } /** * Normalizes a JAR archive so it can be safely signed and packed. * *

As stated in Pack200.Packer's * javadocs applying a Pack200 compression to a JAR archive will * in general make its signatures invalid. In order to prepare a * JAR for signing it should be "normalized" by packing and * unpacking it. This is what this method does. * * This method does not replace the existing archive but creates * a new one.

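A sketch of this property-taking variant (file names are placeholders; KEEP_FILE_ORDER is one of the standard java.util.jar.Pack200.Packer keys, used here purely for illustration):

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.jar.Pack200;
import org.apache.commons.compress.compressors.pack200.Pack200Utils;

public class NormalizeWithPropsExample {
    public static void main(String[] args) throws IOException {
        Map<String, String> props = new HashMap<String, String>();
        // The segment limit does not need to be set here; normalize()
        // forces it to -1 itself.
        props.put(Pack200.Packer.KEEP_FILE_ORDER, Pack200.Packer.TRUE);
        Pack200Utils.normalize(new File("in.jar"), new File("out.jar"), props);
    }
}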
* * @param from the JAR archive to normalize * @param to the normalized archive * @param props properties to set for the pack operation. This * method will implicitly set the segment limit to -1. */ public static void normalize(File from, File to, Map props) throws IOException { if (props == null) { props = new HashMap(); } props.put(Pack200.Packer.SEGMENT_LIMIT, "-1"); File f = File.createTempFile("commons-compress", "pack200normalize"); f.deleteOnExit(); try { OutputStream os = new FileOutputStream(f); JarFile j = null; try { Pack200.Packer p = Pack200.newPacker(); p.properties().putAll(props); p.pack(j = new JarFile(from), os); j = null; os.close(); os = null; Pack200.Unpacker u = Pack200.newUnpacker(); os = new JarOutputStream(new FileOutputStream(to)); u.unpack(f, (JarOutputStream) os); } finally { if (j != null) { j.close(); } if (os != null) { os.close(); } } } finally { f.delete(); } } }/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * Some portions of this file Copyright (c) 2004-2006 Intel Corportation * and licensed under the BSD license. */ package org.apache.commons.compress.compressors.snappy; import java.util.zip.Checksum; /** * A pure-java implementation of the CRC32 checksum that uses * the CRC32-C polynomial, the same polynomial used by iSCSI * and implemented on many Intel chipsets supporting SSE4.2. * *

This file is a copy of the implementation at the Apache Hadoop project.

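Since the class implements java.util.zip.Checksum it is a drop-in replacement for java.util.zip.CRC32, which uses a different polynomial. A small sketch, placed in the same package because the class is package-private:

package org.apache.commons.compress.compressors.snappy;

import java.util.zip.Checksum;

public class Crc32CExample {
    public static void main(String[] args) {
        byte[] data = "hello world".getBytes();
        Checksum crc = new PureJavaCrc32C();
        crc.update(data, 0, data.length);
        // getValue() un-flips the internal CRC and masks it to 32 bits.
        System.out.printf("CRC32C = 0x%08X%n", crc.getValue());
    }
}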
* @see "http://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/PureJavaCrc32C.java" * @NotThreadSafe * @since 1.7 */ class PureJavaCrc32C implements Checksum { /** the current CRC value, bit-flipped */ private int crc; /** Create a new PureJavaCrc32 object. */ public PureJavaCrc32C() { reset(); } public long getValue() { long ret = crc; return (~ret) & 0xffffffffL; } public void reset() { crc = 0xffffffff; } public void update(byte[] b, int off, int len) { int localCrc = crc; while(len > 7) { final int c0 =(b[off+0] ^ localCrc) & 0xff; final int c1 =(b[off+1] ^ (localCrc >>>= 8)) & 0xff; final int c2 =(b[off+2] ^ (localCrc >>>= 8)) & 0xff; final int c3 =(b[off+3] ^ (localCrc >>>= 8)) & 0xff; localCrc = (T[T8_7_start + c0] ^ T[T8_6_start + c1]) ^ (T[T8_5_start + c2] ^ T[T8_4_start + c3]); final int c4 = b[off+4] & 0xff; final int c5 = b[off+5] & 0xff; final int c6 = b[off+6] & 0xff; final int c7 = b[off+7] & 0xff; localCrc ^= (T[T8_3_start + c4] ^ T[T8_2_start + c5]) ^ (T[T8_1_start + c6] ^ T[T8_0_start + c7]); off += 8; len -= 8; } /* loop unroll - duff's device style */ switch(len) { case 7: localCrc = (localCrc >>> 8) ^ T[T8_0_start + ((localCrc ^ b[off++]) & 0xff)]; case 6: localCrc = (localCrc >>> 8) ^ T[T8_0_start + ((localCrc ^ b[off++]) & 0xff)]; case 5: localCrc = (localCrc >>> 8) ^ T[T8_0_start + ((localCrc ^ b[off++]) & 0xff)]; case 4: localCrc = (localCrc >>> 8) ^ T[T8_0_start + ((localCrc ^ b[off++]) & 0xff)]; case 3: localCrc = (localCrc >>> 8) ^ T[T8_0_start + ((localCrc ^ b[off++]) & 0xff)]; case 2: localCrc = (localCrc >>> 8) ^ T[T8_0_start + ((localCrc ^ b[off++]) & 0xff)]; case 1: localCrc = (localCrc >>> 8) ^ T[T8_0_start + ((localCrc ^ b[off++]) & 0xff)]; default: /* nothing */ } // Publish crc out to object crc = localCrc; } final public void update(int b) { crc = (crc >>> 8) ^ T[T8_0_start + ((crc ^ b) & 0xff)]; } // CRC polynomial tables generated by: // java -cp build/test/classes/:build/classes/ \ // org.apache.hadoop.util.TestPureJavaCrc32\$Table 82F63B78 private static final int T8_0_start = 0*256; private static final int T8_1_start = 1*256; private static final int T8_2_start = 2*256; private static final int T8_3_start = 3*256; private static final int T8_4_start = 4*256; private static final int T8_5_start = 5*256; private static final int T8_6_start = 6*256; private static final int T8_7_start = 7*256; private static final int[] T = new int[] { /* T8_0 */ 0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB, 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24, 0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384, 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B, 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35, 0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA, 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A, 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595, 0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957, 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198, 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 
0x85EFBE38, 0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7, 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789, 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46, 0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6, 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829, 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93, 0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C, 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B, 0xB4091BFF, 0x466298FC, 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033, 0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D, 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982, 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622, 0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED, 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F, 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0, 0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540, 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F, 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1, 0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E, 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E, 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351, /* T8_1 */ 0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB, 0x69CF5132, 0x7A6DC945, 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21, 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD, 0x3FC5F181, 0x2C6769F6, 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4, 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92, 0xCB1E630B, 0xD8BCFB7C, 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B, 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47, 0xE29F20BA, 0xF13DB8CD, 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF, 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28, 0x298143B1, 0x3A23DBC6, 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2, 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E, 0xFF17C604, 0xECB55E73, 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41, 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17, 0x0BCC548E, 0x186ECCF9, 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C, 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0, 0x5DC6F43D, 0x4E646C4A, 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78, 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD, 0xE9537434, 0xFAF1EC43, 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27, 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB, 0xBF59D487, 0xACFB4CF0, 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2, 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94, 
0x4B82460D, 0x5820DE7A, 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260, 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC, 0x66D73941, 0x7575A136, 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004, 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3, 0xADC95A4A, 0xBE6BC23D, 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059, 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185, 0x844819FB, 0x97EA818C, 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE, 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8, 0x70938B71, 0x63311306, 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3, 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F, 0x26992BC2, 0x353BB3B5, 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287, 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556, 0x6D1B6DCF, 0x7EB9F5B8, 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC, 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600, 0x3B11CD7C, 0x28B3550B, 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439, 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F, 0xCFCA5FF6, 0xDC68C781, 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766, 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA, 0xE64B1C47, 0xF5E98430, 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502, 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5, 0x2D557F4C, 0x3EF7E73B, 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F, 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483, /* T8_2 */ 0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664, 0xD1B1F617, 0x74F06469, 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6, 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC, 0x70A27D8A, 0xD5E3EFF4, 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3, 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B, 0x9942B558, 0x3C032726, 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67, 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D, 0xD915C5D1, 0x7C5457AF, 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8, 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA, 0x40577089, 0xE516E2F7, 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828, 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32, 0xC76580D9, 0x622412A7, 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0, 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878, 0x2E85480B, 0x8BC4DA75, 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20, 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A, 0x8F96C396, 0x2AD751E8, 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF, 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9, 0xF7908DDA, 0x52D11FA4, 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B, 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161, 0x56830647, 0xF3C29439, 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E, 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6, 0xBF63CE95, 0x1A225CEB, 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730, 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A, 0xB3764986, 0x1637DBF8, 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF, 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD, 0x2A34FCDE, 0x8F756EA0, 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F, 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065, 0x6A638C57, 0xCF221E29, 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 
0x51FCBA33, 0xBBD27A40, 0x1E93E83E, 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6, 0x83834485, 0x26C2D6FB, 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE, 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4, 0x2290CF18, 0x87D15D66, 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71, 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE, 0x9DF3018D, 0x38B293F3, 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C, 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36, 0x3CE08A10, 0x99A1186E, 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79, 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1, 0xD50042C2, 0x7041D0BC, 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD, 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7, 0x9557324B, 0x3016A035, 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622, 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760, 0x0C158713, 0xA954156D, 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2, 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8, /* T8_3 */ 0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B, 0xC4451272, 0x1900B8CA, 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF, 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C, 0xE964B13D, 0x34211B85, 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7, 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990, 0xDB65C0A9, 0x06206A11, 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2, 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41, 0x2161776D, 0xFC24DDD5, 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7, 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD, 0xFA04B7C4, 0x27411D7C, 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69, 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A, 0xABA65FE7, 0x76E3F55F, 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D, 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A, 0x99A72E73, 0x44E284CB, 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3, 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610, 0xB4868D3C, 0x69C32784, 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6, 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027, 0xB8C6591E, 0x6583F3A6, 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3, 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040, 0x95E7FA51, 0x48A250E9, 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B, 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC, 0xA7E68BC5, 0x7AA3217D, 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006, 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5, 0xA4E4AAD9, 0x79A10061, 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213, 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349, 0x7F816A70, 0xA2C4C0C8, 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD, 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E, 0x8585DDB4, 0x58C0770C, 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E, 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519, 0xB784AC20, 0x6AC10698, 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0, 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443, 0x9AA50F6F, 0x47E0A5D7, 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5, 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93, 0x3D4384AA, 0xE0062E12, 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07, 
0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4, 0x106227E5, 0xCD278D5D, 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F, 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48, 0x22635671, 0xFF26FCC9, 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A, 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99, 0xD867E1B5, 0x05224B0D, 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F, 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825, 0x0302211C, 0xDE478BA4, 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1, 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842, /* T8_4 */ 0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C, 0x906761E8, 0xA8760E44, 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65, 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5, 0x8F2261D3, 0xB7330E7F, 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97, 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E, 0xDA220BAA, 0xE2336406, 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3, 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13, 0xDECFBEC6, 0xE6DED16A, 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082, 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598, 0x04EDB56C, 0x3CFCDAC0, 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1, 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151, 0x37516AAE, 0x0F400502, 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA, 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023, 0x625100D7, 0x5A406F7B, 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89, 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539, 0x7D1400EC, 0x45056F40, 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8, 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5, 0xBC9EBE11, 0x848FD1BD, 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C, 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C, 0xA3DBBE2A, 0x9BCAD186, 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E, 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7, 0xF6DBD453, 0xCECABBFF, 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8, 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18, 0xABC5DECD, 0x93D4B161, 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089, 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593, 0x71E7D567, 0x49F6BACB, 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA, 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A, 0x750A600B, 0x4D1B0FA7, 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F, 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86, 0x200A0A72, 0x181B65DE, 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C, 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C, 0x3F4F0A49, 0x075E65E5, 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D, 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE, 0xC994DE1A, 0xF185B1B6, 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497, 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27, 0xD6D1DE21, 0xEEC0B18D, 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065, 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC, 0x83D1B458, 0xBBC0DBF4, 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51, 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1, 0x873C0134, 0xBF2D6E98, 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70, 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 
0x854ADB82, 0x2D3CD5C6, 0x152DBA6A, 0x5D1E0A9E, 0x650F6532, 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013, 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3, /* T8_5 */ 0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E, 0x697997B4, 0x8649FCAD, 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5, 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2, 0xC00C303E, 0x2F3C5B27, 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93, 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F, 0xC973BF95, 0x2643D48C, 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57, 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20, 0xE5F20E92, 0x0AC2658B, 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F, 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD, 0x2C81B107, 0xC3B1DA1E, 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576, 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201, 0x0E045BEB, 0xE13430F2, 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746, 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A, 0x077BD440, 0xE84BBF59, 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F, 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778, 0xAE0E73CA, 0x413E18D3, 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67, 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108, 0xE289DAD2, 0x0DB9B1CB, 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3, 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4, 0x4BFC7D58, 0xA4CC1641, 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5, 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929, 0x4283F2F3, 0xADB399EA, 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C, 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B, 0x7C0EAFC9, 0x933EC4D0, 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364, 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86, 0xB57D105C, 0x5A4D7B45, 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D, 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A, 0x99FCA15B, 0x76CCCA42, 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6, 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A, 0x90832EF0, 0x7FB345E9, 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF, 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8, 0x39F6897A, 0xD6C6E263, 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7, 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053, 0x7B757B89, 0x94451090, 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8, 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F, 0xD200DC03, 0x3D30B71A, 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE, 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872, 0xDB7F53A8, 0x344F38B1, 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A, 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D, 0xF7FEE2AF, 0x18CE89B6, 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02, 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0, 0x3E8D5D3A, 0xD1BD3623, 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B, 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C, /* T8_6 */ 0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919, 0x75E69C41, 0x1DE5B089, 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B, 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA, 0x9C5BFAA6, 0xF458D66E, 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F, 0xD27607F5, 
0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC, 0xA7909BB4, 0xCF93B77C, 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5, 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334, 0x73767EEE, 0x1B755226, 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67, 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002, 0xD4E6E55A, 0xBCE5C992, 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110, 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1, 0x7AB7077A, 0x12B42BB2, 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3, 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330, 0x417C6668, 0x297F4AA0, 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884, 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55, 0xA8C1008F, 0xC0C22C47, 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006, 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE, 0x320A1886, 0x5A09344E, 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC, 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D, 0xDBB77E61, 0xB3B452A9, 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8, 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B, 0xE07C1F73, 0x887F33BB, 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC, 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D, 0xBB43F3A7, 0xD340DF6F, 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E, 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B, 0x1CD36813, 0x74D044DB, 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59, 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988, 0xC8358D49, 0xA036A181, 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0, 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903, 0xF3FEEC5B, 0x9BFDC093, 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7, 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766, 0x1A438ABC, 0x7240A674, 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35, 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097, 0xFA3F95CF, 0x923CB907, 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185, 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454, 0x1382F328, 0x7B81DFE0, 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1, 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762, 0x2849923A, 0x404ABEF2, 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B, 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA, 0xFCAF7760, 0x94AC5BA8, 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9, 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C, 0x5B3FECD4, 0x333CC01C, 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E, 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F, /* T8_7 */ 0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A, 0xB3657823, 0xFA590504, 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3, 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE, 0x847609B4, 0xCD4A7493, 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0, 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224, 0x7528754D, 0x3C14086A, 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0, 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D, 0x4F3B6143, 0x06071C64, 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447, 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367, 0x3A13140E, 0x732F6929, 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E, 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3, 
0x1A00CB32, 0x533CB615, 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36, 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2, 0xEB5EB7CB, 0xA262CAEC, 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF, 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782, 0xDC4DC65C, 0x9571BB7B, 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358, 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1, 0xA465D688, 0xED59ABAF, 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18, 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75, 0x9376A71F, 0xDA4ADA38, 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B, 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F, 0x6228DBE6, 0x2B14A6C1, 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D, 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360, 0x763A92BE, 0x3F06EF99, 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA, 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A, 0x0312E7F3, 0x4A2E9AD4, 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63, 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E, 0x3901F3FD, 0x703D8EDA, 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9, 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D, 0xC85F8F04, 0x8163F223, 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20, 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D, 0xFF4CFE93, 0xB67083B4, 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97, 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C, 0x9D642575, 0xD4585852, 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5, 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88, 0xAA7754E2, 0xE34B29C5, 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6, 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72, 0x5B29281B, 0x1215553C, 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6, 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB, 0x613A3C15, 0x28064132, 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911, 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31, 0x14124958, 0x5D2E347F, 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8, 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5 }; } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.bzip2; /** * Random numbers for both the compress and decompress BZip2 classes. 
*/ final class Rand { private static final int[] RNUMS = { 619, 720, 127, 481, 931, 816, 813, 233, 566, 247, 985, 724, 205, 454, 863, 491, 741, 242, 949, 214, 733, 859, 335, 708, 621, 574, 73, 654, 730, 472, 419, 436, 278, 496, 867, 210, 399, 680, 480, 51, 878, 465, 811, 169, 869, 675, 611, 697, 867, 561, 862, 687, 507, 283, 482, 129, 807, 591, 733, 623, 150, 238, 59, 379, 684, 877, 625, 169, 643, 105, 170, 607, 520, 932, 727, 476, 693, 425, 174, 647, 73, 122, 335, 530, 442, 853, 695, 249, 445, 515, 909, 545, 703, 919, 874, 474, 882, 500, 594, 612, 641, 801, 220, 162, 819, 984, 589, 513, 495, 799, 161, 604, 958, 533, 221, 400, 386, 867, 600, 782, 382, 596, 414, 171, 516, 375, 682, 485, 911, 276, 98, 553, 163, 354, 666, 933, 424, 341, 533, 870, 227, 730, 475, 186, 263, 647, 537, 686, 600, 224, 469, 68, 770, 919, 190, 373, 294, 822, 808, 206, 184, 943, 795, 384, 383, 461, 404, 758, 839, 887, 715, 67, 618, 276, 204, 918, 873, 777, 604, 560, 951, 160, 578, 722, 79, 804, 96, 409, 713, 940, 652, 934, 970, 447, 318, 353, 859, 672, 112, 785, 645, 863, 803, 350, 139, 93, 354, 99, 820, 908, 609, 772, 154, 274, 580, 184, 79, 626, 630, 742, 653, 282, 762, 623, 680, 81, 927, 626, 789, 125, 411, 521, 938, 300, 821, 78, 343, 175, 128, 250, 170, 774, 972, 275, 999, 639, 495, 78, 352, 126, 857, 956, 358, 619, 580, 124, 737, 594, 701, 612, 669, 112, 134, 694, 363, 992, 809, 743, 168, 974, 944, 375, 748, 52, 600, 747, 642, 182, 862, 81, 344, 805, 988, 739, 511, 655, 814, 334, 249, 515, 897, 955, 664, 981, 649, 113, 974, 459, 893, 228, 433, 837, 553, 268, 926, 240, 102, 654, 459, 51, 686, 754, 806, 760, 493, 403, 415, 394, 687, 700, 946, 670, 656, 610, 738, 392, 760, 799, 887, 653, 978, 321, 576, 617, 626, 502, 894, 679, 243, 440, 680, 879, 194, 572, 640, 724, 926, 56, 204, 700, 707, 151, 457, 449, 797, 195, 791, 558, 945, 679, 297, 59, 87, 824, 713, 663, 412, 693, 342, 606, 134, 108, 571, 364, 631, 212, 174, 643, 304, 329, 343, 97, 430, 751, 497, 314, 983, 374, 822, 928, 140, 206, 73, 263, 980, 736, 876, 478, 430, 305, 170, 514, 364, 692, 829, 82, 855, 953, 676, 246, 369, 970, 294, 750, 807, 827, 150, 790, 288, 923, 804, 378, 215, 828, 592, 281, 565, 555, 710, 82, 896, 831, 547, 261, 524, 462, 293, 465, 502, 56, 661, 821, 976, 991, 658, 869, 905, 758, 745, 193, 768, 550, 608, 933, 378, 286, 215, 979, 792, 961, 61, 688, 793, 644, 986, 403, 106, 366, 905, 644, 372, 567, 466, 434, 645, 210, 389, 550, 919, 135, 780, 773, 635, 389, 707, 100, 626, 958, 165, 504, 920, 176, 193, 713, 857, 265, 203, 50, 668, 108, 645, 990, 626, 197, 510, 357, 358, 850, 858, 364, 936, 638 }; /** * Return the random number at a specific index. * * @param i the index * @return the random number */ static int rNums(int i){ return RNUMS[i]; } }/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
*/ package org.apache.commons.compress.compressors.snappy; import java.io.IOException; import java.io.InputStream; import org.apache.commons.compress.compressors.CompressorInputStream; import org.apache.commons.compress.utils.IOUtils; /** * CompressorInputStream for the raw Snappy format. * *

This implementation uses an internal buffer in order to handle * the back-references that are at the heart of the LZ77 algorithm. * The size of the buffer must be at least as big as the biggest * offset used in the compressed stream. The current version of the * Snappy algorithm as defined by Google works on 32k blocks and * doesn't contain offsets bigger than 32k; 32k is therefore the * default block size used by this class.

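For orientation, a small reading sketch, not from the original sources (the file name is a placeholder for a raw, unframed Snappy stream):

import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import org.apache.commons.compress.compressors.snappy.SnappyCompressorInputStream;

public class SnappyReadExample {
    public static void main(String[] args) throws IOException {
        SnappyCompressorInputStream in =
            new SnappyCompressorInputStream(new FileInputStream("data.snappy"));
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer, 0, buffer.length)) != -1) {
            result.write(buffer, 0, n);
        }
        in.close();
        // The constructor has already read the varint size header, so
        // getSize() reports the expected uncompressed length.
        System.out.println(result.size() == in.getSize()); // expect true
    }
}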
* * @see Snappy compressed format description * @since 1.7 */ public class SnappyCompressorInputStream extends CompressorInputStream { /** Mask used to determine the type of "tag" is being processed */ private static final int TAG_MASK = 0x03; /** Default block size */ public static final int DEFAULT_BLOCK_SIZE = 32768; /** Buffer to write decompressed bytes to for back-references */ private final byte[] decompressBuf; /** One behind the index of the last byte in the buffer that was written */ private int writeIndex; /** Index of the next byte to be read. */ private int readIndex; /** The actual block size specified */ private final int blockSize; /** The underlying stream to read compressed data from */ private final InputStream in; /** The size of the uncompressed data */ private final int size; /** Number of uncompressed bytes still to be read. */ private int uncompressedBytesRemaining; // used in no-arg read method private final byte[] oneByte = new byte[1]; private boolean endReached = false; /** * Constructor using the default buffer size of 32k. * * @param is * An InputStream to read compressed data from * * @throws IOException */ public SnappyCompressorInputStream(final InputStream is) throws IOException { this(is, DEFAULT_BLOCK_SIZE); } /** * Constructor using a configurable buffer size. * * @param is * An InputStream to read compressed data from * @param blockSize * The block size used in compression * * @throws IOException */ public SnappyCompressorInputStream(final InputStream is, final int blockSize) throws IOException { this.in = is; this.blockSize = blockSize; this.decompressBuf = new byte[blockSize * 3]; this.writeIndex = readIndex = 0; uncompressedBytesRemaining = size = (int) readSize(); } /** {@inheritDoc} */ @Override public int read() throws IOException { return read(oneByte, 0, 1) == -1 ? -1 : oneByte[0] & 0xFF; } /** {@inheritDoc} */ @Override public void close() throws IOException { in.close(); } /** {@inheritDoc} */ @Override public int available() { return writeIndex - readIndex; } /** * {@inheritDoc} */ @Override public int read(byte[] b, int off, int len) throws IOException { if (endReached) { return -1; } final int avail = available(); if (len > avail) { fill(len - avail); } int readable = Math.min(len, available()); if (readable == 0 && len > 0) { return -1; } System.arraycopy(decompressBuf, readIndex, b, off, readable); readIndex += readable; if (readIndex > blockSize) { slideBuffer(); } return readable; } /** * Try to fill the buffer with enough bytes to satisfy the current * read request. * * @param len the number of uncompressed bytes to read */ private void fill(int len) throws IOException { if (uncompressedBytesRemaining == 0) { endReached = true; } int readNow = Math.min(len, uncompressedBytesRemaining); while (readNow > 0) { final int b = readOneByte(); int length = 0; long offset = 0; switch (b & TAG_MASK) { case 0x00: length = readLiteralLength(b); if (expandLiteral(length)) { return; } break; case 0x01: /* * These elements can encode lengths between [4..11] bytes and * offsets between [0..2047] bytes. (len-4) occupies three bits * and is stored in bits [2..4] of the tag byte. The offset * occupies 11 bits, of which the upper three are stored in the * upper three bits ([5..7]) of the tag byte, and the lower * eight are stored in a byte following the tag byte. 
*/ length = 4 + ((b >> 2) & 0x07); offset = (b & 0xE0) << 3; offset |= readOneByte(); if (expandCopy(offset, length)) { return; } break; case 0x02: /* * These elements can encode lengths between [1..64] and offsets * from [0..65535]. (len-1) occupies six bits and is stored in * the upper six bits ([2..7]) of the tag byte. The offset is * stored as a little-endian 16-bit integer in the two bytes * following the tag byte. */ length = (b >> 2) + 1; offset = readOneByte(); offset |= readOneByte() << 8; if (expandCopy(offset, length)) { return; } break; case 0x03: /* * These are like the copies with 2-byte offsets (see previous * subsection), except that the offset is stored as a 32-bit * integer instead of a 16-bit integer (and thus will occupy * four bytes). */ length = (b >> 2) + 1; offset = readOneByte(); offset |= readOneByte() << 8; offset |= readOneByte() << 16; offset |= ((long) readOneByte()) << 24; if (expandCopy(offset, length)) { return; } break; } readNow -= length; uncompressedBytesRemaining -= length; } } /** * Slide buffer. * *

Move all bytes of the buffer after the first block down to * the beginning of the buffer.

*/ private void slideBuffer() { System.arraycopy(decompressBuf, blockSize, decompressBuf, 0, blockSize * 2); writeIndex -= blockSize; readIndex -= blockSize; } /* * For literals up to and including 60 bytes in length, the * upper six bits of the tag byte contain (len-1). The literal * follows immediately thereafter in the bytestream. For * longer literals, the (len-1) value is stored after the tag * byte, little-endian. The upper six bits of the tag byte * describe how many bytes are used for the length; 60, 61, 62 * or 63 for 1-4 bytes, respectively. The literal itself follows * after the length. */ private int readLiteralLength(int b) throws IOException { int length; switch (b >> 2) { case 60: length = readOneByte(); break; case 61: length = readOneByte(); length |= readOneByte() << 8; break; case 62: length = readOneByte(); length |= readOneByte() << 8; length |= readOneByte() << 16; break; case 63: length = readOneByte(); length |= readOneByte() << 8; length |= readOneByte() << 16; length |= (((long) readOneByte()) << 24); break; default: length = b >> 2; break; } return length + 1; } /** * Literals are uncompressed data stored directly in the byte stream. * * @param length * The number of bytes to read from the underlying stream * * @throws IOException * If the first byte cannot be read for any reason other than * end of file, or if the input stream has been closed, or if * some other I/O error occurs. * @return True if the decompressed data should be flushed */ private boolean expandLiteral(final int length) throws IOException { int bytesRead = IOUtils.readFully(in, decompressBuf, writeIndex, length); count(bytesRead); if (length != bytesRead) { throw new IOException("Premature end of stream"); } writeIndex += length; return writeIndex >= 2 * this.blockSize; } /** * Copies are references back into previous decompressed data, telling the * decompressor to reuse data it has previously decoded. They encode two * values: The offset, saying how many bytes back from the current position * to read, and the length, how many bytes to copy. Offsets of zero can be * encoded, but are not legal; similarly, it is possible to encode * backreferences that would go past the end of the block (offset > current * decompressed position), which is also nonsensical and thus not allowed. * * @param off * The offset backward from the end of the expanded stream * @param length * The number of bytes to copy * * @throws IOException * If the offset expands past the front of the decompression * buffer * @return True if the decompressed data should be flushed */ private boolean expandCopy(final long off, int length) throws IOException { if (off > blockSize) { throw new IOException("Offset is larger than block size"); } int offset = (int) off; if (offset == 1) { byte lastChar = decompressBuf[writeIndex - 1]; for (int i = 0; i < length; i++) { decompressBuf[writeIndex++] = lastChar; } } else if (length < offset) { System.arraycopy(decompressBuf, writeIndex - offset, decompressBuf, writeIndex, length); writeIndex += length; } else { int fullRotations = length / offset; int pad = length - (offset * fullRotations); while (fullRotations-- != 0) { System.arraycopy(decompressBuf, writeIndex - offset, decompressBuf, writeIndex, offset); writeIndex += offset; } if (pad > 0) { System.arraycopy(decompressBuf, writeIndex - offset, decompressBuf, writeIndex, pad); writeIndex += pad; } } return writeIndex >= 2 * this.blockSize; } /** * This helper method reads the next byte of data from the input stream.
The * value byte is returned as an int in the range 0 * to 255. If no byte is available because the end of the * stream has been reached, an Exception is thrown. * * @return The next byte of data * @throws IOException * EOF is reached or error reading the stream */ private int readOneByte() throws IOException { int b = in.read(); if (b == -1) { throw new IOException("Premature end of stream"); } count(1); return b & 0xFF; } /** * The stream starts with the uncompressed length (up to a maximum of 2^32 - * 1), stored as a little-endian varint. Varints consist of a series of * bytes, where the lower 7 bits are data and the upper bit is set iff there * are more bytes to be read. In other words, an uncompressed length of 64 * would be stored as 0x40, and an uncompressed length of 2097150 (0x1FFFFE) * would be stored as 0xFE 0xFF 0x7F. * * @return The size of the uncompressed data * * @throws IOException * Could not read a byte */ private long readSize() throws IOException { int index = 0; long sz = 0; int b = 0; do { b = readOneByte(); sz |= (b & 0x7f) << (index++ * 7); } while (0 != (b & 0x80)); return sz; } /** * Get the uncompressed size of the stream * * @return the uncompressed size */ public int getSize() { return size; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.pack200; import java.io.FilterOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; /** * Provides an InputStream to read all data written to this * OutputStream. * * @ThreadSafe * @since 1.3 */ abstract class StreamBridge extends FilterOutputStream { private InputStream input; private final Object INPUT_LOCK = new Object(); protected StreamBridge(OutputStream out) { super(out); } protected StreamBridge() { this(null); } /** * Provides the input view. */ InputStream getInput() throws IOException { synchronized (INPUT_LOCK) { if (input == null) { input = getInputView(); } } return input; } /** * Creates the input view. */ abstract InputStream getInputView() throws IOException; /** * Closes input and output and releases all associated resources. */ void stop() throws IOException { close(); synchronized (INPUT_LOCK) { if (input != null) { input.close(); input = null; } } } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.pack200; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; /** * StreamSwitcher that caches all data written to the output side in * a temporary file. * @since 1.3 */ class TempFileCachingStreamBridge extends StreamBridge { private final File f; TempFileCachingStreamBridge() throws IOException { f = File.createTempFile("commons-compress", "packtemp"); f.deleteOnExit(); out = new FileOutputStream(f); } @Override InputStream getInputView() throws IOException { out.close(); return new FileInputStream(f) { @Override public void close() throws IOException { super.close(); f.delete(); } }; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.xz; import java.io.IOException; import java.io.InputStream; import org.tukaani.xz.XZ; import org.tukaani.xz.SingleXZInputStream; import org.tukaani.xz.XZInputStream; import org.apache.commons.compress.compressors.CompressorInputStream; /** * XZ decompressor. * @since 1.4 */ public class XZCompressorInputStream extends CompressorInputStream { private final InputStream in; /** * Checks if the signature matches what is expected for a .xz file. * * @param signature the bytes to check * @param length the number of bytes to check * @return true if signature matches the .xz magic bytes, false otherwise */ public static boolean matches(byte[] signature, int length) { if (length < XZ.HEADER_MAGIC.length) { return false; } for (int i = 0; i < XZ.HEADER_MAGIC.length; ++i) { if (signature[i] != XZ.HEADER_MAGIC[i]) { return false; } } return true; } /** * Creates a new input stream that decompresses XZ-compressed data * from the specified input stream. This doesn't support * concatenated .xz files. * * @param inputStream where to read the compressed data * * @throws IOException if the input is not in the .xz format, * the input is corrupt or truncated, the .xz * headers specify options that are not supported * by this implementation, or the underlying * inputStream throws an exception */ public XZCompressorInputStream(InputStream inputStream) throws IOException { this(inputStream, false); } /** * Creates a new input stream that decompresses XZ-compressed data * from the specified input stream. 
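A short decompression sketch, not from the original sources; the file names are placeholders, and XZ for Java (org.tukaani.xz) must be on the classpath:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import org.apache.commons.compress.compressors.xz.XZCompressorInputStream;

public class XZReadExample {
    public static void main(String[] args) throws IOException {
        // true selects the concatenated mode described below
        XZCompressorInputStream xz = new XZCompressorInputStream(
            new FileInputStream("archive.tar.xz"), true);
        OutputStream os = new FileOutputStream("archive.tar");
        byte[] buffer = new byte[8192];
        int n;
        while ((n = xz.read(buffer, 0, buffer.length)) != -1) {
            os.write(buffer, 0, n);
        }
        os.close();
        xz.close();
    }
}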
* * @param inputStream where to read the compressed data * @param decompressConcatenated * if true, decompress until the end of the * input; if false, stop after the first .xz * stream and leave the input position to point * to the next byte after the .xz stream * * @throws IOException if the input is not in the .xz format, * the input is corrupt or truncated, the .xz * headers specify options that are not supported * by this implementation, or the underlying * inputStream throws an exception */ public XZCompressorInputStream(InputStream inputStream, boolean decompressConcatenated) throws IOException { if (decompressConcatenated) { in = new XZInputStream(inputStream); } else { in = new SingleXZInputStream(inputStream); } } @Override public int read() throws IOException { int ret = in.read(); count(ret == -1 ? -1 : 1); return ret; } @Override public int read(byte[] buf, int off, int len) throws IOException { int ret = in.read(buf, off, len); count(ret); return ret; } @Override public long skip(long n) throws IOException { return in.skip(n); } @Override public int available() throws IOException { return in.available(); } @Override public void close() throws IOException { in.close(); } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.xz; import java.io.IOException; import java.io.OutputStream; import org.tukaani.xz.LZMA2Options; import org.tukaani.xz.XZOutputStream; import org.apache.commons.compress.compressors.CompressorOutputStream; /** * XZ compressor. * @since 1.4 */ public class XZCompressorOutputStream extends CompressorOutputStream { private final XZOutputStream out; /** * Creates a new XZ compressor using the default LZMA2 options. * This is equivalent to XZCompressorOutputStream(outputStream, 6). * @param outputStream the stream to wrap * @throws IOException on error */ public XZCompressorOutputStream(OutputStream outputStream) throws IOException { out = new XZOutputStream(outputStream, new LZMA2Options()); } /** * Creates a new XZ compressor using the specified LZMA2 preset level. *

* The presets 0-3 are fast presets with medium compression. * The presets 4-6 are fairly slow presets with high compression. * The default preset is 6. *

* The presets 7-9 are like the preset 6 but use bigger dictionaries * and have higher compressor and decompressor memory requirements. * Unless the uncompressed size of the file exceeds 8 MiB, * 16 MiB, or 32 MiB, it is a waste of memory to use the * presets 7, 8, or 9, respectively. * @param outputStream the stream to wrap * @param preset the preset * @throws IOException on error */ public XZCompressorOutputStream(OutputStream outputStream, int preset) throws IOException { out = new XZOutputStream(outputStream, new LZMA2Options(preset)); } @Override public void write(int b) throws IOException { out.write(b); } @Override public void write(byte[] buf, int off, int len) throws IOException { out.write(buf, off, len); } /** * Flushes the encoder and calls outputStream.flush(). * All buffered pending data will then be decompressible from * the output stream. Calling this function very often may increase * the compressed file size a lot. */ @Override public void flush() throws IOException { out.flush(); } /** * Finishes compression without closing the underlying stream. * No more data can be written to this stream after finishing. */ public void finish() throws IOException { out.finish(); } @Override public void close() throws IOException { out.close(); } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.xz; import java.util.HashMap; import java.util.Map; import org.apache.commons.compress.compressors.FileNameUtil; /** * Utility code for the xz compression format. * @ThreadSafe * @since 1.4 */ public class XZUtils { private static final FileNameUtil fileNameUtil; /** * XZ Header Magic Bytes begin an XZ file. * *

This is a copy of {@code org.tukaani.xz.XZ.HEADER_MAGIC} in * XZ for Java version 1.5.

*/ private static final byte[] HEADER_MAGIC = { (byte) 0xFD, '7', 'z', 'X', 'Z', '\0' }; static enum CachedAvailability { DONT_CACHE, CACHED_AVAILABLE, CACHED_UNAVAILABLE } private static volatile CachedAvailability cachedXZAvailability; static { Map<String, String> uncompressSuffix = new HashMap<String, String>(); uncompressSuffix.put(".txz", ".tar"); uncompressSuffix.put(".xz", ""); uncompressSuffix.put("-xz", ""); fileNameUtil = new FileNameUtil(uncompressSuffix, ".xz"); cachedXZAvailability = CachedAvailability.DONT_CACHE; try { Class.forName("org.osgi.framework.BundleEvent"); } catch (Exception ex) { setCacheXZAvailablity(true); } } /** Private constructor to prevent instantiation of this utility class. */ private XZUtils() { } /** * Checks if the signature matches what is expected for a .xz file. * *

This is more or less a copy of the version found in {@link * XZCompressorInputStream} but doesn't depend on the presence of * XZ for Java.

* * @param signature the bytes to check * @param length the number of bytes to check * @return true if signature matches the .xz magic bytes, false otherwise * @since 1.9 */ public static boolean matches(byte[] signature, int length) { if (length < HEADER_MAGIC.length) { return false; } for (int i = 0; i < HEADER_MAGIC.length; ++i) { if (signature[i] != HEADER_MAGIC[i]) { return false; } } return true; } /** * Are the classes required to support XZ compression available? * @since 1.5 * @return true if the classes required to support XZ compression are available */ public static boolean isXZCompressionAvailable() { final CachedAvailability cachedResult = cachedXZAvailability; if (cachedResult != CachedAvailability.DONT_CACHE) { return cachedResult == CachedAvailability.CACHED_AVAILABLE; } return internalIsXZCompressionAvailable(); } private static boolean internalIsXZCompressionAvailable() { try { XZCompressorInputStream.matches(null, 0); return true; } catch (NoClassDefFoundError error) { return false; } } /** * Detects common xz suffixes in the given filename. * * @param filename name of a file * @return {@code true} if the filename has a common xz suffix, * {@code false} otherwise */ public static boolean isCompressedFilename(String filename) { return fileNameUtil.isCompressedFilename(filename); } /** * Maps the given name of an xz-compressed file to the name that the * file should have after uncompression. Commonly used file type specific * suffixes like ".txz" are automatically detected and * correctly mapped. For example the name "package.txz" is mapped to * "package.tar". Any filename with the generic ".xz" suffix * (or any other generic xz suffix) is mapped to a name without that * suffix. If no xz suffix is detected, then the filename is returned * unmapped. * * @param filename name of a file * @return name of the corresponding uncompressed file */ public static String getUncompressedFilename(String filename) { return fileNameUtil.getUncompressedFilename(filename); } /** * Maps the given filename to the name that the file should have after * compression with xz. Common file types with custom suffixes for * compressed versions are automatically detected and correctly mapped. * For example the name "package.tar" is mapped to "package.txz". If no * custom mapping is applicable, then the default ".xz" suffix is appended * to the filename. * * @param filename name of a file * @return name of the corresponding compressed file */ public static String getCompressedFilename(String filename) { return fileNameUtil.getCompressedFilename(filename); } /** * Whether to cache the result of the XZ for Java check. * *

This defaults to {@code false} in an OSGi environment and {@code true} otherwise.

* @param doCache whether to cache the result * @since 1.9 */ public static void setCacheXZAvailablity(boolean doCache) { if (!doCache) { cachedXZAvailability = CachedAvailability.DONT_CACHE; } else if (cachedXZAvailability == CachedAvailability.DONT_CACHE) { final boolean hasXz = internalIsXZCompressionAvailable(); cachedXZAvailability = hasXz ? CachedAvailability.CACHED_AVAILABLE : CachedAvailability.CACHED_UNAVAILABLE; } } // only exists to support unit tests static CachedAvailability getCachedXZAvailability() { return cachedXZAvailability; } } /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.commons.compress.compressors.z; import java.io.IOException; import java.io.InputStream; import java.nio.ByteOrder; import org.apache.commons.compress.compressors.lzw.LZWInputStream; /** * Input stream that decompresses .Z files. * @NotThreadSafe * @since 1.7 */ public class ZCompressorInputStream extends LZWInputStream { private static final int MAGIC_1 = 0x1f; private static final int MAGIC_2 = 0x9d; private static final int BLOCK_MODE_MASK = 0x80; private static final int MAX_CODE_SIZE_MASK = 0x1f; private final boolean blockMode; private final int maxCodeSize; private long totalCodesRead = 0; public ZCompressorInputStream(InputStream inputStream) throws IOException { super(inputStream, ByteOrder.LITTLE_ENDIAN); int firstByte = (int) in.readBits(8); int secondByte = (int) in.readBits(8); int thirdByte = (int) in.readBits(8); if (firstByte != MAGIC_1 || secondByte != MAGIC_2 || thirdByte < 0) { throw new IOException("Input is not in .Z format"); } blockMode = (thirdByte & BLOCK_MODE_MASK) != 0; maxCodeSize = thirdByte & MAX_CODE_SIZE_MASK; if (blockMode) { setClearCode(DEFAULT_CODE_SIZE); } initializeTables(maxCodeSize); clearEntries(); } private void clearEntries() { setTableSize((1 << 8) + (blockMode ? 1 : 0)); } /** * {@inheritDoc} *

This method is only protected for technical reasons * and is not part of Commons Compress' published API. It may * change or disappear without warning.

*/ @Override protected int readNextCode() throws IOException { int code = super.readNextCode(); if (code >= 0) { ++totalCodesRead; } return code; } private void reAlignReading() throws IOException { // "compress" works in multiples of 8 symbols, each codeBits bits long. // When codeBits changes, the remaining unused symbols in the current // group of 8 are still written out, in the old codeSize, // as garbage values (usually zeroes) that need to be skipped. long codeReadsToThrowAway = 8 - (totalCodesRead % 8); if (codeReadsToThrowAway == 8) { codeReadsToThrowAway = 0; } for (long i = 0; i < codeReadsToThrowAway; i++) { readNextCode(); } in.clearBitCache(); } /** * {@inheritDoc} *

This method is only protected for technical reasons * and is not part of Commons Compress' published API. It may * change or disappear without warning.

*/ @Override protected int addEntry(int previousCode, byte character) throws IOException { final int maxTableSize = 1 << getCodeSize(); int r = addEntry(previousCode, character, maxTableSize); if (getTableSize() == maxTableSize && getCodeSize() < maxCodeSize) { reAlignReading(); incrementCodeSize(); } return r; } /** * {@inheritDoc} *

This method is only protected for technical reasons * and is not part of Commons Compress' published API. It may * change or disappear without warning.

*/ @Override protected int decompressNextSymbol() throws IOException { // // table entry table entry // _____________ _____ // table entry / \ / \ // ____________/ \ \ // / / \ / \ \ // +---+---+---+---+---+---+---+---+---+---+ // | . | . | . | . | . | . | . | . | . | . | // +---+---+---+---+---+---+---+---+---+---+ // |<--------->|<------------->|<----->|<->| // symbol symbol symbol symbol // final int code = readNextCode(); if (code < 0) { return -1; } else if (blockMode && code == getClearCode()) { clearEntries(); reAlignReading(); resetCodeSize(); resetPreviousCode(); return 0; } else { boolean addedUnfinishedEntry = false; if (code == getTableSize()) { addRepeatOfPreviousCode(); addedUnfinishedEntry = true; } else if (code > getTableSize()) { throw new IOException(String.format("Invalid %d bit code 0x%x", getCodeSize(), code)); } return expandCodeToOutputStack(code, addedUnfinishedEntry); } } /** * Checks if the signature matches what is expected for a Unix compress file. * * @param signature * the bytes to check * @param length * the number of bytes to check * @return true, if this stream is a Unix compress compressed * stream, false otherwise * * @since 1.9 */ public static boolean matches(byte[] signature, int length) { return length > 3 && signature[0] == MAGIC_1 && signature[1] == (byte) MAGIC_2; } }
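A minimal usage sketch (the file name and buffer size below are illustrative assumptions, not part of the class): the static matches method can screen the two magic bytes before the stream is constructed, and the constructor then consumes and validates the full three-byte header itself.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.compress.compressors.z.ZCompressorInputStream;

public class ZReadSketch {
    public static void main(String[] args) throws IOException {
        // Peek at the header first; matches only inspects the 0x1f 0x9d magic.
        byte[] header = new byte[4];
        int n;
        try (InputStream fin = new FileInputStream("archive.Z")) {
            n = fin.read(header);
        }
        if (!ZCompressorInputStream.matches(header, n)) {
            throw new IOException("archive.Z is not in .Z format");
        }
        // Re-open and decompress the whole file.
        try (InputStream in = new ZCompressorInputStream(new FileInputStream("archive.Z"))) {
            byte[] buf = new byte[8192];
            int len;
            while ((len = in.read(buf)) != -1) {
                // consume buf[0..len)
            }
        }
    }
}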

Provides stream classes for compressing and decompressing streams using the BZip2 algorithm.

Provides stream classes that allow (de)compressing streams using the DEFLATE algorithm.

Provides stream classes for compressing and decompressing streams using the GZip algorithm.

The classes in this package are wrappers around {@link java.util.zip.GZIPInputStream java.util.zip.GZIPInputStream} and {@link java.util.zip.GZIPOutputStream java.util.zip.GZIPOutputStream}.
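A minimal round-trip sketch, assuming illustrative file names and payload; since these classes are thin wrappers, behavior is exactly that of the java.util.zip streams they delegate to.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;

public class GzipRoundTripSketch {
    public static void main(String[] args) throws IOException {
        // Compress: delegates to java.util.zip.GZIPOutputStream under the covers.
        try (GzipCompressorOutputStream out =
                 new GzipCompressorOutputStream(new FileOutputStream("data.gz"))) {
            out.write("hello, gzip".getBytes("UTF-8"));
        }
        // Decompress again and print the payload.
        try (GzipCompressorInputStream in =
                 new GzipCompressorInputStream(new FileInputStream("data.gz"))) {
            int b;
            while ((b = in.read()) != -1) {
                System.out.print((char) b);
            }
        }
    }
}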

Provides a stream class for decompressing streams using the "stand-alone" LZMA algorithm.

The class in this package is a wrapper around {@link org.tukaani.xz.LZMAInputStream org.tukaani.xz.LZMAInputStream}, which is provided by the public domain XZ for Java library.

In general you should prefer the more modern and robust XZ format over stand-alone LZMA compression.
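A minimal sketch of reading a stand-alone LZMA stream, assuming the LZMACompressorInputStream class of this package and an illustrative file name; XZ for Java must be on the class path.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.compress.compressors.lzma.LZMACompressorInputStream;

public class LzmaReadSketch {
    public static void main(String[] args) throws IOException {
        // Wraps org.tukaani.xz.LZMAInputStream internally.
        try (InputStream in =
                 new LZMACompressorInputStream(new FileInputStream("data.lzma"))) {
            byte[] buf = new byte[8192];
            int len;
            while ((len = in.read(buf)) != -1) {
                // consume buf[0..len)
            }
        }
    }
}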

Generic LZW implementation.

Provides a unified API and factories for dealing with compressed streams.
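A minimal sketch of the factory-based auto-detection, assuming an illustrative file name; the wrapped stream must support mark/reset so the factory can inspect the signature bytes, hence the BufferedInputStream.

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.compress.compressors.CompressorException;
import org.apache.commons.compress.compressors.CompressorInputStream;
import org.apache.commons.compress.compressors.CompressorStreamFactory;

public class FactorySketch {
    public static void main(String[] args) throws IOException, CompressorException {
        // The factory reads a few signature bytes, resets the stream, and
        // returns the matching decompressor.
        try (InputStream raw = new BufferedInputStream(new FileInputStream("data.unknown"));
             CompressorInputStream in =
                 new CompressorStreamFactory().createCompressorInputStream(raw)) {
            byte[] buf = new byte[8192];
            int len;
            while ((len = in.read(buf)) != -1) {
                // consume buf[0..len)
            }
        }
    }
}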

Provides stream classes for compressing and decompressing streams using the Pack200 algorithm used to compress Java archives.

The streams of this package only work on JAR archives, i.e. a {@link org.apache.commons.compress.compressors.pack200.Pack200CompressorOutputStream Pack200CompressorOutputStream} expects to be wrapped around a stream that a valid JAR archive will be written to and a {@link org.apache.commons.compress.compressors.pack200.Pack200CompressorInputStream Pack200CompressorInputStream} provides a stream to read from a JAR archive.

JAR archives compressed with Pack200 will in general be different from the original archive when decompressed again. For details see the API documentation of Pack200.

The streams of this package work on non-deflated streams, i.e. archives like those created with the --no-gzip option of the JDK's pack200 command line tool. If you want to work on deflated streams you must use an additional stream layer - for example by using Apache Commons Compress' gzip package.
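A minimal sketch of that layering, with illustrative file names for a pack200-then-gzip compressed archive: the gzip layer is unwrapped first, and the Pack200 stream then yields the bytes of the reconstructed JAR.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;
import org.apache.commons.compress.compressors.pack200.Pack200CompressorInputStream;

public class Pack200GzSketch {
    public static void main(String[] args) throws IOException {
        // Unwrap gzip first, then Pack200; the resulting stream yields a JAR.
        try (InputStream in = new Pack200CompressorInputStream(
                 new GzipCompressorInputStream(new FileInputStream("lib.jar.pack.gz")));
             OutputStream jar = new FileOutputStream("lib.jar")) {
            byte[] buf = new byte[8192];
            int len;
            while ((len = in.read(buf)) != -1) {
                jar.write(buf, 0, len);
            }
        }
    }
}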

The Pack200 API provided by the Java class library doesn't lend itself to real stream processing. Pack200CompressorInputStream will uncompress its input immediately and then provide an InputStream to a cached result. Likewise Pack200CompressorOutputStream will not write anything to the given OutputStream until finish or close is called - at which point the cached output written so far gets compressed.

Two different caching modes are available - "in memory", which is the default, and "temporary file". You should switch to the temporary file option if your archives are really big.
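A minimal sketch of selecting the temporary-file mode, assuming the Pack200Strategy enum of this package and an illustrative file name.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.compress.compressors.pack200.Pack200CompressorInputStream;
import org.apache.commons.compress.compressors.pack200.Pack200Strategy;

public class Pack200TempFileSketch {
    public static void main(String[] args) throws IOException {
        // TEMP_FILE caches the unpacked intermediate result on disk
        // instead of holding it in memory.
        try (InputStream in = new Pack200CompressorInputStream(
                 new FileInputStream("big.jar.pack"), Pack200Strategy.TEMP_FILE)) {
            byte[] buf = new byte[8192];
            int len;
            while ((len = in.read(buf)) != -1) {
                // consume the reconstructed JAR bytes
            }
        }
    }
}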

Given that there is always an intermediate result, the getBytesRead and getCount methods of Pack200CompressorInputStream are meaningless (should they count bytes read from the real stream or from the intermediate result?) and always return 0.

During development of the initial version several attempts were made to use a real streaming API, based for example on Piped(In|Out)putStream or explicit stream pumping like Commons Exec's InputStreamPumper, but they all failed because they rely on the output end being consumed completely, or else the (un)pack will block forever. Especially for Pack200InputStream it is very likely that it will be wrapped in a ZipArchiveInputStream, which will never read the archive completely as it is not interested in the ZIP central directory data at the end of the JAR archive.

Provides stream classes for decompressing streams using the Snappy algorithm.

The raw Snappy format, which only contains the compressed data, is supported by the SnappyCompressorInputStream class, while the so-called "framing format" is implemented by FramedSnappyCompressorInputStream. Note that there have been different versions of the framing format specification; the implementation in Commons Compress is based on the specification "Last revised: 2013-10-25".

Only the "framing format" can be auto-detected this means you have to speficy the format explicitly if you want to read a "raw" Snappy stream via CompressorStreamFactory.

Provides stream classes for compressing and decompressing streams using the XZ algorithm.

The classes in this package are wrappers around {@link org.tukaani.xz.XZInputStream org.tukaani.xz.XZInputStream} and {@link org.tukaani.xz.XZOutputStream org.tukaani.xz.XZOutputStream} provided by the public domain XZ for Java library.
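A minimal round-trip sketch using the constructors described above, with illustrative file names and payload; preset 6 is the default, and passing true for decompressConcatenated reads concatenated .xz streams until the end of the input.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.commons.compress.compressors.xz.XZCompressorInputStream;
import org.apache.commons.compress.compressors.xz.XZCompressorOutputStream;

public class XzRoundTripSketch {
    public static void main(String[] args) throws IOException {
        // Compress with the default preset, 6.
        try (XZCompressorOutputStream out =
                 new XZCompressorOutputStream(new FileOutputStream("data.xz"), 6)) {
            out.write("hello, xz".getBytes("UTF-8"));
        }
        // Decompress; true means concatenated .xz streams are read until EOF.
        try (XZCompressorInputStream in =
                 new XZCompressorInputStream(new FileInputStream("data.xz"), true)) {
            int b;
            while ((b = in.read()) != -1) {
                System.out.print((char) b);
            }
        }
    }
}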

Provides stream classes for decompressing streams using the "compress" algorithm used to write .Z files.
