commons-compress
ZipArchiveInputStream could forget the compression level has
changed under certain circumstances.
The example Expander class has been vulnerable to a path
traversal in the edge case that happens when the target
directory has a sibling directory and the name of the target
directory is a prefix of the sibling directory's name.
Changed the OSGi Import-Package to also optionally import
javax.crypto so encrypted archives can be read.
Changed various implementations of the close method to better
ensure all held resources get closed even if exceptions are
thrown while closing the stream.
ZipArchiveInputStream can now detect the APK Signing Block
used in signed Android APK files and treat it as an "end of
archive" marker.
The cpio streams didn't handle archives using a multi-byte
encoding properly.
It is now possible to specify the arguments of zstd-jni's
ZstdOutputStream constructors via Commons Compress as well.
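A minimal sketch of passing such an option through, assuming a
ZstdCompressorOutputStream overload that accepts a compression
level (the exact parameter list may differ):

  import java.io.FileOutputStream;
  import java.io.OutputStream;
  import java.nio.charset.StandardCharsets;
  import org.apache.commons.compress.compressors.zstandard.ZstdCompressorOutputStream;

  public class ZstdLevelExample {
      public static void main(String[] args) throws Exception {
          try (OutputStream fileOut = new FileOutputStream("data.zst");
               // assumed overload: forwards the zstd compression level (here 6) to zstd-jni
               ZstdCompressorOutputStream zOut = new ZstdCompressorOutputStream(fileOut, 6)) {
              zOut.write("hello".getBytes(StandardCharsets.UTF_8));
          }
      }
  }
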
ZipArchiveInputStream#read would silently return -1 on a
corrupted stored entry and even return > 0 after hitting the
end of the archive.
ArArchiveInputStream#read allowed reading from the stream
without opening an entry at all.
Removed the objenesis dependency from the pom as it is not
needed at all.
Fixed resource leak in ParallelScatterZipCreator#writeTo.
Fixed some code examples.
GitHub Pull Request #63.
Certain errors when parsing ZIP extra fields in corrupt
archives are now turned into ZipException; they used to
manifest as ArrayIndexOutOfBoundsException before.
The streams returned by ZipFile and most other decompressing
streams now provide information about the number of compressed
and uncompressed bytes read so far. This may be used to detect
a ZipBomb if the compression ratio exceeds a certain
threshold, for example.
For SevenZFile a new method returns the statistics for the
current entry.
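A minimal sketch of a ratio check built on these counters,
assuming the stream returned by ZipFile implements an
InputStreamStatistics-style interface with getCompressedCount
and getUncompressedCount; the threshold and the
drain-then-check flow are illustrative only (real code would
check the ratio periodically while reading):

  import java.io.File;
  import java.io.InputStream;
  import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
  import org.apache.commons.compress.archivers.zip.ZipFile;
  import org.apache.commons.compress.utils.IOUtils;
  import org.apache.commons.compress.utils.InputStreamStatistics;

  public class ZipBombCheck {
      private static final double MAX_RATIO = 100.0; // illustrative threshold

      public static void main(String[] args) throws Exception {
          try (ZipFile zip = new ZipFile(new File("suspicious.zip"))) {
              ZipArchiveEntry entry = zip.getEntries().nextElement();
              try (InputStream in = zip.getInputStream(entry)) {
                  IOUtils.toByteArray(in); // drain the entry
                  InputStreamStatistics stats = (InputStreamStatistics) in;
                  double ratio = (double) stats.getUncompressedCount()
                          / Math.max(1, stats.getCompressedCount());
                  if (ratio > MAX_RATIO) {
                      throw new IllegalStateException("possible zip bomb, ratio " + ratio);
                  }
              }
          }
      }
  }
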
Added a unit test that is supposed to fail if we break the
OSGi manifest entries again.
Add a new SkipShieldingInputStream class that can be used with
streams that throw an IOException when skip is invoked.
IOUtils.copy now verifies the buffer size is bigger than 0.
New constructors have been added to SevenZFile that accept
char[]s rather than byte[]s in order to avoid a common error
of using the wrong encoding when creating the byte[]. This
change may break source compatibility for client code that
uses one of the constructors expecting a password and passes
in null as password. We recommend changing the code to use a
constructor without a password argument.
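Example usage of the char[]-based constructor (the file name
and password are placeholders):

  import java.io.File;
  import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry;
  import org.apache.commons.compress.archivers.sevenz.SevenZFile;

  public class ReadEncrypted7z {
      public static void main(String[] args) throws Exception {
          char[] password = "secret".toCharArray(); // avoids the byte[]/encoding pitfall
          try (SevenZFile sevenZ = new SevenZFile(new File("archive.7z"), password)) {
              SevenZArchiveEntry entry;
              while ((entry = sevenZ.getNextEntry()) != null) {
                  System.out.println(entry.getName() + " " + entry.getSize());
              }
          }
      }
  }
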
Added a workaround for a bug in AdoptOpenJDK for S/390 to
BZip2CompressorInputStream.
ZipArchiveInputStream failed to read some files with stored
entries using a data descriptor.
Fixed the OSGi manifest entry for imports that had been broken
in 1.16.
Add read-only support for Zstandard compression based on the
Zstd-jni project.
Added auto-detection for Zstandard compressed streams.
Synchronized iteration over a synchronizedList in ParallelScatterZipCreator.
ZipFile could get stuck in an infinite loop when parsing ZIP
archives with certain strong encryption headers.
Replaced instanceof checks with a type marker in LZ77 support code.
Added write-support for Zstandard compression.
Added improved checks to detect corrupted bzip2 streams and
throw the expected IOException rather than obscure
RuntimeExceptions.
Updated XZ for Java dependency to 1.8 in order to pick up bug
fix to LZMA2InputStream's available method.
ZipArchiveEntry now exposes how the name or comment has been
determined when the entry was read.
Added read-only DEFLATE64 support to ZIP archives and as
stand-alone CompressorInputStream.
ZipFile.getInputStream will now always buffer the stream
internally in order to improve read performance.
Speed improvement for DEFLATE64 decompression.
Added read-only DEFLATE64 support to 7z archives.
Added a few extra sanity checks for the rarer compression
methods used in ZIP archives.
Simplified the special handling for the dummy byte required by
zlib when using java.util.zip.Inflater.
Various code cleanups.
GitHub Pull Request #61.
TarArchiveEntry's preserveLeadingSlashes constructor argument
has been renamed and can now also be used to preserve the
drive letter on Windows.
Make sure "version needed to extract" in local file header and
central directory of a ZIP archive agree with each other.
Also ensure the version is set to 2.0 if DEFLATE is used.
Don't use a data descriptor in ZIP archives when copying a raw
entry that already knows its size and CRC information.
Travis build redundantly repeats compilation and tests #43.
Added magic MANIFEST entry Automatic-Module-Name so the module
name will be org.apache.commons.compress when the jar is used
as an automatic module in Java 9.
The MANIFEST of 1.14 lacks an OSGi Import-Package for XZ for
Java.
BUILDING.md now passes the RAT check.
Added a new utility class FixedLengthBlockOutputStream that
can be used to ensure writing always happens in blocks of a
given size.
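A short sketch, assuming a constructor that takes the target
stream and the block size:

  import java.io.FileOutputStream;
  import java.nio.charset.StandardCharsets;
  import org.apache.commons.compress.utils.FixedLengthBlockOutputStream;

  public class BlockedWrite {
      public static void main(String[] args) throws Exception {
          // pad/flush everything in 512-byte blocks, e.g. for tape-like devices
          try (FixedLengthBlockOutputStream out =
                   new FixedLengthBlockOutputStream(new FileOutputStream("blocked.bin"), 512)) {
              out.write("payload shorter than one block".getBytes(StandardCharsets.UTF_8));
          } // closing the stream pads the final partial block
      }
  }
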
Made sure ChecksumCalculatingInputStream receives valid
checksum and input stream instances via the constructor.
TarArchiveOutputStream now verifies the block and record sizes
specified at construction time are compatible with the tar
specification. In particular 512 is the only record size
accepted and the block size must be a multiple of 512.
At the same time the default block size in
TarArchiveOutputStream has been changed from 10240 to 512
bytes.
It is now possible to specify/read custom PAX headers when
writing/reading tar archives.
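A hedged sketch of the write side, assuming an entry-level
addPaxHeader method as the API; the reading side would expose
the headers on the parsed TarArchiveEntry:

  import java.io.FileOutputStream;
  import java.nio.charset.StandardCharsets;
  import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
  import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

  public class CustomPaxHeader {
      public static void main(String[] args) throws Exception {
          byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
          try (TarArchiveOutputStream tarOut =
                   new TarArchiveOutputStream(new FileOutputStream("custom.tar"))) {
              TarArchiveEntry entry = new TarArchiveEntry("hello.txt");
              entry.setSize(data.length);
              entry.addPaxHeader("MYAPP.checksum", "abc123"); // assumed entry-level PAX header API
              tarOut.putArchiveEntry(entry);
              tarOut.write(data);
              tarOut.closeArchiveEntry();
          }
      }
  }
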
Fixed class names of CpioArchiveEntry and
CpioArchiveInputStream in various Javadocs.
The code of the extended timestamp zip extra field incorrectly
assumed the time was stored as unsigned 32-bit int and thus
created incorrect results for years after 2037.
Removed ZipEncoding code that became obsolete when we started
to require Java 5 as baseline long ago.
The tar package will no longer try to parse the major and
minor device numbers unless the entry represents a character
or block special file.
When reading tar headers with name fields containing embedded
NULs, the name will now be terminated at the first NUL byte.
Simplified TarArchiveOutputStream by replacing the internal
buffering with new class FixedLengthBlockOutputStream.
SnappyCompressorInputStream slides the window too early
leading to ArrayIndexOutOfBoundsExceptions for some streams.
Added write support for Snappy.
The blocksize for FramedSnappyCompressorInputStream can now be
configured as some IWA files seem to be using blocks larger
than the default 32k.
ZipArchiveEntry#isUnixSymlink now only returns true if the
corresponding link flag is the only file-type flag set.
Added support for LZ4 (block and frame format).
BZip2CompressorInputStream now uses BitInputStream internally.
Pull Request #13.
Fixed an integer overflow in CPIO's CRC calculation.
Pull Request #17.
Add static detect(InputStream in) to CompressorStreamFactory
and ArchiveStreamFactory.
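Example usage; detect needs a stream with mark/reset support,
so plain file streams should be wrapped in a
BufferedInputStream (the format names in the comments are
illustrative):

  import java.io.BufferedInputStream;
  import java.io.FileInputStream;
  import java.io.InputStream;
  import org.apache.commons.compress.archivers.ArchiveStreamFactory;
  import org.apache.commons.compress.compressors.CompressorStreamFactory;

  public class DetectFormat {
      public static void main(String[] args) throws Exception {
          try (InputStream in = new BufferedInputStream(new FileInputStream("unknown.bin"))) {
              String compressor = CompressorStreamFactory.detect(in); // e.g. "gz", "xz", "bzip2"
              System.out.println("compressor: " + compressor);
          }
          try (InputStream in = new BufferedInputStream(new FileInputStream("unknown.archive"))) {
              String archiver = ArchiveStreamFactory.detect(in); // e.g. "zip", "tar"
              System.out.println("archiver: " + archiver);
          }
      }
  }
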
Make unit tests work on Windows paths with spaces in their names.
Improved performance for concurrent reads from ZipFile when
reading from a file.
Added a way to limit amount of memory ZCompressorStream may
use.
Added a way to limit amount of memory LZMACompressorStream and
XZCompressorInputStream may use.
Internal location pointer in ZipFile could get incremented
even if nothing had been read.
Add Brotli decoder based on the Google Brotli library.
ZipEntry now exposes its data offset.
LZMACompressorOutputStream#flush would throw an exception
rather than be the NOP it promised to be.
Using ZipArchiveEntry's setAlignment it is now possible to
ensure the data offset of an entry starts at a file position
aligned to word or page boundaries.
A new extra field has been added for this purpose.
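A minimal sketch, assuming setAlignment takes the requested
alignment in bytes; alignment is mainly useful for STORED
entries that are later memory-mapped:

  import java.io.File;
  import java.nio.charset.StandardCharsets;
  import java.util.zip.ZipEntry;
  import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
  import org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream;

  public class AlignedZipEntry {
      public static void main(String[] args) throws Exception {
          try (ZipArchiveOutputStream zipOut = new ZipArchiveOutputStream(new File("aligned.zip"))) {
              ZipArchiveEntry entry = new ZipArchiveEntry("lib/native.so");
              entry.setMethod(ZipEntry.STORED); // keep the data uncompressed
              entry.setAlignment(4096);         // start the entry data on a page boundary
              byte[] data = "native code".getBytes(StandardCharsets.UTF_8);
              entry.setSize(data.length);
              zipOut.putArchiveEntry(entry);
              zipOut.write(data);
              zipOut.closeArchiveEntry();
          }
      }
  }
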
Update Java requirement from 6 to 7.
BitInputStream could return bad results when overflowing
internally - if two consecutive reads tried to read more than
64 bits.
Clarified which TarArchiveEntry methods are useless for
entries read from an archive.
ZipArchiveInputStream.closeEntry does not properly advance to
the next entry if there are junk bytes at the end of the data
section.
SevenZFile, SevenZOutputFile, ZipFile and
ZipArchiveOutputStream can now work on non-file resources if
they can be accessed via SeekableByteChannel.
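Example of reading a ZIP archive through a SeekableByteChannel
obtained from NIO (any channel implementation works, not only
channels backed by files):

  import java.nio.channels.SeekableByteChannel;
  import java.nio.file.Files;
  import java.nio.file.Paths;
  import java.nio.file.StandardOpenOption;
  import java.util.Enumeration;
  import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
  import org.apache.commons.compress.archivers.zip.ZipFile;

  public class ZipFromChannel {
      public static void main(String[] args) throws Exception {
          try (SeekableByteChannel channel =
                   Files.newByteChannel(Paths.get("archive.zip"), StandardOpenOption.READ);
               ZipFile zip = new ZipFile(channel)) {
              Enumeration<ZipArchiveEntry> entries = zip.getEntries();
              while (entries.hasMoreElements()) {
                  System.out.println(entries.nextElement().getName());
              }
          }
      }
  }
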
Allow compressor extensions through a standard JRE ServiceLoader.
Allow archive extensions through a standard JRE ServiceLoader.
Add write support for the legacy LZMA format; this requires XZ
for Java 1.6.
Add write support for the legacy LZMA stream to 7z; this
requires XZ for Java 1.6.
Allow the clients of ParallelScatterZipCreator to provide
ZipArchiveEntryRequestSupplier.
ZipArchiveInputStream now throws an Exception if it encounters
a broken ZIP archive rather than signaling end-of-archive.
ScatterZipOutputStream didn't close the StreamCompressor
causing a potential resource leak.
Add a version-independent link to the API docs of the latest
release.
Update requirement from Java 5 to 6.
TarArchiveEntry wastefully allocates empty arrays.
SevenZFile.read() throws an IllegalStateException for empty entries.
Javadoc for BZip2CompressorInputStream(InputStream, boolean) should refer to IOException, not NullPointerException.
PureJavaCrc32C in the snappy package is now final, so it is
safe to call a virtual method inside the constructor.
TarArchiveInputStream failed to parse PAX headers that
included blank lines.
TarArchiveInputStream failed to parse PAX headers whose tar
entry name ended with a slash.
FramedSnappyCompressorInputStream now supports the dialect of
Snappy used by the IWA files contained within the zip archives
used in Apple's iWork 13 files.
ZipArchiveInputStream and CpioArchiveInputStream could throw
exceptions whose messages contained potentially corrupt entry
names read from a broken archive. They will now sanitize the
names by replacing unprintable characters and restricting the
length to 255 characters.
BZip2CompressorOutputStream no longer tries to finish the
output stream in finalize. This is a breaking change for code
that relied on the finalizer.
TarArchiveInputStream now supports reading global PAX headers.
The PAX headers for sparse entries written by star are now
applied.
GNU sparse files using one of the PAX formats are now
detected, but cannot be extracted.
ArArchiveInputStream can now read GNU extended names that are
terminated with a NUL byte rather than a linefeed.
New method SevenZFile.getEntries can be used to list the
contents of a 7z archive.
Native Memory Leak in Sevenz-DeflateDecoder.
When using Zip64Mode.Always also use ZIP64 extensions inside
the central directory.
GitHub Pull Request #10
SevenZFile will now only try to drain an entry's content when
moving on to the next entry if data is read from the next
entry. This should improve performance for applications that
try to skip over entries.
File names of tar archives using the xstar format are now
parsed properly.
Checksums of tars that pad the checksum field to the left are
now calculated properly.
ArArchiveInputStream failed to read past the first entry when
BSD long names have been used.
Added buffering for random access which speeds up 7Z support.
The checksum validation of TarArchiveEntry is now as strict as
the validation of GNU tar, which eliminates a few cases of
false positives of ArchiveStreamFactory.
This behavior is a breaking change since the check has become
more strict but any archive that fails the checksum test now
would also fail it when extracted with other tools and must be
considered an invalid archive.
ZipFile.getRawInputStream() is now part of the public API
SnappyCompressorInputStream and
FramedSnappyCompressorInputStream returned 0 at the end of the
stream under certain circumstances.
Allow byte-for-byte replication of Zip entries.
GitHub Pull Request #6.
TarArchiveEntry's preserveLeadingSlashes is now a property and used
on later calls to setName, too.
This behavior is a breaking change.
Adjusted unit test to updates in Java 8 and later that change
the logic of ZipEntry#getTime.
TarArchiveOutputStream will now recognize GNU long name and
link entries even if the special entry has a different name
than GNU tar uses itself. This seems to be the case for
archives created by star.
ArrayIndexOutOfBoundsException when InfoZIP type 7875 extra
fields are read from the central directory.
Added read-only support for bzip2 compression used inside of
ZIP archives.
GitHub Pull Request #4.
ArrayIndexOutOfBoundsException when ZIP extra fields are read
and the entry contains an UnparseableExtraField.
CompressorStreamFactory can now auto-detect DEFLATE streams
with ZLIB header.
TarArchiveInputStream can now read entries with group or
user ids > 0x80000000.
TarArchiveOutputStream can now write entries with group or
user ids > 0x80000000.
CompressorStreamFactory can now auto-detect LZMA streams.
TarArchiveEntry's constructor with a File and a String arg
didn't normalize the name.
ZipEncodingHelper no longer reads system properties directly
to determine the default charset.
BZip2CompressorInputStream#read would return -1 when asked to
read 0 bytes.
ArchiveStreamFactory fails to pass on the encoding when creating some streams.
* ArjArchiveInputStream
* CpioArchiveInputStream
* DumpArchiveInputStream
* JarArchiveInputStream
* TarArchiveInputStream
* JarArchiveOutputStream
Restore immutability/thread-safety to ArchiveStreamFactory.
The class is now immutable provided that the method setEntryEncoding is not used.
The class is thread-safe.
Restore immutability/thread-safety to CompressorStreamFactory.
The class is now immutable provided that the method setDecompressConcatenated is not used.
The class is thread-safe.
SevenZFile now throws the specific PasswordRequiredException
when it encounters an encrypted stream but no password has
been specified.
Improved error message when tar encounters a groupId that is
too big to write without using the STAR or POSIX format.
Added support for parallel compression. This low-level API allows
a client to build a zip/jar file by using the class
org.apache.commons.compress.archivers.zip.ParallelScatterZipCreator.
Zip documentation updated with further notes about parallel features.
Please note that some aspects of jar creation need to be
handled by client code and are not part of commons-compress for this
release.
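A minimal sketch of the parallel API, assuming
addArchiveEntry(ZipArchiveEntry, InputStreamSupplier) and
writeTo(ZipArchiveOutputStream) as the entry points:

  import java.io.ByteArrayInputStream;
  import java.io.File;
  import java.io.InputStream;
  import java.nio.charset.StandardCharsets;
  import java.util.zip.ZipEntry;
  import org.apache.commons.compress.archivers.zip.ParallelScatterZipCreator;
  import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
  import org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream;
  import org.apache.commons.compress.parallel.InputStreamSupplier;

  public class ParallelZip {
      public static void main(String[] args) throws Exception {
          ParallelScatterZipCreator creator = new ParallelScatterZipCreator();
          for (int i = 0; i < 3; i++) {
              final byte[] data = ("content " + i).getBytes(StandardCharsets.UTF_8);
              ZipArchiveEntry entry = new ZipArchiveEntry("file" + i + ".txt");
              entry.setMethod(ZipEntry.DEFLATED); // a method must be set for scatter entries
              creator.addArchiveEntry(entry, new InputStreamSupplier() {
                  @Override
                  public InputStream get() {
                      return new ByteArrayInputStream(data);
                  }
              });
          }
          try (ZipArchiveOutputStream out = new ZipArchiveOutputStream(new File("parallel.zip"))) {
              creator.writeTo(out); // blocks until all entries are compressed and copied
          }
      }
  }
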
Cut overall object instantiation in half by changing file
header generation algorithm, for a 10-15 percent performance
improvement.
Also extracted two private methods createLocalFileHeader
and createCentralFileHeader in ZipArchiveOutputStream.
These may have some interesting additional usages in the
near future.
ZipFile logs a warning in its finalizer when its constructor
has thrown an exception reading the file - for example if the
file doesn't exist.
New methods in ZipArchiveOutputStream and ZipFile allow
entries to be copied from one archive to another without
having to re-compress them.
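A sketch of copying all entries of one archive into another
without recompressing them, using getRawInputStream and
addRawArchiveEntry (file names are placeholders):

  import java.io.File;
  import java.io.InputStream;
  import java.util.Enumeration;
  import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
  import org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream;
  import org.apache.commons.compress.archivers.zip.ZipFile;

  public class CopyWithoutRecompress {
      public static void main(String[] args) throws Exception {
          try (ZipFile source = new ZipFile(new File("source.zip"));
               ZipArchiveOutputStream target = new ZipArchiveOutputStream(new File("target.zip"))) {
              Enumeration<ZipArchiveEntry> entries = source.getEntries();
              while (entries.hasMoreElements()) {
                  ZipArchiveEntry entry = entries.nextElement();
                  try (InputStream raw = source.getRawInputStream(entry)) {
                      // copies the already-compressed bytes verbatim, no inflate/deflate round trip
                      target.addRawArchiveEntry(entry, raw);
                  }
              }
          }
      }
  }
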
Moved the package
org.apache.commons.compress.compressors.z._internal_ to
org.apache.commons.compress.compressors.lzw and made it part
of the API that is officially supported. This will break
existing code that uses the old package.
Added support for DEFLATE streams without any gzip framing.
When reading 7z files unknown file properties and properties
of type kDummy are now ignored.
Expanding 7z archives using LZMA compression could cause an
EOFException.
Checking for XZ for Java may be expensive. The result will
now be cached outside of an OSGi environment. You can use the
new XZUtils#setCacheXZAvailability to override this default
behavior.
Long-Name and -link or PAX-header entries in TAR archives
always had the current time as last modification time, creating
archives that are different at the byte level each time an
archive was built.
The dependency on org.tukaani:xz is now marked as optional.
The snappy, ar and tar input streams might fail to read from a
non-buffered stream in certain cases.
CompressorStreamFactory can now auto-detect Unix compress
(".Z") streams.
IOUtils#skip might skip fewer bytes than requested even though
more could be read from the stream.
ArchiveStreams now validate there is a current entry before
reading or writing entry data.
ArjArchiveInputStream#canReadEntryData tested the current
entry of the stream rather than its argument.
ChangeSet#delete and deleteDir now properly deal with unnamed
entries.
Added a few null checks to improve robustness.
TarArchiveInputStream failed to read archives with empty
gid/uid fields.
TarArchiveInputStream now again throws an exception when it
encounters a truncated archive while reading from the last
entry.
Adapted TarArchiveInputStream#skip to the modified
IOUtils#skip method.
BZip2CompressorInputStream read fewer bytes than possible from
a truncated stream.
SevenZFile failed, claiming the dictionary was too large, when
archives used LZMA compression for headers and content and
certain non-default dictionary sizes.
CompressorStreamFactory.createCompressorInputStream with
explicit compression did not honor decompressConcatenated
GzipCompressorInputStream now provides access to the same
metadata that can be provided via GzipParameters when writing
a gzip stream.
TarArchiveInputStream will now read archives created by tar
implementations that encode big numbers by not adding a
trailing NUL.
ZipArchiveInputStream would return NUL bytes for the first 512
bytes of a STORED entry if it was the very first entry of the
archive.
When writing PAX/POSIX headers for TAR entries with
backslashes or certain non-ASCII characters in their name
TarArchiveOutputStream could fail.
ArchiveStreamFactory now throws a StreamingNotSupported - a
new subclass of ArchiveException - if it is asked to read from
or write to a stream and Commons Compress doesn't support
streaming for the format. This currently only applies to the
7z format.
SevenZOutputFile now supports chaining multiple
compression/encryption/filter methods and passing options to
the methods.
The (compression) method(s) can now be specified per entry in
SevenZOutputFile.
SevenZArchiveEntry "knows" which method(s) have been used to
write it to the archive.
The 7z package now supports the delta filter as method.
The 7z package now supports BCJ filters for several platforms.
You will need a version >= 1.5 of XZ for Java to read archives
using BCJ, though.
SevenZOutputFile#closeArchiveEntry throws an exception when
using LZMA2 compression on Java 8.
Read-Only support for Snappy compression.
7z reading of big 64-bit values could be wrong.
Read-Only support for .Z compressed files.
ZipFile and ZipArchiveInputStream now support reading entries compressed using the
SHRINKING method.
TarArchiveInputStream could fail to read an archive completely.
The time-setters in X5455_ExtendedTimestamp now set the
corresponding flags explicitly - i.e. they set the bit if the
value is non-null and reset it otherwise. This may cause
incompatibilities if you use setFlags to unset a bit and later
set the time to a non-null value - the flag will now be set.
GzipCompressorOutputStream now supports setting the compression level and the header metadata
(filename, comment, modification time, operating system and extra flags).
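Example of supplying the metadata via GzipParameters (the
values are placeholders):

  import java.io.FileOutputStream;
  import java.nio.charset.StandardCharsets;
  import java.util.zip.Deflater;
  import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;
  import org.apache.commons.compress.compressors.gzip.GzipParameters;

  public class GzipWithMetadata {
      public static void main(String[] args) throws Exception {
          GzipParameters params = new GzipParameters();
          params.setCompressionLevel(Deflater.BEST_COMPRESSION);
          params.setFilename("report.txt");
          params.setComment("nightly export");
          params.setModificationTime(System.currentTimeMillis());
          try (GzipCompressorOutputStream gzOut =
                   new GzipCompressorOutputStream(new FileOutputStream("report.txt.gz"), params)) {
              gzOut.write("report body".getBytes(StandardCharsets.UTF_8));
          }
      }
  }
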
ZipFile and ZipArchiveInputStream now support reading entries compressed using the IMPLODE method.
SevenZOutputFile would create invalid archives if more than
six empty files or directories were included.
ZipFile and the 7z file classes now implement Closeable and
can be used in try-with-resources constructs.
TarBuffer.tryToConsumeSecondEOFRecord could throw a
NullPointerException
Added support for 7z archives. Most compression algorithms
can be read and written, LZMA and encryption are only
supported when reading.
Added read-only support for ARJ archives that don't use
compression.
Parsing of zip64 extra fields has become more lenient in order
to be able to read archives created by DotNetZip and maybe
other archivers as well.
TAR will now properly read the names of symbolic links with
long names that use the GNU variant to specify the long file
name.
ZipFile#getInputStream could return null if the archive
contained duplicate entries.
The class now also provides two new methods to obtain all
entries of a given name rather than just the first one.
Readability patch to TarArchiveInputStream.
Performance improvements to TarArchiveInputStream, in
particular to the skip method.
CpioArchiveInputStream failed to read archives created by
Redline RPM.
TarArchiveOutputStream now properly handles link names that
are too long to fit into a traditional TAR header.
DumpArchiveInputStream now supports an encoding parameter that
can be used to specify the encoding of file names.
The CPIO streams now support an encoding parameter that can be
used to specify the encoding of file names.
Read-only support for LZMA standalone compression has been added.
The auto-detecting create*InputStream methods of Archive and
CompressorStreamFactory could fail to detect the format of
blocking input streams.
ZipEncodingHelper.isUTF8(String) does not check all UTF-8 aliases.
Typo in CompressorStreamFactory Javadoc
Improved exception message if a zip archive cannot be read
because of an unsupported compression method.
ArchiveStreamFactory has a setting for file name encoding that
sets up encoding for ZIP and TAR streams.
ArchiveStreamFactory's tar stream detection created false
positives for AIFF files.
TarArchiveEntry now has a method to verify its checksum.
XZ for Java didn't provide an OSGi bundle. Compress'
dependency on it has now been marked optional so Compress
itself can still be used in an OSGi context.
When specifying the encoding explicitly TarArchiveOutputStream
would write unreadable names in GNU mode or even cause errors
in POSIX mode for file names longer than 66 characters.
Writing TAR PAX headers failed if the generated entry name
ended with a "/".
ZipArchiveInputStream sometimes failed to provide input to the
Inflater when it needed it, leading to reads returning 0.
Split/spanned ZIP archives are now properly detected by
ArchiveStreamFactory but will cause an
UnsupportedZipFeatureException when read.
ZipArchiveInputStream now reads archives that start with a
"PK00" signature. Archives with this signatures are created
when the archiver was willing to split the archive but in the
end only needed a single segment - so didn't split anything.
TarArchiveEntry has a new constructor that allows setting
linkFlag and preserveLeadingSlashes at the same time.
ChangeSetPerformer has a new perform overload that uses a
ZipFile instance as input.
TarArchiveInputStream ignored the encoding for GNU long name
entries.
Garbage collection pressure has been reduced by reusing
temporary byte arrays in classes.
Can now handle zip extra field 0x5455 - Extended Timestamp.
Can now handle zip extra field 0x7875 - Info Zip New Unix Extra Field.
ZipShort, ZipLong, ZipEightByteInteger should implement Serializable
Better support for Unix symlinks in ZipFile entries.
ZipFile's initialization has been improved for non-Zip64
archives.
TarArchiveInputStream could leave the second EOF record
inside the stream it had just finished reading.
DumpArchiveInputStream no longer implicitly closes the
original input stream when it reaches the end of the
archive.
ZipArchiveInputStream now consumes the remainder of the
archive when getNextZipEntry returns null.
Unit tests could fail if the source tree was checked out to
a directory tree containing spaces.
Updated XZ for Java dependency to 1.2 as this version
provides proper OSGi manifest attributes.
Fixed a potential ArrayIndexOutOfBoundsException when
reading STORED entries from ZipArchiveInputStream.
CompressorStreamFactory can now be used without XZ for Java
being available.
CompressorStreamFactory has an option to create
decompressing streams that decompress the full input for
formats that support multiple concatenated streams.
Ported libbzip2's fallback sort algorithm to
BZip2CompressorOutputStream to speed up compression in certain
edge cases.
Using specially crafted inputs this can be used as a denial
of service attack. See the security reports page for details.
The tar package now allows the encoding of file names to be
specified and can optionally use PAX extension headers to
write non-ASCII file names.
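A minimal sketch, assuming the encoding is passed to the
TarArchiveOutputStream constructor and PAX headers for
non-ASCII names are enabled explicitly:

  import java.io.FileOutputStream;
  import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
  import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

  public class NonAsciiTarNames {
      public static void main(String[] args) throws Exception {
          try (TarArchiveOutputStream tarOut =
                   new TarArchiveOutputStream(new FileOutputStream("names.tar"), "UTF-8")) {
              // emit PAX extension headers instead of mangling names that don't fit the encoding
              tarOut.setAddPaxHeadersForNonAsciiNames(true);
              TarArchiveEntry entry = new TarArchiveEntry("r\u00e9sum\u00e9.txt");
              entry.setSize(0);
              tarOut.putArchiveEntry(entry);
              tarOut.closeArchiveEntry();
          }
      }
  }
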
The stream classes now write (or expect to read) archives that
use the platform's native encoding for file names. Apache
Commons Compress 1.3 used to strip everything but the lower
eight bits of each character which effectively only worked for
ASCII and ISO-8859-1 file names.
This new default behavior is a breaking change.
TarArchiveInputStream failed to parse PAX headers that
contained non-ASCII characters.
The tar package can now write archives that use star/GNU/BSD
extensions or use the POSIX/PAX variant to store numeric
values that don't fit into the traditional header fields.
Added a workaround for a bug in some tar implementations that add
a NUL byte as first byte in numeric header fields.
Added a workaround for a bug in WinZIP which uses backslashes
as path separators in Unicode Extra Fields.
ArrayOutOfBounds while decompressing bz2. Added test case - code already seems to have been fixed.
TarArchiveInputStream throws IllegalArgumentException instead of IOException
TarUtils.formatLongOctalOrBinaryBytes() assumes the field will be 12 bytes long
GNU Tar sometimes uses binary encoding for UID and GID
ArchiveStreamFactory.createArchiveInputStream would claim
short text files were TAR archives.
Support for the XZ format has been added.
BZip2CompressorInputStream now optionally supports reading of
concatenated .bz2 files.
GZipCompressorInputStream now optionally supports reading of
concatenated .gz files.
ZipFile didn't work properly for archives using unicode extra
fields rather than UTF-8 filenames and the EFS-Flag.
The tar package can now read archives that use star/GNU/BSD
extensions for files that are longer than 8 GByte as well as
archives that use the POSIX/PAX variant.
The tar package can now write archives that use star/GNU/BSD
extensions for files that are longer than 8 GByte as well as
archives that use the POSIX/PAX variant.
The tar package can now use the POSIX/PAX variant for writing
entries with names longer than 100 characters.
For corrupt archives ZipFile would throw a RuntimeException in
some cases and an IOException in others. It will now
consistently throw an IOException.
Support for the Pack200 format has been added.
Read-only support for the format used by the Unix dump(8) tool
has been added.
The ZIP package now supports Zip64 extensions.
The AR package now supports the BSD dialect of storing file
names longer than 16 chars (both reading and writing).
BZip2CompressorInputStream's getBytesRead method always
returned 0.
ZipArchiveInputStream and ZipArchiveOutputStream could leak
resources on some JDKs.
TarArchiveOutputStream's getBytesWritten method didn't count
correctly.
ZipArchiveInputStream could fail with a "Truncated ZIP" error
message for entries between 2 GByte and 4 GByte in size.
TarArchiveInputStream now detects sparse entries using the
oldgnu format and properly reports it cannot extract their
contents.
ZipArchiveEntry has a new method getRawName that provides the
original bytes that made up the name. This may allow user
code to detect the encoding.
The Javadoc for ZipArchiveInputStream#skip now matches the
implementation; the code has been made more defensive.
ArArchiveInputStream fails if entries contain only blanks for
userId or groupId.
ZipFile may leak resources on some JDKs.
ZipFile now implements finalize which closes the underlying
file.
Certain tar files not recognised by ArchiveStreamFactory.
BZip2CompressorInputStream throws IOException if underlying stream returns available() == 0.
Removed the check.
Calling close() on inputStream returned by CompressorStreamFactory.createCompressorInputStream()
does not close the underlying input stream.
TarArchiveEntry provides access to the flags that determine
whether it is an archived symbolic link, pipe or other
"uncommon" file system object.
TarArchiveOutputStream#finish now writes all buffered data to the stream
Move acknowledgements from NOTICE to README
TarArchiveEntry.parseTarHeader() includes the trailing space/NUL when parsing the octal size
Command-line interface to list archive contents.
Usage: java -jar commons-compress-n.m.jar archive-name [zip|tar|etc]
TarUtils.parseName does not properly handle characters outside the range 0-127
ArArchiveInputStream does not handle GNU extended filename records (//)
Tar implementation does not support PAX headers
Added support for reading PAX headers.
Note: does not support global PAX headers
ArchiveStreamFactory does not recognise tar files created by Ant
Support "ustar" prefix field, which is used when file paths are longer
than 100 characters.
Document that the name of a ZipArchiveEntry determines whether
an entry is considered a directory or not.
If you don't use the constructor with the File argument the entry's
name must end in a "/" in order for the entry to be known as a directory.
ZipArchiveInputStream can optionally extract data that used
the STORED compression method and a data descriptor.
Doing so in a stream is not safe in general, so you have to
explicitly enable the feature. By default the stream will
throw an exception if it encounters such an entry.
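A sketch of opting in, assuming the four-argument
ZipArchiveInputStream constructor whose last flag allows STORED
entries with a data descriptor:

  import java.io.BufferedInputStream;
  import java.io.FileInputStream;
  import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
  import org.apache.commons.compress.archivers.zip.ZipArchiveInputStream;

  public class ReadStoredWithDataDescriptor {
      public static void main(String[] args) throws Exception {
          // last argument opts in to STORED entries that use a data descriptor
          try (ZipArchiveInputStream zin = new ZipArchiveInputStream(
                  new BufferedInputStream(new FileInputStream("streamed.zip")),
                  "UTF-8", true, true)) {
              ZipArchiveEntry entry;
              while ((entry = zin.getNextZipEntry()) != null) {
                  System.out.println(entry.getName());
              }
          }
      }
  }
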
ZipArchiveInputStream will throw an exception if it detects an
entry that uses a data descriptor for a STORED entry since it
cannot reliably find the end of data for this "compression"
method.
ZipArchiveInputStream should now properly read archives that
use data descriptors but without the "unofficial" signature.
The ZIP classes will throw specialized exceptions if any
attempt is made to read or write data that uses zip features
not supported (yet).
ZipFile#getEntries returns entries in a predictable order -
the order they appear inside the central directory.
A new method getEntriesInPhysicalOrder returns entries in
order of the entry data, i.e. the order ZipArchiveInputStream
would see.
The Archive*Stream and ZipFile classes now have
can(Read|Write)EntryData methods that can be used to check
whether a given entry's data can be read/written.
The method currently returns false for ZIP archives if an
entry uses an unsupported compression method or encryption.
The ZIP classes now detect encrypted entries.
Move DOS/Java time conversions into Zip utility class.
ZipArchiveInputStream failed to update the number of bytes
read properly.
ArchiveInputStream has a new method getBytesRead that should
be preferred over getCount since the latter may truncate the
number of bytes read for big archives.
The cpio archives created by CpioArchiveOutputStream couldn't
be read by many existing native implementations because the
archives contained multiple entries with the same inode/device
combinations and weren't padded to a blocksize of 512 bytes.
ZipArchiveEntry, ZipFile and ZipArchiveInputStream are now
more lenient when parsing extra fields.
ZipArchiveInputStream does not show location in file where a problem occurred.
cpio is terribly slow.
Documented that buffered streams are needed for performance
Added autodetection of compression format to
CompressorStreamFactory.
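Example of letting the factory pick the decompressor; the input
must support mark/reset, hence the BufferedInputStream:

  import java.io.BufferedInputStream;
  import java.io.FileInputStream;
  import java.io.InputStream;
  import org.apache.commons.compress.compressors.CompressorInputStream;
  import org.apache.commons.compress.compressors.CompressorStreamFactory;

  public class AutoDetectDecompress {
      public static void main(String[] args) throws Exception {
          // the factory peeks at the magic bytes to choose gzip, bzip2, xz, ...
          try (InputStream in = new BufferedInputStream(new FileInputStream("data.unknown"));
               CompressorInputStream decompressed =
                   new CompressorStreamFactory().createCompressorInputStream(in)) {
              byte[] buffer = new byte[8192];
              while (decompressed.read(buffer) != -1) {
                  // process decompressed bytes
              }
          }
      }
  }
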
Improved exception message if the extra field data in ZIP
archives cannot be parsed.
Tar format unspecified - current support documented.
Improve ExceptionMessages in ArchiveStreamFactory
ZipArchiveEntry's equals method was broken for entries created
with the String-arg constructor. This led to broken ZIP
archives if two different entries had the same hash code.
ZipArchiveInputStream could repeatedly return 0 on read() when
the archive was truncated.
Tar archive entries holding the file name for names longer
than 100 characters in GNU longfile mode didn't properly
specify they'd be using the "oldgnu" extension.
A new constructor of TarArchiveEntry can create entries with
names that start with slashes - the default is to strip
leading slashes in order to create relative path names.
Delegate all read and write methods in GZip stream in order to
speed up operations.
ArchiveEntry now has a getLastModifiedDate method.
The ar and cpio streams now properly read and write last
modified times.
TarOutputStream can leave garbage at the end of the archive
Add a BZip2Utils class modelled after GZipUtils
Initial release
Updating the pom.xml for preparing a move to commons-proper