author     Aaron Giles <aaron@aarongiles.com>    2012-02-16 09:47:18 +0000
committer  Aaron Giles <aaron@aarongiles.com>    2012-02-16 09:47:18 +0000
commit     f0823886a66100e193d6eeb0402eb872a67fa07d (patch)
tree       a68b35942d63e5fcaf2311812dba976ad18cd5b6 /src/lib
parent     e6dad3759374373e3ce69a76869f16e83ba74df5 (diff)
Major CHD/chdman update. The CHD version number has been increased
from 4 to 5. This means any diff CHDs will no longer work. If you
absolutely need to keep the data for any existing ones you have, find
both the diff CHD and the original CHD for the game in question and
upgrade using these commands:

   rename diff\game.dif diff\game-old.dif
   chdman copy -i diff\game-old.dif -ip roms\game.chd -o diff\game.dif -op roms\game.chd -c none

Specifics regarding this change:

Defined a new CHD version 5. New features/behaviors of this version:
 - support for up to 4 codecs; each block can use 1 of the 4
 - new LZMA codec, which tends to do better than zlib overall
 - new FLAC codec, primarily used for CDs (but can be applied anywhere)
 - upgraded AVHuff codec now uses FLAC for encoding audio
 - new Huffman codec, used to catch more nearly-uncompressable blocks
 - compressed CHDs now use a compressed map for significant savings
 - CHDs now are aware of a "unit" size; each hunk holds 1 or more units
   (in general units map to sectors for hard disks/CDs)
 - diff'ing against a parent now diffs at the unit level, greatly
   improving compression

Rewrote and modernized chd.c. CHD versions prior to 3 are unsupported,
and version 3/4 CHDs are only supported for reading. Creating a new CHD
now leaves the file open. Added methods to read and write at the unit
and byte level, removing the need to handle this manually. Added
metadata access methods that pass astrings and dynamic_buffers to
simplify the interfaces. A companion class chd_compressor now implements
full multithreaded compression, analyzing and compressing multiple hunks
independently in parallel. Split the codec implementations out into a
separate file chdcodec.*

Updated harddisk.c and cdrom.c to rely on the caching/byte-level
read/write capabilities of the chd_file class. cdrom.c (and chdman) now
also pad CDs to 4-frame boundaries instead of hunk boundaries, ensuring
that the same SHA1 hashes are produced regardless of the hunk size.

Rewrote chdman.exe entirely, switching from positional parameters to
proper options. Use "chdman help" to get a list of commands, and
"chdman help <command>" to get help for any particular command. Many
redundant commands were removed now that additional flexibility is
available. Some basic mappings:

   Old: chdman -createblankhd <out.chd> <cyls> <heads> <secs>
   New: chdman createhd -o <out.chd> -chs <cyls>,<heads>,<secs>

   Old: chdman -createuncomphd <in.raw> <out.chd> ....
   New: chdman createhd -i <in.raw> -o <out.chd> -c none ....

   Old: chdman -verifyfix <in.chd>
   New: chdman verify -i <in.chd> -f

   Old: chdman -merge <parent.chd> <diff.chd> <out.chd>
   New: chdman copy -i <diff.chd> -ip <parent.chd> -o <out.chd>

   Old: chdman -diff <parent.chd> <compare.chd> <diff.chd>
   New: chdman copy -i <compare.chd> -o <diff.chd> -op <parent.chd>

   Old: chdman -update <in.chd> <out.chd>
   New: chdman copy -i <in.chd> -o <out.chd>

Added new core file coretmpl.h to hold core template classes. For now
just one class, dynamic_array<> is defined, which acts like an array of
a given object but which can be appended to and/or resized. Also defines
dynamic_buffer as dynamic_array<UINT8> for holding an arbitrary buffer
of bytes. Expect to see these used a lot.

Added new core helper hashing.c/.h which defines classes for each of the
common hashing methods and creator classes to wrap the computation of
these hashes. A future work item is to reimplement the core emulator
hashing code using these.

Split bit buffer helpers out into C++ classes and into their own public
header in bitstream.h.
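As a rough illustration of the dynamic_array<>/dynamic_buffer behavior described above, a minimal sketch follows (indexing, pointer conversion, append and resize, plus the byte-buffer typedef). The member names and growth policy here are assumptions for illustration only, not the actual coretmpl.h implementation:

    // minimal sketch of a dynamic_array<> with append/resize semantics
    #include <cassert>
    #include <cstddef>

    template <typename T>
    class dynamic_array
    {
    public:
        dynamic_array(int initial = 0) : m_array(NULL), m_count(0), m_allocated(0) { if (initial > 0) resize(initial); }
        ~dynamic_array() { delete[] m_array; }

        operator T *() { return m_array; }                // usable wherever a plain array/pointer is expected
        T &operator[](int index) { assert(index < m_count); return m_array[index]; }
        int count() const { return m_count; }

        // resize to an exact element count, reallocating if necessary
        void resize(int count)
        {
            if (count > m_allocated)
                expand(count);
            m_count = count;
        }

        // append a single element, growing the storage as needed
        void append(const T &element)
        {
            if (m_count == m_allocated)
                expand((m_allocated == 0) ? 16 : m_allocated * 2);
            m_array[m_count++] = element;
        }

    private:
        void expand(int newsize)
        {
            T *newarray = new T[newsize];
            for (int index = 0; index < m_count; index++)
                newarray[index] = m_array[index];
            delete[] m_array;
            m_array = newarray;
            m_allocated = newsize;
        }

        T * m_array;        // allocated storage
        int m_count;        // number of valid elements
        int m_allocated;    // allocated capacity
    };

    // dynamic_buffer holds an arbitrary buffer of bytes
    typedef unsigned char UINT8;                 // stand-in for the osdcore.h typedef
    typedef dynamic_array<UINT8> dynamic_buffer;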
Updated huffman.c/.h to C++, and changed the interface to make it more
flexible to use in nonstandard ways. Also added huffman compression of
the static tree for slightly better compression rates.

Created flac.c/.h as simplified C++ wrappers around the FLAC interface.
A future work item is to convert the samples sound device to a modern
device and leverage this for reading FLAC files.

Renamed avcomp.* to avhuff.*, updated to C++, and added support for FLAC
as the audio encoding mechanism. The old huffman audio is still
supported for decode only.

Added a variant of core_fload that loads to a dynamic_buffer.

Tweaked winwork.c a bit to not limit the maximum number of processors
unless the work queue was created with the WORK_QUEUE_FLAG_HIGH_FREQ
option. Further adjustments here are likely going to be necessary.

Fixed bug in aviio.c which caused errors when reading some AVI files.
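The flac.c/.h wrapper and the FLAC-based avhuff audio path are only described in prose above; the sketch below shows how the encoder wrapper is driven, mirroring the flac_encoder calls that appear in the new avhuff.c further down in this diff (stream-wide settings once, then set_block_size/reset/encode_interleaved/finish per channel). The helper function itself, its name and its signature are hypothetical and not part of this commit:

    // hypothetical caller sketch built around the flac_encoder calls used by avhuff_encoder
    #include "flac.h"       // the C++ FLAC wrapper added by this commit

    // Compress one channel of big-endian 16-bit samples into 'dest'.
    // Returns the compressed size in bytes, or 0 on failure.
    static UINT32 flac_compress_channel(flac_encoder &encoder, const INT16 *samples,
                                        int numsamples, bool swap_endian,
                                        UINT8 *dest, UINT32 destlimit)
    {
        // stream-wide parameters (avhuff_encoder sets these once in its constructor)
        encoder.set_sample_rate(48000);
        encoder.set_num_channels(1);
        encoder.set_strip_metadata(true);

        // per-block parameters: block size matches this frame's sample count,
        // and reset() points the encoder at the output buffer
        encoder.set_block_size(numsamples);
        encoder.reset(dest, destlimit);

        // feed the samples; swap_endian corrects for host byte order
        if (!encoder.encode_interleaved(samples, numsamples, swap_endian))
            return 0;

        // finish() flushes the stream and reports the number of bytes written
        return encoder.finish();
    }

In avhuff_encoder::encode_audio() this pattern runs once per channel: each channel becomes its own mono FLAC stream, the per-channel compressed size is written back into the stream header, and the audio huffman size field is set to 0xffff to flag FLAC instead of huffman deltas.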
Diffstat (limited to 'src/lib')
-rw-r--r--  src/lib/lib.mak               27
-rw-r--r--  src/lib/lib7z/LzmaDec.c       21
-rw-r--r--  src/lib/lib7z/Ppmd7.c          2
-rw-r--r--  src/lib/util/astring.h         2
-rw-r--r--  src/lib/util/avcomp.c        898
-rw-r--r--  src/lib/util/avcomp.h        140
-rw-r--r--  src/lib/util/avhuff.c        977
-rw-r--r--  src/lib/util/avhuff.h        231
-rw-r--r--  src/lib/util/aviio.c           2
-rw-r--r--  src/lib/util/bitmap.c          2
-rw-r--r--  src/lib/util/bitmap.h         16
-rw-r--r--  src/lib/util/bitstream.h     265
-rw-r--r--  src/lib/util/cdrom.c         224
-rw-r--r--  src/lib/util/cdrom.h           9
-rw-r--r--  src/lib/util/chd.c          5198
-rw-r--r--  src/lib/util/chd.h           656
-rw-r--r--  src/lib/util/chdcd.c         261
-rw-r--r--  src/lib/util/chdcd.h          25
-rw-r--r--  src/lib/util/chdcodec.c     1325
-rw-r--r--  src/lib/util/chdcodec.h      212
-rw-r--r--  src/lib/util/corefile.c       35
-rw-r--r--  src/lib/util/corefile.h        2
-rw-r--r--  src/lib/util/coretmpl.h      109
-rw-r--r--  src/lib/util/flac.c          598
-rw-r--r--  src/lib/util/flac.h          167
-rw-r--r--  src/lib/util/harddisk.c       61
-rw-r--r--  src/lib/util/harddisk.h        5
-rw-r--r--  src/lib/util/hashing.c       282
-rw-r--r--  src/lib/util/hashing.h       245
-rw-r--r--  src/lib/util/huffman.c      1841
-rw-r--r--  src/lib/util/huffman.h       217
-rw-r--r--  src/lib/util/tagmap.h          2
32 files changed, 7992 insertions, 6065 deletions
diff --git a/src/lib/lib.mak b/src/lib/lib.mak
index a52f1651a47..5cc276f529f 100644
--- a/src/lib/lib.mak
+++ b/src/lib/lib.mak
@@ -32,16 +32,19 @@ OBJDIRS += \
UTILOBJS = \
$(LIBOBJ)/util/astring.o \
- $(LIBOBJ)/util/avcomp.o \
+ $(LIBOBJ)/util/avhuff.o \
$(LIBOBJ)/util/aviio.o \
$(LIBOBJ)/util/bitmap.o \
$(LIBOBJ)/util/cdrom.o \
$(LIBOBJ)/util/chd.o \
$(LIBOBJ)/util/chdcd.o \
+ $(LIBOBJ)/util/chdcodec.o \
$(LIBOBJ)/util/corefile.o \
$(LIBOBJ)/util/corestr.o \
$(LIBOBJ)/util/coreutil.o \
+ $(LIBOBJ)/util/flac.o \
$(LIBOBJ)/util/harddisk.o \
+ $(LIBOBJ)/util/hashing.o \
$(LIBOBJ)/util/huffman.o \
$(LIBOBJ)/util/jedparse.o \
$(LIBOBJ)/util/md5.o \
@@ -73,7 +76,7 @@ EXPATOBJS = \
$(OBJ)/libexpat.a: $(EXPATOBJS)
-$(LIBOBJ)/expat/%.o: $(LIBSRC)/explat/%.c | $(OSPREBUILD)
+$(LIBOBJ)/expat/%.o: $(LIBSRC)/expat/%.c | $(OSPREBUILD)
@echo Compiling $<...
$(CC) $(CDEFS) $(CCOMFLAGS) $(CONLYFLAGS) -c $< -o $@
@@ -273,6 +276,7 @@ $(LIBOBJ)/libjpeg/%.o: $(LIBSRC)/libjpeg/%.c | $(OSPREBUILD)
$(CC) $(CDEFS) $(CCOMFLAGS) $(CONLYFLAGS) -I$(LIBSRC)/libjpeg -c $< -o $@
+
#-------------------------------------------------
# libflac library objects
#-------------------------------------------------
@@ -303,23 +307,12 @@ $(LIBOBJ)/libflac/%.o: $(LIBSRC)/libflac/libflac/%.c | $(OSPREBUILD)
$(CC) $(CDEFS) $(FLACOPTS) $(CONLYFLAGS) -I$(LIBSRC)/libflac/include -c $< -o $@
-# LIBFLACPPOBJS = \
-# $(LIBOBJ)/libflacpp/metadata.o \
-# $(LIBOBJ)/libflacpp/stream_decoder.o \
-# $(LIBOBJ)/libflacpp/stream_encoder.o
-
-# $(OBJ)/libflac++.a: $(LIBFLACPPOBJS)
-
-# $(LIBOBJ)/libflacpp/%.o: $(LIBSRC)/libflac/libflac++/%.cpp | $(OSPREBUILD)
-# @echo Compiling $<...
-# $(CC) $(CDEFS) $(FLACOPTS) $(CPPONLYFLAGS) -I$(LIBSRC)/libflac/include -c $< -o $@
-
#-------------------------------------------------
# lib7z library objects
#-------------------------------------------------
-7ZOPTS=-D_7ZIP_PPMD_SUPPPORT
+7ZOPTS=-D_7ZIP_PPMD_SUPPPORT -D_7ZIP_ST
LIB7ZOBJS = \
$(LIBOBJ)/lib7z/7zBuf.o \
@@ -331,6 +324,9 @@ LIB7ZOBJS = \
$(LIBOBJ)/lib7z/CpuArch.o \
$(LIBOBJ)/lib7z/LzmaDec.o \
$(LIBOBJ)/lib7z/Lzma2Dec.o \
+ $(LIBOBJ)/lib7z/LzmaEnc.o \
+ $(LIBOBJ)/lib7z/Lzma2Enc.o \
+ $(LIBOBJ)/lib7z/LzFind.o \
$(LIBOBJ)/lib7z/Bra.o \
$(LIBOBJ)/lib7z/Bra86.o \
$(LIBOBJ)/lib7z/Bcj2.o \
@@ -343,6 +339,3 @@ $(OBJ)/lib7z.a: $(LIB7ZOBJS)
$(LIBOBJ)/lib7z/%.o: $(LIBSRC)/lib7z/%.c | $(OSPREBUILD)
@echo Compiling $<...
$(CC) $(CDEFS) $(7ZOPTS) $(CONLYFLAGS) -I$(LIBSRC)/lib7z/ -c $< -o $@
-
-
-
diff --git a/src/lib/lib7z/LzmaDec.c b/src/lib/lib7z/LzmaDec.c
index 8c1a1486df1..a85edb7dfd0 100644
--- a/src/lib/lib7z/LzmaDec.c
+++ b/src/lib/lib7z/LzmaDec.c
@@ -967,6 +967,27 @@ SRes LzmaDec_Allocate(CLzmaDec *p, const Byte *props, unsigned propsSize, ISzAll
return SZ_OK;
}
+// why isn't there an interface to pass in the properties directly????
+SRes LzmaDec_Allocate_MAME(CLzmaDec *p, const CLzmaProps *propNew, ISzAlloc *alloc)
+{
+ SizeT dicBufSize;
+ RINOK(LzmaDec_AllocateProbs2(p, propNew, alloc));
+ dicBufSize = propNew->dicSize;
+ if (p->dic == 0 || dicBufSize != p->dicBufSize)
+ {
+ LzmaDec_FreeDict(p, alloc);
+ p->dic = (Byte *)alloc->Alloc(alloc, dicBufSize);
+ if (p->dic == 0)
+ {
+ LzmaDec_FreeProbs(p, alloc);
+ return SZ_ERROR_MEM;
+ }
+ }
+ p->dicBufSize = dicBufSize;
+ p->prop = *propNew;
+ return SZ_OK;
+}
+
SRes LzmaDecode(Byte *dest, SizeT *destLen, const Byte *src, SizeT *srcLen,
const Byte *propData, unsigned propSize, ELzmaFinishMode finishMode,
ELzmaStatus *status, ISzAlloc *alloc)
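
The LzmaDec_Allocate_MAME() entry point added above exists so a decoder can be set up directly from a CLzmaProps structure rather than from encoded property bytes. The following caller is a hypothetical sketch, not part of this commit; the property values and the allocator are placeholders:

    // hypothetical sketch: allocating an LZMA decoder from known properties
    #include "LzmaDec.h"

    static SRes setup_lzma_decoder(CLzmaDec *decoder, ISzAlloc *allocator, UInt32 dictsize)
    {
        CLzmaProps props;
        props.lc = 3;               // literal context bits (example value)
        props.lp = 0;               // literal position bits (example value)
        props.pb = 2;               // position bits (example value)
        props.dicSize = dictsize;   // dictionary size in bytes

        LzmaDec_Construct(decoder);                               // zero out dic/probs pointers
        return LzmaDec_Allocate_MAME(decoder, &props, allocator); // allocate probs and dictionary
    }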
diff --git a/src/lib/lib7z/Ppmd7.c b/src/lib/lib7z/Ppmd7.c
index 060d86d2e0f..bba4d06f0ab 100644
--- a/src/lib/lib7z/Ppmd7.c
+++ b/src/lib/lib7z/Ppmd7.c
@@ -2,7 +2,7 @@
2010-03-12 : Igor Pavlov : Public domain
This code is based on PPMd var.H (2001): Dmitry Shkarin : Public domain */
-#include <memory.h>
+#include <string.h>
#include "Ppmd7.h"
diff --git a/src/lib/util/astring.h b/src/lib/util/astring.h
index 09236de8f5d..665a64786fb 100644
--- a/src/lib/util/astring.h
+++ b/src/lib/util/astring.h
@@ -52,7 +52,7 @@
// TYPE DEFINITIONS
//**************************************************************************
-// derived class for C++
+// basic allocated string class
class astring
{
public:
diff --git a/src/lib/util/avcomp.c b/src/lib/util/avcomp.c
deleted file mode 100644
index 91fa483d16c..00000000000
--- a/src/lib/util/avcomp.c
+++ /dev/null
@@ -1,898 +0,0 @@
-/***************************************************************************
-
- avcomp.c
-
- Audio/video compression and decompression helpers.
-
-****************************************************************************
-
- Copyright Aaron Giles
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions are
- met:
-
- * Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in
- the documentation and/or other materials provided with the
- distribution.
- * Neither the name 'MAME' nor the names of its contributors may be
- used to endorse or promote products derived from this software
- without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
- IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
- INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
- SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
- STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
- IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- POSSIBILITY OF SUCH DAMAGE.
-
-****************************************************************************
-
- Each frame is compressed as a unit. The raw data is of the form:
- (all multibyte values are stored in big-endian format)
-
- +00 = 'chav' (4 bytes) - fixed header data to identify the format
- +04 = metasize (1 byte) - size of metadata in bytes (max=255 bytes)
- +05 = channels (1 byte) - number of audio channels
- +06 = samples (2 bytes) - number of samples per audio stream
- +08 = width (2 bytes) - width of video data
- +0A = height (2 bytes) - height of video data
- +0C = <metadata> - as raw bytes
- <audio stream 0> - as signed 16-bit samples
- <audio stream 1> - as signed 16-bit samples
- ...
- <video data> - as a raw array of 8-bit YUY data in (Cb,Y,Cr,Y) order
-
- When compressed, the data is stored as follows:
- (all multibyte values are stored in big-endian format)
-
- +00 = metasize (1 byte) - size of metadata in bytes
- +01 = channels (1 byte) - number of audio channels
- +02 = samples (2 bytes) - number of samples per audio stream
- +04 = width (2 bytes) - width of video data
- +06 = height (2 bytes) - height of video data
- +08 = audio huffman size (2 bytes) - size of audio huffman tables
- +0A = str0size (2 bytes) - compressed size of stream 0
- +0C = str1size (2 bytes) - compressed size of stream 1
- ...
- <metadata> - as raw data
- <audio huffman table> - Huffman table for audio decoding
- <audio stream 0 data> - Huffman-compressed deltas
- <audio stream 1 data> - Huffman-compressed deltas
- <...>
- <video huffman tables> - Huffman tables for video decoding
- <video data> - compressed data
-
-****************************************************************************
-
- Attempted techniques that have not been worthwhile:
-
- * Attempted to use integer DCTs from the IJG code; even the "slow"
- variants produce a lot of error and thus kill our compression ratio,
- since our compression is based on error not bitrate.
-
- * Tried various other predictors for the lossless video encoding, but
- none tended to give any significant gain over predicting the
- previous pixel.
-
-***************************************************************************/
-
-#include "avcomp.h"
-#include "huffman.h"
-#include "chd.h"
-
-#include <math.h>
-#include <stdlib.h>
-#include <new>
-
-
-/***************************************************************************
- CONSTANTS
-***************************************************************************/
-
-#define MAX_CHANNELS 4
-
-
-
-/***************************************************************************
- TYPE DEFINITIONS
-***************************************************************************/
-
-struct avcomp_state
-{
- avcomp_state()
- : maxwidth(0),
- maxheight(0),
- maxchannels(0),
- audiodata(NULL),
- ycontext(NULL),
- cbcontext(NULL),
- crcontext(NULL),
- audiohicontext(NULL),
- audiolocontext(NULL) { }
-
- /* video parameters */
- UINT32 maxwidth, maxheight;
-
- /* audio parameters */
- UINT32 maxchannels;
-
- /* intermediate data */
- UINT8 * audiodata;
-
- /* huffman contexts */
- huffman_context * ycontext;
- huffman_context * cbcontext;
- huffman_context * crcontext;
- huffman_context * audiohicontext;
- huffman_context * audiolocontext;
-
- /* configuration data */
- av_codec_compress_config compress;
- av_codec_decompress_config decompress;
-};
-
-
-
-/***************************************************************************
- PROTOTYPES
-***************************************************************************/
-
-/* encoding helpers */
-static avcomp_error encode_audio(avcomp_state *state, int channels, int samples, const UINT8 **source, int sourcexor, UINT8 *dest, UINT8 *sizes);
-static avcomp_error encode_video(avcomp_state *state, int width, int height, const UINT8 *source, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 *complength);
-static avcomp_error encode_video_lossless(avcomp_state *state, int width, int height, const UINT8 *source, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 *complength);
-
-/* decoding helpers */
-static avcomp_error decode_audio(avcomp_state *state, int channels, int samples, const UINT8 *source, UINT8 **dest, UINT32 dxor, const UINT8 *sizes);
-static avcomp_error decode_video(avcomp_state *state, int width, int height, const UINT8 *source, UINT32 complength, UINT8 *dest, UINT32 dstride, UINT32 dxor);
-static avcomp_error decode_video_lossless(avcomp_state *state, int width, int height, const UINT8 *source, UINT32 complength, UINT8 *dest, UINT32 deststride, UINT32 destxor);
-
-
-
-/***************************************************************************
- IMPLEMENTATION
-***************************************************************************/
-
-/*-------------------------------------------------
- avcomp_init - allocate and initialize a
- new state block for compression or
- decompression
--------------------------------------------------*/
-
-avcomp_state *avcomp_init(UINT32 maxwidth, UINT32 maxheight, UINT32 maxchannels)
-{
- huffman_error hufferr;
- avcomp_state *state;
-
- /* error if out of range */
- if (maxchannels > MAX_CHANNELS)
- return NULL;
-
- /* allocate memory for state block */
- state = new(std::nothrow) avcomp_state;
- if (state == NULL)
- return NULL;
-
- /* compute the core info */
- state->maxwidth = maxwidth;
- state->maxheight = maxheight;
- state->maxchannels = maxchannels;
-
- /* now allocate data buffers */
- state->audiodata = new(std::nothrow) UINT8[65536 * state->maxchannels * 2];
- if (state->audiodata == NULL)
- goto cleanup;
-
- /* create huffman contexts */
- hufferr = huffman_create_context(&state->ycontext, 16);
- if (hufferr != HUFFERR_NONE)
- goto cleanup;
- hufferr = huffman_create_context(&state->cbcontext, 16);
- if (hufferr != HUFFERR_NONE)
- goto cleanup;
- hufferr = huffman_create_context(&state->crcontext, 16);
- if (hufferr != HUFFERR_NONE)
- goto cleanup;
- hufferr = huffman_create_context(&state->audiohicontext, 16);
- if (hufferr != HUFFERR_NONE)
- goto cleanup;
- hufferr = huffman_create_context(&state->audiolocontext, 16);
- if (hufferr != HUFFERR_NONE)
- goto cleanup;
-
- return state;
-
-cleanup:
- avcomp_free(state);
- return NULL;
-}
-
-
-/*-------------------------------------------------
- avcomp_free - free a state block
--------------------------------------------------*/
-
-void avcomp_free(avcomp_state *state)
-{
- /* free the data buffers */
- delete[] state->audiodata;
-
- /* free the contexts */
- if (state->ycontext != NULL)
- huffman_free_context(state->ycontext);
- if (state->cbcontext != NULL)
- huffman_free_context(state->cbcontext);
- if (state->crcontext != NULL)
- huffman_free_context(state->crcontext);
- if (state->audiohicontext != NULL)
- huffman_free_context(state->audiohicontext);
- if (state->audiolocontext != NULL)
- huffman_free_context(state->audiolocontext);
-
- delete state;
-}
-
-
-/*-------------------------------------------------
- avcomp_config_compress - configure compression
- parameters
--------------------------------------------------*/
-
-void avcomp_config_compress(avcomp_state *state, av_codec_compress_config *config)
-{
- state->compress.video.wrap(config->video, config->video.cliprect());
- state->compress.channels = config->channels;
- state->compress.samples = config->samples;
- memcpy(state->compress.audio, config->audio, sizeof(state->compress.audio));
- state->compress.metalength = config->metalength;
- state->compress.metadata = config->metadata;
-}
-
-
-/*-------------------------------------------------
- avcomp_config_decompress - configure
- decompression parameters
--------------------------------------------------*/
-
-void avcomp_config_decompress(avcomp_state *state, av_codec_decompress_config *config)
-{
- state->decompress.video.wrap(config->video, config->video.cliprect());
- state->decompress.maxsamples = config->maxsamples;
- state->decompress.actsamples = config->actsamples;
- memcpy(state->decompress.audio, config->audio, sizeof(state->decompress.audio));
- state->decompress.maxmetalength = config->maxmetalength;
- state->decompress.actmetalength = config->actmetalength;
- state->decompress.metadata = config->metadata;
-}
-
-
-
-/***************************************************************************
- ENCODING/DECODING FRONTENDS
-***************************************************************************/
-
-/*-------------------------------------------------
- avcomp_encode_data - encode a block of data
- into a compressed data stream
--------------------------------------------------*/
-
-avcomp_error avcomp_encode_data(avcomp_state *state, const UINT8 *source, UINT8 *dest, UINT32 *complength)
-{
- const UINT8 *metastart, *videostart, *audiostart[MAX_CHANNELS];
- UINT32 metasize, channels, samples, width, height;
- UINT32 audioxor, videoxor, videostride;
- avcomp_error err;
- UINT32 dstoffs;
- int chnum;
-
- /* extract data from source if present */
- if (source != NULL)
- {
- /* validate the header */
- if (source[0] != 'c' || source[1] != 'h' || source[2] != 'a' || source[3] != 'v')
- return AVCERR_INVALID_DATA;
-
- /* extract info from the header */
- metasize = source[4];
- channels = source[5];
- samples = (source[6] << 8) + source[7];
- width = (source[8] << 8) + source[9];
- height = (source[10] << 8) + source[11];
-
- /* determine the start of each piece of data */
- source += 12;
- metastart = source;
- source += metasize;
- for (chnum = 0; chnum < channels; chnum++)
- {
- audiostart[chnum] = source;
- source += 2 * samples;
- }
- videostart = source;
-
- /* data is assumed to be big-endian already */
- audioxor = videoxor = 0;
- videostride = 2 * width;
- }
-
- /* otherwise, extract from the state */
- else
- {
- UINT16 betest = 0;
-
- /* extract metadata information */
- metastart = state->compress.metadata;
- metasize = state->compress.metalength;
- if ((metastart == NULL && metasize != 0) || (metastart != NULL && metasize == 0))
- return AVCERR_INVALID_CONFIGURATION;
-
- /* extract audio information */
- channels = state->compress.channels;
- samples = state->compress.samples;
- for (chnum = 0; chnum < channels; chnum++)
- audiostart[chnum] = (const UINT8 *)state->compress.audio[chnum];
-
- /* extract video information */
- videostart = NULL;
- videostride = width = height = 0;
- if (state->compress.video.valid())
- {
- videostart = reinterpret_cast<const UINT8 *>(&state->compress.video.pix(0));
- videostride = state->compress.video.rowpixels() * 2;
- width = state->compress.video.width();
- height = state->compress.video.height();
- }
-
- /* data is assumed to be native-endian */
- *(UINT8 *)&betest = 1;
- audioxor = videoxor = (betest == 1) ? 1 : 0;
- }
-
- /* validate the info from the header */
- if (width > state->maxwidth || height > state->maxheight)
- return AVCERR_VIDEO_TOO_LARGE;
- if (channels > state->maxchannels)
- return AVCERR_AUDIO_TOO_LARGE;
-
- /* write the basics to the new header */
- dest[0] = metasize;
- dest[1] = channels;
- dest[2] = samples >> 8;
- dest[3] = samples;
- dest[4] = width >> 8;
- dest[5] = width;
- dest[6] = height >> 8;
- dest[7] = height;
-
- /* starting offsets */
- dstoffs = 10 + 2 * channels;
-
- /* copy the metadata first */
- if (metasize > 0)
- {
- memcpy(dest + dstoffs, metastart, metasize);
- dstoffs += metasize;
- }
-
- /* encode the audio channels */
- if (channels > 0)
- {
- /* encode the audio */
- err = encode_audio(state, channels, samples, audiostart, audioxor, dest + dstoffs, &dest[8]);
- if (err != AVCERR_NONE)
- return err;
-
- /* advance the pointers past the data */
- dstoffs += (dest[8] << 8) + dest[9];
- for (chnum = 0; chnum < channels; chnum++)
- dstoffs += (dest[10 + 2 * chnum] << 8) + dest[11 + 2 * chnum];
- }
-
- /* encode the video data */
- if (width > 0 && height > 0)
- {
- UINT32 vidlength = 0;
-
- /* encode the video */
- err = encode_video(state, width, height, videostart, videostride, videoxor, dest + dstoffs, &vidlength);
- if (err != AVCERR_NONE)
- return err;
-
- /* advance the pointers past the data */
- dstoffs += vidlength;
- }
-
- /* set the total compression */
- *complength = dstoffs;
- return AVCERR_NONE;
-}
-
-
-/*-------------------------------------------------
- avcomp_decode_data - decode both
- audio and video from a raw data stream
--------------------------------------------------*/
-
-avcomp_error avcomp_decode_data(avcomp_state *state, const UINT8 *source, UINT32 complength, UINT8 *dest)
-{
- UINT8 *metastart, *videostart, *audiostart[MAX_CHANNELS];
- UINT32 metasize, channels, samples, width, height;
- UINT32 audioxor, videoxor, videostride;
- UINT32 srcoffs, totalsize;
- avcomp_error err;
- int chnum;
-
- /* extract info from the header */
- if (complength < 8)
- return AVCERR_INVALID_DATA;
- metasize = source[0];
- channels = source[1];
- samples = (source[2] << 8) + source[3];
- width = (source[4] << 8) + source[5];
- height = (source[6] << 8) + source[7];
-
- /* validate the info from the header */
- if (width > state->maxwidth || height > state->maxheight)
- return AVCERR_VIDEO_TOO_LARGE;
- if (channels > state->maxchannels)
- return AVCERR_AUDIO_TOO_LARGE;
-
- /* validate that the sizes make sense */
- if (complength < 10 + 2 * channels)
- return AVCERR_INVALID_DATA;
- totalsize = 10 + 2 * channels;
- totalsize += (source[8] << 8) | source[9];
- for (chnum = 0; chnum < channels; chnum++)
- totalsize += (source[10 + 2 * chnum] << 8) | source[11 + 2 * chnum];
- if (totalsize >= complength)
- return AVCERR_INVALID_DATA;
-
- /* starting offsets */
- srcoffs = 10 + 2 * channels;
-
- /* if we are decoding raw, set up the output parameters */
- if (dest != NULL)
- {
- /* create a header */
- dest[0] = 'c';
- dest[1] = 'h';
- dest[2] = 'a';
- dest[3] = 'v';
- dest[4] = metasize;
- dest[5] = channels;
- dest[6] = samples >> 8;
- dest[7] = samples;
- dest[8] = width >> 8;
- dest[9] = width;
- dest[10] = height >> 8;
- dest[11] = height;
-
- /* determine the start of each piece of data */
- dest += 12;
- metastart = dest;
- dest += metasize;
- for (chnum = 0; chnum < channels; chnum++)
- {
- audiostart[chnum] = dest;
- dest += 2 * samples;
- }
- videostart = dest;
-
- /* data is assumed to be big-endian already */
- audioxor = videoxor = 0;
- videostride = 2 * width;
- }
-
- /* otherwise, extract from the state */
- else
- {
- UINT16 betest = 0;
-
- /* determine the start of each piece of data */
- metastart = state->decompress.metadata;
- for (chnum = 0; chnum < channels; chnum++)
- audiostart[chnum] = (UINT8 *)state->decompress.audio[chnum];
- videostart = (state->decompress.video.valid()) ? reinterpret_cast<UINT8 *>(&state->decompress.video.pix(0)) : NULL;
- videostride = (state->decompress.video.valid()) ? state->decompress.video.rowpixels() * 2 : 0;
-
- /* data is assumed to be native-endian */
- *(UINT8 *)&betest = 1;
- audioxor = videoxor = (betest == 1) ? 1 : 0;
-
- /* verify against sizes */
- if (state->decompress.video.valid() && (state->decompress.video.width() < width || state->decompress.video.height() < height))
- return AVCERR_VIDEO_TOO_LARGE;
- for (chnum = 0; chnum < channels; chnum++)
- if (state->decompress.audio[chnum] != NULL && state->decompress.maxsamples < samples)
- return AVCERR_AUDIO_TOO_LARGE;
- if (state->decompress.metadata != NULL && state->decompress.maxmetalength < metasize)
- return AVCERR_METADATA_TOO_LARGE;
-
- /* set the output values */
- if (state->decompress.actsamples != NULL)
- *state->decompress.actsamples = samples;
- if (state->decompress.actmetalength != NULL)
- *state->decompress.actmetalength = metasize;
- }
-
- /* copy the metadata first */
- if (metasize > 0)
- {
- if (metastart != NULL)
- memcpy(metastart, source + srcoffs, metasize);
- srcoffs += metasize;
- }
-
- /* decode the audio channels */
- if (channels > 0)
- {
- /* decode the audio */
- err = decode_audio(state, channels, samples, source + srcoffs, audiostart, audioxor, &source[8]);
- if (err != AVCERR_NONE)
- return err;
-
- /* advance the pointers past the data */
- srcoffs += (source[8] << 8) + source[9];
- for (chnum = 0; chnum < channels; chnum++)
- srcoffs += (source[10 + 2 * chnum] << 8) + source[11 + 2 * chnum];
- }
-
- /* decode the video data */
- if (width > 0 && height > 0 && videostart != NULL)
- {
- /* decode the video */
- err = decode_video(state, width, height, source + srcoffs, complength - srcoffs, videostart, videostride, videoxor);
- if (err != AVCERR_NONE)
- return err;
- }
- return AVCERR_NONE;
-}
-
-
-
-/***************************************************************************
- ENCODING HELPERS
-***************************************************************************/
-
-/*-------------------------------------------------
- encode_audio - encode raw audio data
- to the destination
--------------------------------------------------*/
-
-static avcomp_error encode_audio(avcomp_state *state, int channels, int samples, const UINT8 **source, int sourcexor, UINT8 *dest, UINT8 *sizes)
-{
- UINT32 size, huffsize, totalsize;
- huffman_context *contexts[2];
- huffman_error hufferr;
- UINT8 *output = dest;
- int chnum, sampnum;
- UINT8 *deltabuf;
-
- /* iterate over channels to compute deltas */
- deltabuf = state->audiodata;
- for (chnum = 0; chnum < channels; chnum++)
- {
- const UINT8 *srcdata = source[chnum];
- INT16 prevsample = 0;
-
- /* extract audio data into hi and lo deltas stored in big-endian order */
- for (sampnum = 0; sampnum < samples; sampnum++)
- {
- INT16 newsample = (srcdata[0 ^ sourcexor] << 8) | srcdata[1 ^ sourcexor];
- INT16 delta = newsample - prevsample;
- prevsample = newsample;
- *deltabuf++ = delta >> 8;
- *deltabuf++ = delta;
- srcdata += 2;
- }
- }
-
- /* compute the trees */
- contexts[0] = state->audiohicontext;
- contexts[1] = state->audiolocontext;
- hufferr = huffman_compute_tree_interleaved(2, contexts, state->audiodata, samples * 2, channels, samples * 2, 0);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_COMPRESSION_ERROR;
-
- /* export them to the output */
- hufferr = huffman_export_tree(state->audiohicontext, output, 256, &size);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_COMPRESSION_ERROR;
- output += size;
- hufferr = huffman_export_tree(state->audiolocontext, output, 256, &size);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_COMPRESSION_ERROR;
- output += size;
-
- /* note the size of the two trees */
- huffsize = output - dest;
- sizes[0] = huffsize >> 8;
- sizes[1] = huffsize;
-
- /* iterate over channels */
- totalsize = huffsize;
- for (chnum = 0; chnum < channels; chnum++)
- {
- const UINT8 *input = state->audiodata + chnum * samples * 2;
-
- /* encode the data */
- hufferr = huffman_encode_data_interleaved(2, contexts, input, samples * 2, 1, 0, 0, output, samples * 2, &size);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_COMPRESSION_ERROR;
- output += size;
-
- /* store the size of this stream */
- totalsize += size;
- if (totalsize >= channels * samples * 2)
- break;
- sizes[chnum * 2 + 2] = size >> 8;
- sizes[chnum * 2 + 3] = size;
- }
-
- /* if we ran out of room, throw it all away and just store raw */
- if (chnum < channels)
- {
- memcpy(dest, state->audiodata, channels * samples * 2);
- size = samples * 2;
- sizes[0] = sizes[1] = 0;
- for (chnum = 0; chnum < channels; chnum++)
- {
- sizes[chnum * 2 + 2] = size >> 8;
- sizes[chnum * 2 + 3] = size;
- }
- }
-
- return AVCERR_NONE;
-}
-
-
-/*-------------------------------------------------
- encode_video - encode raw video data
- to the destination
--------------------------------------------------*/
-
-static avcomp_error encode_video(avcomp_state *state, int width, int height, const UINT8 *source, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 *complength)
-{
- /* only lossless supported at this time */
- return encode_video_lossless(state, width, height, source, sstride, sxor, dest, complength);
-}
-
-
-/*-------------------------------------------------
- encode_video_lossless - do a lossless video
- encoding using deltas and huffman encoding
--------------------------------------------------*/
-
-static avcomp_error encode_video_lossless(avcomp_state *state, int width, int height, const UINT8 *source, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 *complength)
-{
- UINT32 srcbytes = width * height * 2;
- huffman_context *contexts[4];
- huffman_error hufferr;
- UINT32 outbytes;
- UINT8 *output;
-
- /* set up the output; first byte is 0x80 to indicate lossless encoding */
- output = dest;
- *output++ = 0x80;
-
- /* now encode to the destination using two trees, one for the Y and one for the Cr/Cb */
- contexts[0] = state->ycontext;
- contexts[1] = state->cbcontext;
- contexts[2] = state->ycontext;
- contexts[3] = state->crcontext;
-
- /* compute the histograms for the data */
- hufferr = huffman_deltarle_compute_tree_interleaved(4, contexts, source, width * 2, height, sstride, sxor);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_COMPRESSION_ERROR;
-
- /* export the trees to the data stream */
- hufferr = huffman_deltarle_export_tree(state->ycontext, output, 256, &outbytes);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_COMPRESSION_ERROR;
- output += outbytes;
- hufferr = huffman_deltarle_export_tree(state->cbcontext, output, 256, &outbytes);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_COMPRESSION_ERROR;
- output += outbytes;
- hufferr = huffman_deltarle_export_tree(state->crcontext, output, 256, &outbytes);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_COMPRESSION_ERROR;
- output += outbytes;
-
- /* encode the data using the trees */
- hufferr = huffman_deltarle_encode_data_interleaved(4, contexts, source, width * 2, height, sstride, sxor, output, srcbytes, &outbytes);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_COMPRESSION_ERROR;
- output += outbytes;
-
- /* set the final length */
- *complength = output - dest;
- return AVCERR_NONE;
-}
-
-
-
-/***************************************************************************
- DECODING HELPERS
-***************************************************************************/
-
-/*-------------------------------------------------
- decode_audio - decode audio from a
- compressed data stream
--------------------------------------------------*/
-
-static avcomp_error decode_audio(avcomp_state *state, int channels, int samples, const UINT8 *source, UINT8 **dest, UINT32 dxor, const UINT8 *sizes)
-{
- huffman_context *contexts[2];
- UINT32 actsize, huffsize;
- huffman_error hufferr;
- int chnum, sampnum;
- UINT16 size;
-
- /* if no huffman length, just copy the data */
- size = (sizes[0] << 8) | sizes[1];
- if (size == 0)
- {
- /* loop over channels */
- for (chnum = 0; chnum < channels; chnum++)
- {
- UINT8 *curdest = dest[chnum];
-
- /* extract the size of this channel */
- size = (sizes[chnum * 2 + 2] << 8) | sizes[chnum * 2 + 3];
-
- /* extract data from the deltas */
- if (dest[chnum] != NULL)
- {
- INT16 prevsample = 0;
- for (sampnum = 0; sampnum < samples; sampnum++)
- {
- INT16 delta = (source[0] << 8) | source[1];
- INT16 newsample = prevsample + delta;
- prevsample = newsample;
-
- curdest[0 ^ dxor] = newsample >> 8;
- curdest[1 ^ dxor] = newsample;
- source += 2;
- curdest += 2;
- }
- }
- else
- source += size;
- }
- return AVCERR_NONE;
- }
-
- /* extract the huffman trees */
- hufferr = huffman_import_tree(state->audiohicontext, source, size, &actsize);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_INVALID_DATA;
- source += actsize;
- huffsize = actsize;
-
- hufferr = huffman_import_tree(state->audiolocontext, source, size, &actsize);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_INVALID_DATA;
- source += actsize;
- huffsize += actsize;
- if (huffsize != size)
- return AVCERR_INVALID_DATA;
-
- /* set up the contexts */
- contexts[0] = state->audiohicontext;
- contexts[1] = state->audiolocontext;
-
- /* now loop over channels and decode their data */
- for (chnum = 0; chnum < channels; chnum++)
- {
- /* extract the size of this channel */
- size = (sizes[chnum * 2 + 2] << 8) | sizes[chnum * 2 + 3];
-
- /* decode the data */
- if (dest[chnum] != NULL)
- {
- UINT8 *deltabuf = state->audiodata + chnum * samples * 2;
- hufferr = huffman_decode_data_interleaved(2, contexts, source, size, deltabuf, samples * 2, 1, 0, 0, &actsize);
- if (hufferr != HUFFERR_NONE || actsize != size)
- return AVCERR_INVALID_DATA;
- }
-
- /* advance */
- source += size;
- }
-
- /* reassemble audio from the deltas */
- for (chnum = 0; chnum < channels; chnum++)
- if (dest[chnum] != NULL)
- {
- UINT8 *deltabuf = state->audiodata + chnum * samples * 2;
- UINT8 *curdest = dest[chnum];
- INT16 prevsample = 0;
-
- for (sampnum = 0; sampnum < samples; sampnum++)
- {
- INT16 delta = (deltabuf[0] << 8) | deltabuf[1];
- INT16 newsample = prevsample + delta;
- prevsample = newsample;
-
- curdest[0 ^ dxor] = newsample >> 8;
- curdest[1 ^ dxor] = newsample;
- deltabuf += 2;
- curdest += 2;
- }
- }
-
- return AVCERR_NONE;
-}
-
-
-/*-------------------------------------------------
- decode_video - decode video from a
- compressed data stream
--------------------------------------------------*/
-
-static avcomp_error decode_video(avcomp_state *state, int width, int height, const UINT8 *source, UINT32 complength, UINT8 *dest, UINT32 dstride, UINT32 dxor)
-{
- /* if the high bit of the first byte is set, we decode losslessly */
- if (source[0] & 0x80)
- return decode_video_lossless(state, width, height, source, complength, dest, dstride, dxor);
- else
- return AVCERR_INVALID_DATA;
-}
-
-
-/*-------------------------------------------------
- decode_video_lossless - do a lossless video
- decoding using deltas and huffman encoding
--------------------------------------------------*/
-
-static avcomp_error decode_video_lossless(avcomp_state *state, int width, int height, const UINT8 *source, UINT32 complength, UINT8 *dest, UINT32 deststride, UINT32 destxor)
-{
- const UINT8 *sourceend = source + complength;
- huffman_context *contexts[4];
- huffman_error hufferr;
- UINT32 actsize;
-
- /* skip the first byte */
- source++;
-
- /* import the tables */
- hufferr = huffman_deltarle_import_tree(state->ycontext, source, sourceend - source, &actsize);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_INVALID_DATA;
- source += actsize;
- hufferr = huffman_deltarle_import_tree(state->cbcontext, source, sourceend - source, &actsize);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_INVALID_DATA;
- source += actsize;
- hufferr = huffman_deltarle_import_tree(state->crcontext, source, sourceend - source, &actsize);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_INVALID_DATA;
- source += actsize;
-
- /* set up the decoding contexts */
- contexts[0] = state->ycontext;
- contexts[1] = state->cbcontext;
- contexts[2] = state->ycontext;
- contexts[3] = state->crcontext;
-
- /* decode to the destination */
- hufferr = huffman_deltarle_decode_data_interleaved(4, contexts, source, sourceend - source, dest, width * 2, height, deststride, destxor, &actsize);
- if (hufferr != HUFFERR_NONE)
- return AVCERR_INVALID_DATA;
- if (actsize != sourceend - source)
- return AVCERR_INVALID_DATA;
-
- return AVCERR_NONE;
-}
diff --git a/src/lib/util/avcomp.h b/src/lib/util/avcomp.h
deleted file mode 100644
index ee9bba8a192..00000000000
--- a/src/lib/util/avcomp.h
+++ /dev/null
@@ -1,140 +0,0 @@
-/***************************************************************************
-
- avcomp.h
-
- Audio/video compression and decompression helpers.
-
-****************************************************************************
-
- Copyright Aaron Giles
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions are
- met:
-
- * Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in
- the documentation and/or other materials provided with the
- distribution.
- * Neither the name 'MAME' nor the names of its contributors may be
- used to endorse or promote products derived from this software
- without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
- IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
- INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
- SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
- STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
- IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- POSSIBILITY OF SUCH DAMAGE.
-
-***************************************************************************/
-
-#ifndef __AVCOMP_H__
-#define __AVCOMP_H__
-
-#include "osdcore.h"
-#include "bitmap.h"
-
-
-/***************************************************************************
- CONSTANTS
-***************************************************************************/
-
-/* errors */
-enum _avcomp_error
-{
- AVCERR_NONE = 0,
- AVCERR_INVALID_DATA,
- AVCERR_VIDEO_TOO_LARGE,
- AVCERR_AUDIO_TOO_LARGE,
- AVCERR_METADATA_TOO_LARGE,
- AVCERR_OUT_OF_MEMORY,
- AVCERR_COMPRESSION_ERROR,
- AVCERR_TOO_MANY_CHANNELS,
- AVCERR_INVALID_CONFIGURATION
-};
-typedef enum _avcomp_error avcomp_error;
-
-/* default decompression parameters */
-#define AVCOMP_ENABLE_META (1 << 0)
-#define AVCOMP_ENABLE_VIDEO (1 << 1)
-#define AVCOMP_ENABLE_AUDIO(x) (1 << (2 + (x)))
-#define AVCOMP_ENABLE_DEFAULT (~0)
-
-
-
-/***************************************************************************
- TYPE DEFINITIONS
-***************************************************************************/
-
-/* compression configuration */
-struct av_codec_compress_config
-{
- av_codec_compress_config()
- : channels(0),
- samples(0),
- metalength(0),
- metadata(NULL)
- {
- memset(audio, 0, sizeof(audio));
- }
-
- bitmap_yuy16 video; /* pointer to video bitmap */
- UINT32 channels; /* number of channels */
- UINT32 samples; /* number of samples per channel */
- INT16 * audio[16]; /* pointer to individual audio channels */
- UINT32 metalength; /* length of metadata */
- UINT8 * metadata; /* pointer to metadata buffer */
-};
-
-
-/* decompression configuration */
-struct av_codec_decompress_config
-{
- av_codec_decompress_config()
- : maxsamples(0),
- actsamples(0),
- maxmetalength(0),
- actmetalength(0),
- metadata(NULL)
- {
- memset(audio, 0, sizeof(audio));
- }
-
- bitmap_yuy16 video; /* pointer to video bitmap */
- UINT32 maxsamples; /* maximum number of samples per channel */
- UINT32 * actsamples; /* actual number of samples per channel */
- INT16 * audio[16]; /* pointer to individual audio channels */
- UINT32 maxmetalength; /* maximum length of metadata */
- UINT32 * actmetalength; /* actual length of metadata */
- UINT8 * metadata; /* pointer to metadata buffer */
-};
-
-
-/* opaque state */
-struct avcomp_state;
-
-
-
-/***************************************************************************
- PROTOTYPES
-***************************************************************************/
-
-avcomp_state *avcomp_init(UINT32 maxwidth, UINT32 maxheight, UINT32 maxchannels);
-void avcomp_free(avcomp_state *state);
-
-void avcomp_config_compress(avcomp_state *state, av_codec_compress_config *config);
-void avcomp_config_decompress(avcomp_state *state, av_codec_decompress_config *config);
-
-avcomp_error avcomp_encode_data(avcomp_state *state, const UINT8 *source, UINT8 *dest, UINT32 *complength);
-avcomp_error avcomp_decode_data(avcomp_state *state, const UINT8 *source, UINT32 complength, UINT8 *dest);
-
-#endif
diff --git a/src/lib/util/avhuff.c b/src/lib/util/avhuff.c
new file mode 100644
index 00000000000..3e1126abea6
--- /dev/null
+++ b/src/lib/util/avhuff.c
@@ -0,0 +1,977 @@
+/***************************************************************************
+
+ avhuff.c
+
+ Audio/video compression and decompression helpers.
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+****************************************************************************
+
+ Each frame is compressed as a unit. The raw data is of the form:
+ (all multibyte values are stored in big-endian format)
+
+ +00 = 'chav' (4 bytes) - fixed header data to identify the format
+ +04 = metasize (1 byte) - size of metadata in bytes (max=255 bytes)
+ +05 = channels (1 byte) - number of audio channels
+ +06 = samples (2 bytes) - number of samples per audio stream
+ +08 = width (2 bytes) - width of video data
+ +0A = height (2 bytes) - height of video data
+ +0C = <metadata> - as raw bytes
+ <audio stream 0> - as signed 16-bit samples
+ <audio stream 1> - as signed 16-bit samples
+ ...
+ <video data> - as a raw array of 8-bit YUY data in (Cb,Y,Cr,Y) order
+
+ When compressed, the data is stored as follows:
+ (all multibyte values are stored in big-endian format)
+
+ +00 = metasize (1 byte) - size of metadata in bytes
+ +01 = channels (1 byte) - number of audio channels
+ +02 = samples (2 bytes) - number of samples per audio stream
+ +04 = width (2 bytes) - width of video data
+ +06 = height (2 bytes) - height of video data
+ +08 = audio huffman size (2 bytes) - size of audio huffman tables
+ (0x0000 => uncompressed deltas are used)
+ +0A = str0size (2 bytes) - compressed size of stream 0
+ +0C = str1size (2 bytes) - compressed size of stream 1
+ ...
+ <metadata> - as raw data
+ <audio huffman table> - Huffman table for audio decoding
+ <audio stream 0 data> - Huffman-compressed deltas
+ <audio stream 1 data> - Huffman-compressed deltas
+ <...>
+ <video huffman tables> - Huffman tables for video decoding
+ <video data> - compressed data
+
+****************************************************************************
+
+ Attempted techniques that have not been worthwhile:
+
+ * Attempted to use integer DCTs from the IJG code; even the "slow"
+ variants produce a lot of error and thus kill our compression ratio,
+ since our compression is based on error not bitrate.
+
+ * Tried various other predictors for the lossless video encoding, but
+ none tended to give any significant gain over predicting the
+ previous pixel.
+
+***************************************************************************/
+
+#include "avhuff.h"
+#include "huffman.h"
+#include "chd.h"
+
+#include <math.h>
+#include <stdlib.h>
+#include <new>
+
+
+
+//**************************************************************************
+// INLINE FUNCTIONS
+//**************************************************************************
+
+//-------------------------------------------------
+// code_to_rlecount - number of RLE repetitions
+// encoded in a given byte
+//-------------------------------------------------
+
+inline int code_to_rlecount(int code)
+{
+ if (code == 0x00)
+ return 1;
+ if (code <= 0x107)
+ return 8 + (code - 0x100);
+ return 16 << (code - 0x108);
+}
+
+
+//-------------------------------------------------
+// rlecount_to_byte - return a byte encoding
+// the maximum RLE count less than or equal to
+// the provided amount
+//-------------------------------------------------
+
+inline int rlecount_to_code(int rlecount)
+{
+ if (rlecount >= 2048)
+ return 0x10f;
+ if (rlecount >= 1024)
+ return 0x10e;
+ if (rlecount >= 512)
+ return 0x10d;
+ if (rlecount >= 256)
+ return 0x10c;
+ if (rlecount >= 128)
+ return 0x10b;
+ if (rlecount >= 64)
+ return 0x10a;
+ if (rlecount >= 32)
+ return 0x109;
+ if (rlecount >= 16)
+ return 0x108;
+ if (rlecount >= 8)
+ return 0x100 + (rlecount - 8);
+ return 0x00;
+}
+
+
+//-------------------------------------------------
+// encode_one - encode data
+//-------------------------------------------------
+
+inline void avhuff_encoder::deltarle_encoder::encode_one(bitstream_out &bitbuf, UINT16 *&rleptr)
+{
+ // return RLE data if we still have some
+ if (m_rlecount != 0)
+ {
+ m_rlecount--;
+ return;
+ }
+
+ // fetch the data and process
+ UINT16 data = *rleptr++;
+ m_encoder.encode_one(bitbuf, data);
+ if (data >= 0x100)
+ m_rlecount = code_to_rlecount(data) - 1;
+}
+
+
+//-------------------------------------------------
+// decode_one - decode data
+//-------------------------------------------------
+
+inline UINT32 avhuff_decoder::deltarle_decoder::decode_one(bitstream_in &bitbuf)
+{
+ // return RLE data if we still have some
+ if (m_rlecount != 0)
+ {
+ m_rlecount--;
+ return m_prevdata;
+ }
+
+ // fetch the data and process
+ int data = m_decoder.decode_one(bitbuf);
+ if (data < 0x100)
+ {
+ m_prevdata += UINT8(data);
+ return m_prevdata;
+ }
+ else
+ {
+ m_rlecount = code_to_rlecount(data);
+ m_rlecount--;
+ return m_prevdata;
+ }
+}
+
+
+
+//**************************************************************************
+// AVHUFF ENCODER
+//**************************************************************************
+
+//-------------------------------------------------
+// avhuff_encoder - constructor
+//-------------------------------------------------
+
+avhuff_encoder::avhuff_encoder()
+{
+m_flac_encoder.set_sample_rate(48000);
+m_flac_encoder.set_num_channels(1);
+m_flac_encoder.set_strip_metadata(true);
+}
+
+
+//-------------------------------------------------
+// encode_data - encode a block of data into a
+// compressed data stream
+//-------------------------------------------------
+
+avhuff_error avhuff_encoder::encode_data(const UINT8 *source, UINT8 *dest, UINT32 &complength)
+{
+ // validate the header
+ if (source[0] != 'c' || source[1] != 'h' || source[2] != 'a' || source[3] != 'v')
+ return AVHERR_INVALID_DATA;
+
+ // extract info from the header
+ UINT32 metasize = source[4];
+ UINT32 channels = source[5];
+ UINT32 samples = (source[6] << 8) + source[7];
+ UINT32 width = (source[8] << 8) + source[9];
+ UINT32 height = (source[10] << 8) + source[11];
+ source += 12;
+
+ // write the basics to the new header
+ dest[0] = metasize;
+ dest[1] = channels;
+ dest[2] = samples >> 8;
+ dest[3] = samples;
+ dest[4] = width >> 8;
+ dest[5] = width;
+ dest[6] = height >> 8;
+ dest[7] = height;
+
+ // starting offsets
+ UINT32 dstoffs = 10 + 2 * channels;
+
+ // copy the metadata first
+ if (metasize > 0)
+ {
+ memcpy(dest + dstoffs, source, metasize);
+ source += metasize;
+ dstoffs += metasize;
+ }
+
+ // encode the audio channels
+ if (channels > 0)
+ {
+ // encode the audio
+ avhuff_error err = encode_audio(source, channels, samples, dest + dstoffs, &dest[8]);
+ source += channels * samples * 2;
+ if (err != AVHERR_NONE)
+ return err;
+
+ // advance the pointers past the data
+ UINT16 treesize = (dest[8] << 8) + dest[9];
+ if (treesize != 0xffff)
+ dstoffs += treesize;
+ for (int chnum = 0; chnum < channels; chnum++)
+ dstoffs += (dest[10 + 2 * chnum] << 8) + dest[11 + 2 * chnum];
+ }
+
+ // encode the video data
+ if (width > 0 && height > 0)
+ {
+ // encode the video
+ UINT32 vidlength = 0;
+ avhuff_error err = encode_video(source, width, height, dest + dstoffs, vidlength);
+ if (err != AVHERR_NONE)
+ return err;
+
+ // advance the pointers past the data
+ dstoffs += vidlength;
+ }
+
+ // set the total compression
+ complength = dstoffs;
+ return AVHERR_NONE;
+}
+
+
+//-------------------------------------------------
+// raw_data_size - return the raw data size of
+// a raw stream based on the header
+//-------------------------------------------------
+
+UINT32 avhuff_encoder::raw_data_size(const UINT8 *data)
+{
+ // make sure we have a correct header
+ int size = 0;
+ if (data[0] == 'c' && data[1] == 'h' && data[2] == 'a' && data[3] == 'v')
+ {
+ // add in header size plus metadata length
+ size = 12 + data[4];
+
+ // add in channels * samples
+ size += 2 * data[5] * ((data[6] << 8) + data[7]);
+
+ // add in 2 * width * height
+ size += 2 * ((data[8] << 8) + data[9]) * (((data[10] << 8) + data[11]) & 0x7fff);
+ }
+ return size;
+}
+
+
+//-------------------------------------------------
+// assemble_data - assemble a datastream from raw
+// bits
+//-------------------------------------------------
+
+avhuff_error avhuff_encoder::assemble_data(UINT8 *dest, UINT32 dlength, bitmap_yuy16 &bitmap, UINT8 channels, UINT32 numsamples, INT16 **samples, UINT8 *metadata, UINT32 metadatasize)
+{
+ // sanity check the inputs
+ if (metadatasize > 255)
+ return AVHERR_METADATA_TOO_LARGE;
+ if (numsamples > 65535)
+ return AVHERR_AUDIO_TOO_LARGE;
+ if (bitmap.width() > 65535 || bitmap.height() > 65535)
+ return AVHERR_VIDEO_TOO_LARGE;
+ if (dlength < 12 + metadatasize + numsamples * channels * 2 + bitmap.width() * bitmap.height() * 2)
+ return AVHERR_BUFFER_TOO_SMALL;
+
+ // fill in the header
+ *dest++ = 'c';
+ *dest++ = 'h';
+ *dest++ = 'a';
+ *dest++ = 'v';
+ *dest++ = metadatasize;
+ *dest++ = channels;
+ *dest++ = numsamples >> 8;
+ *dest++ = numsamples & 0xff;
+ *dest++ = bitmap.width() >> 8;
+ *dest++ = bitmap.width() & 0xff;
+ *dest++ = bitmap.height() >> 8;
+ *dest++ = bitmap.height() & 0xff;
+
+ // copy the metadata
+ if (metadatasize > 0)
+ memcpy(dest, metadata, metadatasize);
+ dest += metadatasize;
+
+ // copy the audio streams
+ for (UINT8 curchan = 0; curchan < channels; curchan++)
+ for (UINT32 cursamp = 0; cursamp < numsamples; cursamp++)
+ {
+ *dest++ = samples[curchan][cursamp] >> 8;
+ *dest++ = samples[curchan][cursamp] & 0xff;
+ }
+
+ // copy the video data
+ for (INT32 y = 0; y < bitmap.height(); y++)
+ {
+ UINT16 *src = &bitmap.pix(y);
+ for (INT32 x = 0; x < bitmap.width(); x++)
+ {
+ *dest++ = src[x] >> 8;
+ *dest++ = src[x] & 0xff;
+ }
+ }
+ return AVHERR_NONE;
+}
+
+
+//-------------------------------------------------
+// encode_audio - encode raw audio data to the
+// destination
+//-------------------------------------------------
+
+avhuff_error avhuff_encoder::encode_audio(const UINT8 *source, int channels, int samples, UINT8 *dest, UINT8 *sizes)
+{
+#if AVHUFF_USE_FLAC
+
+ // input data is big-endian; determine our platform endianness
+ UINT16 be_test = 0;
+ *(UINT8 *)&be_test = 1;
+ bool swap_endian = (be_test == 1);
+
+ // set huffman tree size to 0xffff to indicate FLAC
+ sizes[0] = 0xff;
+ sizes[1] = 0xff;
+
+ // set the block size for this round and iterate over channels
+ m_flac_encoder.set_block_size(samples);
+ for (int chnum = 0; chnum < channels; chnum++)
+ {
+ // encode the data
+ m_flac_encoder.reset(dest, samples * 2);
+ if (!m_flac_encoder.encode_interleaved(reinterpret_cast<const INT16 *>(source) + chnum * samples, samples, swap_endian))
+ return AVHERR_COMPRESSION_ERROR;
+
+ // set the size for this channel
+ UINT32 cursize = m_flac_encoder.finish();
+ sizes[chnum * 2 + 2] = cursize >> 8;
+ sizes[chnum * 2 + 3] = cursize;
+ dest += cursize;
+ }
+
+#else
+
+ // expand the delta buffer if needed
+ m_audiobuffer.resize(channels * samples * 2);
+ UINT8 *deltabuf = m_audiobuffer;
+
+ // iterate over channels to compute deltas
+ m_audiohi_encoder.histo_reset();
+ m_audiolo_encoder.histo_reset();
+ for (int chnum = 0; chnum < channels; chnum++)
+ {
+ // extract audio data into hi and lo deltas stored in big-endian order
+ INT16 prevsample = 0;
+ for (int sampnum = 0; sampnum < samples; sampnum++)
+ {
+ INT16 newsample = (source[0] << 8) | source[1];
+ source += 2;
+
+ INT16 delta = newsample - prevsample;
+ prevsample = newsample;
+ m_audiohi_encoder.histo_one(*deltabuf++ = delta >> 8);
+ m_audiolo_encoder.histo_one(*deltabuf++ = delta);
+ }
+ }
+
+ // compute the trees
+ huffman_error hufferr = m_audiohi_encoder.compute_tree_from_histo();
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_COMPRESSION_ERROR;
+ hufferr = m_audiolo_encoder.compute_tree_from_histo();
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_COMPRESSION_ERROR;
+
+ // export the trees to the output
+ bitstream_out bitbuf(dest, 2 * channels * samples);
+ hufferr = m_audiohi_encoder.export_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_COMPRESSION_ERROR;
+ bitbuf.flush();
+ hufferr = m_audiolo_encoder.export_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_COMPRESSION_ERROR;
+
+ // note the size of the two trees
+ UINT32 huffsize = bitbuf.flush();
+ sizes[0] = huffsize >> 8;
+ sizes[1] = huffsize;
+
+ // iterate over channels
+ UINT32 totalsize = huffsize;
+ int chnum;
+ for (chnum = 0; chnum < channels; chnum++)
+ {
+ // encode the data
+ const UINT8 *input = m_audiobuffer + chnum * samples * 2;
+ for (int sampnum = 0; sampnum < samples; sampnum++)
+ {
+ m_audiohi_encoder.encode_one(bitbuf, *input++);
+ m_audiolo_encoder.encode_one(bitbuf, *input++);
+ }
+
+ // store the size of this stream
+ UINT32 cursize = bitbuf.flush() - totalsize;
+ totalsize += cursize;
+ if (totalsize >= channels * samples * 2)
+ break;
+ sizes[chnum * 2 + 2] = cursize >> 8;
+ sizes[chnum * 2 + 3] = cursize;
+ }
+
+ // if we ran out of room, throw it all away and just store raw
+ if (chnum < channels)
+ {
+ memcpy(dest, m_audiobuffer, channels * samples * 2);
+ UINT32 size = samples * 2;
+ sizes[0] = sizes[1] = 0;
+ for (chnum = 0; chnum < channels; chnum++)
+ {
+ sizes[chnum * 2 + 2] = size >> 8;
+ sizes[chnum * 2 + 3] = size;
+ }
+ }
+
+#endif
+
+ return AVHERR_NONE;
+}
+
+
+//-------------------------------------------------
+// encode_video - encode raw video data to the
+// destination
+//-------------------------------------------------
+
+avhuff_error avhuff_encoder::encode_video(const UINT8 *source, int width, int height, UINT8 *dest, UINT32 &complength)
+{
+ // only lossless supported at this time
+ return encode_video_lossless(source, width, height, dest, complength);
+}
+
+
+//-------------------------------------------------
+// encode_video_lossless - do a lossless video
+// encoding using deltas and huffman encoding
+//-------------------------------------------------
+
+avhuff_error avhuff_encoder::encode_video_lossless(const UINT8 *source, int width, int height, UINT8 *dest, UINT32 &complength)
+{
+ // set up the output; first byte is 0x80 to indicate lossless encoding
+ bitstream_out bitbuf(dest, width * height * 2);
+ bitbuf.write(0x80, 8);
+
+ // compute the histograms for the data
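+ // (the source is YUY16 data: each pixel pair is stored as Y0 Cb Y1 Cr,
+ // so Y samples advance by 2 bytes while Cb and Cr advance by 4)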
+ UINT16 *yrle = m_ycontext.rle_and_histo_bitmap(source + 0, width, 2, height);
+ UINT16 *cbrle = m_cbcontext.rle_and_histo_bitmap(source + 1, width / 2, 4, height);
+ UINT16 *crrle = m_crcontext.rle_and_histo_bitmap(source + 3, width / 2, 4, height);
+
+ // export the trees to the data stream
+ huffman_error hufferr = m_ycontext.export_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_COMPRESSION_ERROR;
+ bitbuf.flush();
+ hufferr = m_cbcontext.export_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_COMPRESSION_ERROR;
+ bitbuf.flush();
+ hufferr = m_crcontext.export_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_COMPRESSION_ERROR;
+ bitbuf.flush();
+
+ // encode the data using the trees
+ for (UINT32 sy = 0; sy < height; sy++)
+ {
+ m_ycontext.flush_rle();
+ m_cbcontext.flush_rle();
+ m_crcontext.flush_rle();
+ for (UINT32 sx = 0; sx < width / 2; sx++)
+ {
+ m_ycontext.encode_one(bitbuf, yrle);
+ m_cbcontext.encode_one(bitbuf, cbrle);
+ m_ycontext.encode_one(bitbuf, yrle);
+ m_crcontext.encode_one(bitbuf, crrle);
+ }
+ }
+
+ // set the final length
+ complength = bitbuf.flush();
+ return AVHERR_NONE;
+}
+
+
+
+//**************************************************************************
+// DELTA-RLE ENCODER
+//**************************************************************************
+
+//-------------------------------------------------
+// rle_and_histo_bitmap - RLE compress and
+// histogram a bitmap's worth of data
+//-------------------------------------------------
+
+UINT16 *avhuff_encoder::deltarle_encoder::rle_and_histo_bitmap(const UINT8 *source, UINT32 items_per_row, UINT32 item_advance, UINT32 row_count)
+{
+ // resize our RLE buffer
+ m_rlebuffer.resize(items_per_row * row_count);
+ UINT16 *dest = m_rlebuffer;
+
+ // iterate over rows
+ m_encoder.histo_reset();
+ UINT8 prevdata = 0;
+ for (UINT32 row = 0; row < row_count; row++)
+ {
+ const UINT8 *end = source + items_per_row * item_advance;
+ for ( ; source < end; source += item_advance)
+ {
+ // fetch current data
+ UINT8 curdelta = *source - prevdata;
+ prevdata = *source;
+
+ // for zero deltas, scan forward to find the run length
+ if (curdelta == 0)
+ {
+ int zerocount = 1;
+
+ // count the number of consecutive values
+ const UINT8 *scandata;
+ for (scandata = source + item_advance; scandata < end; scandata += item_advance)
+ if (*scandata == prevdata)
+ zerocount++;
+ else
+ break;
+
+ // if we hit the end of a row, maximize the count
+ if (scandata >= end && zerocount >= 8)
+ zerocount = 100000;
+
+ // encode the maximal count we can
+ int rlecode = rlecount_to_code(zerocount);
+ m_encoder.histo_one(*dest++ = rlecode);
+
+ // advance past the run
+ source += (code_to_rlecount(rlecode) - 1) * item_advance;
+ }
+
+ // otherwise, encode the actual data
+ else
+ m_encoder.histo_one(*dest++ = curdelta);
+ }
+
+ // advance to the next row
+ source = end;
+ }
+
+ // compute the tree for our histogram
+ m_encoder.compute_tree_from_histo();
+ return m_rlebuffer;
+}
+
+
+
+//**************************************************************************
+// AVHUFF DECODER
+//**************************************************************************
+
+//-------------------------------------------------
+// avhuff_decoder - constructor
+//-------------------------------------------------
+
+avhuff_decoder::avhuff_decoder()
+{
+}
+
+
+//-------------------------------------------------
+// configure - configure decompression parameters
+//-------------------------------------------------
+
+void avhuff_decoder::configure(const avhuff_decompress_config &config)
+{
+ m_config.video.wrap(config.video, config.video.cliprect());
+ m_config.maxsamples = config.maxsamples;
+ m_config.actsamples = config.actsamples;
+ memcpy(m_config.audio, config.audio, sizeof(m_config.audio));
+ m_config.maxmetalength = config.maxmetalength;
+ m_config.actmetalength = config.actmetalength;
+ m_config.metadata = config.metadata;
+}
+
+
+//-------------------------------------------------
+// decode_data - decode both audio and video from
+// a raw data stream
+//-------------------------------------------------
+
+avhuff_error avhuff_decoder::decode_data(const UINT8 *source, UINT32 complength, UINT8 *dest)
+{
+ // extract info from the header
+ if (complength < 8)
+ return AVHERR_INVALID_DATA;
+ UINT32 metasize = source[0];
+ UINT32 channels = source[1];
+ UINT32 samples = (source[2] << 8) + source[3];
+ UINT32 width = (source[4] << 8) + source[5];
+ UINT32 height = (source[6] << 8) + source[7];
+
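+ // bytes 8-9 hold the size of the audio huffman trees (0xffff for FLAC),
+ // followed by a 2-byte compressed size for each channel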
+ // validate that the sizes make sense
+ if (complength < 10 + 2 * channels)
+ return AVHERR_INVALID_DATA;
+ UINT32 totalsize = 10 + 2 * channels;
+ totalsize += (source[8] << 8) | source[9];
+ for (int chnum = 0; chnum < channels; chnum++)
+ totalsize += (source[10 + 2 * chnum] << 8) | source[11 + 2 * chnum];
+ if (totalsize >= complength)
+ return AVHERR_INVALID_DATA;
+
+ // starting offsets
+ UINT32 srcoffs = 10 + 2 * channels;
+
+ // if we are decoding raw, set up the output parameters
+ UINT8 *metastart, *videostart, *audiostart[16];
+ UINT32 audioxor, videoxor, videostride;
+ if (dest != NULL)
+ {
+ // create a header
+ dest[0] = 'c';
+ dest[1] = 'h';
+ dest[2] = 'a';
+ dest[3] = 'v';
+ dest[4] = metasize;
+ dest[5] = channels;
+ dest[6] = samples >> 8;
+ dest[7] = samples;
+ dest[8] = width >> 8;
+ dest[9] = width;
+ dest[10] = height >> 8;
+ dest[11] = height;
+ dest += 12;
+
+ // determine the start of each piece of data
+ metastart = dest;
+ dest += metasize;
+ for (int chnum = 0; chnum < channels; chnum++)
+ {
+ audiostart[chnum] = dest;
+ dest += 2 * samples;
+ }
+ videostart = dest;
+
+ // data is assumed to be big-endian already
+ audioxor = videoxor = 0;
+ videostride = 2 * width;
+ }
+
+ // otherwise, extract from the state
+ else
+ {
+ // determine the start of each piece of data
+ metastart = m_config.metadata;
+ for (int chnum = 0; chnum < channels; chnum++)
+ audiostart[chnum] = (UINT8 *)m_config.audio[chnum];
+ videostart = (m_config.video.valid()) ? reinterpret_cast<UINT8 *>(&m_config.video.pix(0)) : NULL;
+ videostride = (m_config.video.valid()) ? m_config.video.rowpixels() * 2 : 0;
+
+ // data is assumed to be native-endian
+ UINT16 betest = 0;
+ *(UINT8 *)&betest = 1;
+ audioxor = videoxor = (betest == 1) ? 1 : 0;
+
+ // verify against sizes
+ if (m_config.video.valid() && (m_config.video.width() < width || m_config.video.height() < height))
+ return AVHERR_VIDEO_TOO_LARGE;
+ for (int chnum = 0; chnum < channels; chnum++)
+ if (m_config.audio[chnum] != NULL && m_config.maxsamples < samples)
+ return AVHERR_AUDIO_TOO_LARGE;
+ if (m_config.metadata != NULL && m_config.maxmetalength < metasize)
+ return AVHERR_METADATA_TOO_LARGE;
+
+ // set the output values
+ if (m_config.actsamples != NULL)
+ *m_config.actsamples = samples;
+ if (m_config.actmetalength != NULL)
+ *m_config.actmetalength = metasize;
+ }
+
+ // copy the metadata first
+ if (metasize > 0)
+ {
+ if (metastart != NULL)
+ memcpy(metastart, source + srcoffs, metasize);
+ srcoffs += metasize;
+ }
+
+ // decode the audio channels
+ if (channels > 0)
+ {
+ // decode the audio
+ avhuff_error err = decode_audio(channels, samples, source + srcoffs, audiostart, audioxor, &source[8]);
+ if (err != AVHERR_NONE)
+ return err;
+
+ // advance the pointers past the data
+ UINT32 treesize = (source[8] << 8) + source[9];
+ if (treesize != 0xffff)
+ srcoffs += treesize;
+ for (int chnum = 0; chnum < channels; chnum++)
+ srcoffs += (source[10 + 2 * chnum] << 8) + source[11 + 2 * chnum];
+ }
+
+ // decode the video data
+ if (width > 0 && height > 0 && videostart != NULL)
+ {
+ // decode the video
+ avhuff_error err = decode_video(width, height, source + srcoffs, complength - srcoffs, videostart, videostride, videoxor);
+ if (err != AVHERR_NONE)
+ return err;
+ }
+ return AVHERR_NONE;
+}
+
+
+//-------------------------------------------------
+// decode_audio - decode audio from a compressed
+// data stream
+//-------------------------------------------------
+
+avhuff_error avhuff_decoder::decode_audio(int channels, int samples, const UINT8 *source, UINT8 **dest, UINT32 dxor, const UINT8 *sizes)
+{
+ // extract the huffman trees
+ UINT16 treesize = (sizes[0] << 8) | sizes[1];
+
+#if AVHUFF_USE_FLAC
+
+ // if the tree size is 0xffff, the streams are FLAC-encoded
+ if (treesize == 0xffff)
+ {
+ // output data is big-endian; determine our platform endianness
+ UINT16 be_test = 0;
+ *(UINT8 *)&be_test = 1;
+ bool swap_endian = (be_test == 1);
+ if (dxor != 0)
+ swap_endian = !swap_endian;
+
+ // loop over channels
+ for (int chnum = 0; chnum < channels; chnum++)
+ {
+ // extract the size of this channel
+ UINT16 size = (sizes[chnum * 2 + 2] << 8) | sizes[chnum * 2 + 3];
+
+ // only process if the data is requested
+ UINT8 *curdest = dest[chnum];
+ if (curdest != NULL)
+ {
+ // reset and decode
+ if (!m_flac_decoder.reset(48000, 1, samples, source, size))
+ throw CHDERR_DECOMPRESSION_ERROR;
+ if (!m_flac_decoder.decode_interleaved(reinterpret_cast<INT16 *>(curdest), samples, swap_endian))
+ throw CHDERR_DECOMPRESSION_ERROR;
+
+ // finish up
+ m_flac_decoder.finish();
+ }
+
+ // advance to the next channel's data
+ source += size;
+ }
+ return AVHERR_NONE;
+ }
+
+#endif
+
+ // if we have a non-zero tree size, extract the trees
+ if (treesize != 0)
+ {
+ bitstream_in bitbuf(source, treesize);
+ huffman_error hufferr = m_audiohi_decoder.import_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_INVALID_DATA;
+ bitbuf.flush();
+ hufferr = m_audiolo_decoder.import_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_INVALID_DATA;
+ if (bitbuf.flush() != treesize)
+ return AVHERR_INVALID_DATA;
+ source += treesize;
+ }
+
+ // loop over channels
+ for (int chnum = 0; chnum < channels; chnum++)
+ {
+ // extract the size of this channel
+ UINT16 size = (sizes[chnum * 2 + 2] << 8) | sizes[chnum * 2 + 3];
+
+ // only process if the data is requested
+ UINT8 *curdest = dest[chnum];
+ if (curdest != NULL)
+ {
+ INT16 prevsample = 0;
+
+ // if no huffman length, just copy the data
+ if (treesize == 0)
+ {
+ const UINT8 *cursource = source;
+ for (int sampnum = 0; sampnum < samples; sampnum++)
+ {
+ INT16 delta = (cursource[0] << 8) | cursource[1];
+ cursource += 2;
+
+ INT16 newsample = prevsample + delta;
+ prevsample = newsample;
+
+ curdest[0 ^ dxor] = newsample >> 8;
+ curdest[1 ^ dxor] = newsample;
+ curdest += 2;
+ }
+ }
+
+ // otherwise, Huffman-decode the data
+ else
+ {
+ bitstream_in bitbuf(source, size);
+ for (int sampnum = 0; sampnum < samples; sampnum++)
+ {
+ INT16 delta = m_audiohi_decoder.decode_one(bitbuf) << 8;
+ delta |= m_audiolo_decoder.decode_one(bitbuf);
+
+ INT16 newsample = prevsample + delta;
+ prevsample = newsample;
+
+ curdest[0 ^ dxor] = newsample >> 8;
+ curdest[1 ^ dxor] = newsample;
+ curdest += 2;
+ }
+ if (bitbuf.overflow())
+ return AVHERR_INVALID_DATA;
+ }
+ }
+
+ // advance to the next channel's data
+ source += size;
+ }
+ return AVHERR_NONE;
+}
+
+
+//-------------------------------------------------
+// decode_video - decode video from a compressed
+// data stream
+//-------------------------------------------------
+
+avhuff_error avhuff_decoder::decode_video(int width, int height, const UINT8 *source, UINT32 complength, UINT8 *dest, UINT32 dstride, UINT32 dxor)
+{
+ // if the high bit of the first byte is set, we decode losslessly
+ if (source[0] & 0x80)
+ return decode_video_lossless(width, height, source, complength, dest, dstride, dxor);
+ else
+ return AVHERR_INVALID_DATA;
+}
+
+
+//-------------------------------------------------
+// decode_video_lossless - do a lossless video
+// decoding using deltas and huffman encoding
+//-------------------------------------------------
+
+avhuff_error avhuff_decoder::decode_video_lossless(int width, int height, const UINT8 *source, UINT32 complength, UINT8 *dest, UINT32 dstride, UINT32 dxor)
+{
+ // skip the first byte
+ bitstream_in bitbuf(source, complength);
+ bitbuf.read(8);
+
+ // import the tables
+ huffman_error hufferr = m_ycontext.import_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_INVALID_DATA;
+ bitbuf.flush();
+ hufferr = m_cbcontext.import_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_INVALID_DATA;
+ bitbuf.flush();
+ hufferr = m_crcontext.import_tree_rle(bitbuf);
+ if (hufferr != HUFFERR_NONE)
+ return AVHERR_INVALID_DATA;
+ bitbuf.flush();
+
+ // decode to the destination
+ m_ycontext.reset();
+ m_cbcontext.reset();
+ m_crcontext.reset();
+ for (UINT32 dy = 0; dy < height; dy++)
+ {
+ UINT8 *row = dest + dy * dstride;
+ for (UINT32 dx = 0; dx < width / 2; dx++)
+ {
+ row[0 ^ dxor] = m_ycontext.decode_one(bitbuf);
+ row[1 ^ dxor] = m_cbcontext.decode_one(bitbuf);
+ row[2 ^ dxor] = m_ycontext.decode_one(bitbuf);
+ row[3 ^ dxor] = m_crcontext.decode_one(bitbuf);
+ row += 4;
+ }
+ m_ycontext.flush_rle();
+ m_cbcontext.flush_rle();
+ m_crcontext.flush_rle();
+ }
+
+ // report an error if we overflowed or decoded too little data
+ if (bitbuf.overflow() || bitbuf.flush() != complength)
+ return AVHERR_INVALID_DATA;
+ return AVHERR_NONE;
+}
diff --git a/src/lib/util/avhuff.h b/src/lib/util/avhuff.h
new file mode 100644
index 00000000000..6b192d3338e
--- /dev/null
+++ b/src/lib/util/avhuff.h
@@ -0,0 +1,231 @@
+/***************************************************************************
+
+ avhuff.h
+
+ Audio/video compression and decompression helpers.
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#pragma once
+
+#ifndef __AVHUFF_H__
+#define __AVHUFF_H__
+
+#include "osdcore.h"
+#include "coretmpl.h"
+#include "bitmap.h"
+#include "huffman.h"
+#include "flac.h"
+
+
+//**************************************************************************
+// CONSTANTS
+//**************************************************************************
+
+#define AVHUFF_USE_FLAC (1)
+
+
+// errors
+enum avhuff_error
+{
+ AVHERR_NONE = 0,
+ AVHERR_INVALID_DATA,
+ AVHERR_VIDEO_TOO_LARGE,
+ AVHERR_AUDIO_TOO_LARGE,
+ AVHERR_METADATA_TOO_LARGE,
+ AVHERR_OUT_OF_MEMORY,
+ AVHERR_COMPRESSION_ERROR,
+ AVHERR_TOO_MANY_CHANNELS,
+ AVHERR_INVALID_CONFIGURATION,
+ AVHERR_INVALID_PARAMETER,
+ AVHERR_BUFFER_TOO_SMALL
+};
+
+
+
+//**************************************************************************
+// TYPE DEFINITIONS
+//**************************************************************************
+
+// ======================> av_codec_decompress_config
+
+// decompression configuration
+class avhuff_decompress_config
+{
+public:
+ avhuff_decompress_config()
+ : maxsamples(0),
+ actsamples(NULL),
+ maxmetalength(0),
+ actmetalength(NULL),
+ metadata(NULL)
+ {
+ memset(audio, 0, sizeof(audio));
+ }
+
+ bitmap_yuy16 video; // pointer to video bitmap
+ UINT32 maxsamples; // maximum number of samples per channel
+ UINT32 * actsamples; // actual number of samples per channel
+ INT16 * audio[16]; // pointer to individual audio channels
+ UINT32 maxmetalength; // maximum length of metadata
+ UINT32 * actmetalength; // actual length of metadata
+ UINT8 * metadata; // pointer to metadata buffer
+};
+
+
+// ======================> avhuff_encoder
+
+// core state for the codec
+class avhuff_encoder
+{
+public:
+ // construction/destruction
+ avhuff_encoder();
+
+ // encode/decode
+ avhuff_error encode_data(const UINT8 *source, UINT8 *dest, UINT32 &complength);
+
+ // static helpers
+ static UINT32 raw_data_size(const UINT8 *data);
+ static UINT32 raw_data_size(UINT32 width, UINT32 height, UINT8 channels, UINT32 numsamples, UINT32 metadatasize = 0) { return 12 + channels * numsamples * 2 + width * height * 2; }
+ static avhuff_error assemble_data(UINT8 *dest, UINT32 dlength, bitmap_yuy16 &bitmap, UINT8 channels, UINT32 numsamples, INT16 **samples, UINT8 *metadata = NULL, UINT32 metadatasize = 0);
+
+private:
+ // delta-RLE Huffman encoder
+ class deltarle_encoder
+ {
+ public:
+ // construction/destruction
+ deltarle_encoder()
+ : m_rlecount(0) { }
+
+ // histogramming
+ UINT16 *rle_and_histo_bitmap(const UINT8 *source, UINT32 items_per_row, UINT32 item_advance, UINT32 row_count);
+
+ // encoding
+ void flush_rle() { m_rlecount = 0; }
+ void encode_one(bitstream_out &bitbuf, UINT16 *&rleptr);
+ huffman_error export_tree_rle(bitstream_out &bitbuf) { return m_encoder.export_tree_rle(bitbuf); }
+
+ private:
+ // internal state
+ int m_rlecount;
+ huffman_encoder<256 + 16> m_encoder;
+ dynamic_array<UINT16> m_rlebuffer;
+ };
+
+ // internal helpers
+ avhuff_error encode_audio(const UINT8 *source, int channels, int samples, UINT8 *dest, UINT8 *sizes);
+ avhuff_error encode_video(const UINT8 *source, int width, int height, UINT8 *dest, UINT32 &complength);
+ avhuff_error encode_video_lossless(const UINT8 *source, int width, int height, UINT8 *dest, UINT32 &complength);
+
+ // video encoding contexts
+ deltarle_encoder m_ycontext;
+ deltarle_encoder m_cbcontext;
+ deltarle_encoder m_crcontext;
+
+ // audio encoding contexts
+ dynamic_buffer m_audiobuffer;
+#if AVHUFF_USE_FLAC
+ flac_encoder m_flac_encoder;
+#else
+ huffman_8bit_encoder m_audiohi_encoder;
+ huffman_8bit_encoder m_audiolo_encoder;
+#endif
+};
+
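+// A minimal encoding sketch (illustrative only; the compressed output
+// buffer, source bitmap, sample pointers and metadata are assumed to
+// be provided by the caller):
+//
+//     avhuff_encoder encoder;
+//     UINT32 rawsize = avhuff_encoder::raw_data_size(bitmap.width(), bitmap.height(), channels, numsamples, metasize);
+//     dynamic_buffer raw;
+//     raw.resize(rawsize);
+//     avhuff_error err = avhuff_encoder::assemble_data(raw, rawsize, bitmap, channels, numsamples, samples, metadata, metasize);
+//     UINT32 complength;
+//     if (err == AVHERR_NONE)
+//         err = encoder.encode_data(raw, compressed, complength);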
+
+// ======================> avhuff_decoder
+
+// core state for the codec
+class avhuff_decoder
+{
+public:
+ // construction/destruction
+ avhuff_decoder();
+
+ // configuration
+ void configure(const avhuff_decompress_config &config);
+
+ // encode/decode
+ avhuff_error decode_data(const UINT8 *source, UINT32 complength, UINT8 *dest);
+
+private:
+ // delta-RLE Huffman decoder
+ class deltarle_decoder
+ {
+ public:
+ // construction/destruction
+ deltarle_decoder()
+ : m_rlecount(0), m_prevdata(0) { }
+
+ // general
+ void reset() { m_rlecount = m_prevdata = 0; }
+
+ // decoding
+ void flush_rle() { m_rlecount = 0; }
+ UINT32 decode_one(bitstream_in &bitbuf);
+ huffman_error import_tree_rle(bitstream_in &bitbuf) { return m_decoder.import_tree_rle(bitbuf); }
+
+ private:
+ // internal state
+ int m_rlecount;
+ UINT8 m_prevdata;
+ huffman_decoder<256 + 16> m_decoder;
+ };
+
+
+ // internal helpers
+ avhuff_error decode_audio(int channels, int samples, const UINT8 *source, UINT8 **dest, UINT32 dxor, const UINT8 *sizes);
+ avhuff_error decode_video(int width, int height, const UINT8 *source, UINT32 complength, UINT8 *dest, UINT32 dstride, UINT32 dxor);
+ avhuff_error decode_video_lossless(int width, int height, const UINT8 *source, UINT32 complength, UINT8 *dest, UINT32 dstride, UINT32 dxor);
+
+ // internal state
+ avhuff_decompress_config m_config;
+
+ // video decoding contexts
+ deltarle_decoder m_ycontext;
+ deltarle_decoder m_cbcontext;
+ deltarle_decoder m_crcontext;
+
+ // audio decoding contexts
+ huffman_8bit_decoder m_audiohi_decoder;
+ huffman_8bit_decoder m_audiolo_decoder;
+#if AVHUFF_USE_FLAC
+ flac_decoder m_flac_decoder;
+#endif
+};
+
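+// A minimal decoding sketch (illustrative only; the target bitmap,
+// sample buffers and compressed data are assumed to be provided by
+// the caller); passing NULL as the destination decodes into the
+// configured buffers:
+//
+//     avhuff_decoder decoder;
+//     avhuff_decompress_config config;
+//     config.video.wrap(bitmap, bitmap.cliprect());
+//     config.maxsamples = maxsamples;
+//     config.actsamples = &actsamples;
+//     config.audio[0] = leftchannel;
+//     config.audio[1] = rightchannel;
+//     decoder.configure(config);
+//     avhuff_error err = decoder.decode_data(compressed, complength, NULL);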
+
+#endif
diff --git a/src/lib/util/aviio.c b/src/lib/util/aviio.c
index 33dd799fcf7..d8c7edaf649 100644
--- a/src/lib/util/aviio.c
+++ b/src/lib/util/aviio.c
@@ -1573,7 +1573,7 @@ static avi_error parse_indx_chunk(avi_file *file, avi_stream *stream, avi_chunk
{
const UINT8 *base = &chunkdata[24 + entry * 4 * longs_per_entry];
UINT32 offset = fetch_32bits(&base[0]);
- UINT32 size = fetch_32bits(&base[4]);
+ UINT32 size = fetch_32bits(&base[4]) & 0x7fffffff; // bit 31 == NOT a keyframe
/* set the info for this chunk and advance */
avierr = set_stream_chunk_info(stream, stream->chunks++, baseoffset + offset - 8, size + 8);
diff --git a/src/lib/util/bitmap.c b/src/lib/util/bitmap.c
index a178e43beea..e8dffaa6b58 100644
--- a/src/lib/util/bitmap.c
+++ b/src/lib/util/bitmap.c
@@ -271,7 +271,7 @@ void bitmap_t::wrap(void *base, int width, int height, int rowpixels)
// bitmap does not own the memory
//-------------------------------------------------
-void bitmap_t::wrap(bitmap_t &source, const rectangle &subrect)
+void bitmap_t::wrap(const bitmap_t &source, const rectangle &subrect)
{
assert(m_format == source.m_format);
assert(m_bpp == source.m_bpp);
diff --git a/src/lib/util/bitmap.h b/src/lib/util/bitmap.h
index b6ae74b5a80..cbbc9191b4e 100644
--- a/src/lib/util/bitmap.h
+++ b/src/lib/util/bitmap.h
@@ -177,7 +177,7 @@ public:
protected:
// for use by subclasses only to ensure type correctness
void wrap(void *base, int width, int height, int rowpixels);
- void wrap(bitmap_t &source, const rectangle &subrect);
+ void wrap(const bitmap_t &source, const rectangle &subrect);
private:
// internal helpers
@@ -306,7 +306,7 @@ public:
bitmap_ind8(UINT8 *base, int width, int height, int rowpixels) : bitmap8_t(k_bitmap_format, base, width, height, rowpixels) { }
bitmap_ind8(bitmap_ind8 &source, const rectangle &subrect) : bitmap8_t(k_bitmap_format, source, subrect) { }
void wrap(UINT8 *base, int width, int height, int rowpixels) { bitmap_t::wrap(base, width, height, rowpixels); }
- void wrap(bitmap_ind8 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<bitmap_t &>(source), subrect); }
+ void wrap(const bitmap_ind8 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<const bitmap_t &>(source), subrect); }
// getters
bitmap_format format() const { return k_bitmap_format; }
@@ -323,7 +323,7 @@ public:
bitmap_ind16(UINT16 *base, int width, int height, int rowpixels) : bitmap16_t(k_bitmap_format, base, width, height, rowpixels) { }
bitmap_ind16(bitmap_ind16 &source, const rectangle &subrect) : bitmap16_t(k_bitmap_format, source, subrect) { }
void wrap(UINT16 *base, int width, int height, int rowpixels) { bitmap_t::wrap(base, width, height, rowpixels); }
- void wrap(bitmap_ind8 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<bitmap_t &>(source), subrect); }
+ void wrap(const bitmap_ind8 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<const bitmap_t &>(source), subrect); }
// getters
bitmap_format format() const { return k_bitmap_format; }
@@ -340,7 +340,7 @@ public:
bitmap_ind32(UINT32 *base, int width, int height, int rowpixels) : bitmap32_t(k_bitmap_format, base, width, height, rowpixels) { }
bitmap_ind32(bitmap_ind32 &source, const rectangle &subrect) : bitmap32_t(k_bitmap_format, source, subrect) { }
void wrap(UINT32 *base, int width, int height, int rowpixels) { bitmap_t::wrap(base, width, height, rowpixels); }
- void wrap(bitmap_ind8 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<bitmap_t &>(source), subrect); }
+ void wrap(const bitmap_ind8 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<const bitmap_t &>(source), subrect); }
// getters
bitmap_format format() const { return k_bitmap_format; }
@@ -357,7 +357,7 @@ public:
bitmap_ind64(UINT64 *base, int width, int height, int rowpixels) : bitmap64_t(k_bitmap_format, base, width, height, rowpixels) { }
bitmap_ind64(bitmap_ind64 &source, const rectangle &subrect) : bitmap64_t(k_bitmap_format, source, subrect) { }
void wrap(UINT64 *base, int width, int height, int rowpixels) { bitmap_t::wrap(base, width, height, rowpixels); }
- void wrap(bitmap_ind8 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<bitmap_t &>(source), subrect); }
+ void wrap(const bitmap_ind8 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<const bitmap_t &>(source), subrect); }
// getters
bitmap_format format() const { return k_bitmap_format; }
@@ -377,7 +377,7 @@ public:
bitmap_yuy16(UINT16 *base, int width, int height, int rowpixels) : bitmap16_t(k_bitmap_format, base, width, height, rowpixels) { }
bitmap_yuy16(bitmap_yuy16 &source, const rectangle &subrect) : bitmap16_t(k_bitmap_format, source, subrect) { }
void wrap(UINT16 *base, int width, int height, int rowpixels) { bitmap_t::wrap(base, width, height, rowpixels); }
- void wrap(bitmap_yuy16 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<bitmap_t &>(source), subrect); }
+ void wrap(const bitmap_yuy16 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<const bitmap_t &>(source), subrect); }
// getters
bitmap_format format() const { return k_bitmap_format; }
@@ -394,7 +394,7 @@ public:
bitmap_rgb32(UINT32 *base, int width, int height, int rowpixels) : bitmap32_t(k_bitmap_format, base, width, height, rowpixels) { }
bitmap_rgb32(bitmap_rgb32 &source, const rectangle &subrect) : bitmap32_t(k_bitmap_format, source, subrect) { }
void wrap(UINT32 *base, int width, int height, int rowpixels) { bitmap_t::wrap(base, width, height, rowpixels); }
- void wrap(bitmap_rgb32 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<bitmap_t &>(source), subrect); }
+ void wrap(const bitmap_rgb32 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<const bitmap_t &>(source), subrect); }
// getters
bitmap_format format() const { return k_bitmap_format; }
@@ -411,7 +411,7 @@ public:
bitmap_argb32(UINT32 *base, int width, int height, int rowpixels) : bitmap32_t(k_bitmap_format, base, width, height, rowpixels) { }
bitmap_argb32(bitmap_argb32 &source, const rectangle &subrect) : bitmap32_t(k_bitmap_format, source, subrect) { }
void wrap(UINT32 *base, int width, int height, int rowpixels) { bitmap_t::wrap(base, width, height, rowpixels); }
- void wrap(bitmap_argb32 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<bitmap_t &>(source), subrect); }
+ void wrap(const bitmap_argb32 &source, const rectangle &subrect) { bitmap_t::wrap(static_cast<const bitmap_t &>(source), subrect); }
// getters
bitmap_format format() const { return k_bitmap_format; }
diff --git a/src/lib/util/bitstream.h b/src/lib/util/bitstream.h
new file mode 100644
index 00000000000..d95cf6c6bd1
--- /dev/null
+++ b/src/lib/util/bitstream.h
@@ -0,0 +1,265 @@
+/***************************************************************************
+
+ bitstream.h
+
+ Helper classes for reading/writing at the bit level.
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#pragma once
+
+#ifndef __BITSTREAM_H__
+#define __BITSTREAM_H__
+
+#include "osdcore.h"
+
+
+//**************************************************************************
+// TYPE DEFINITIONS
+//**************************************************************************
+
+// helper class for reading from a bit buffer
+class bitstream_in
+{
+public:
+ // construction/destruction
+ bitstream_in(const void *src, UINT32 srclength);
+
+ // getters
+ bool overflow() const { return ((m_doffset - m_bits / 8) > m_dlength); }
+ UINT32 read_offset() const;
+
+ // operations
+ UINT32 read(int numbits);
+ UINT32 peek(int numbits);
+ void remove(int numbits);
+ UINT32 flush();
+
+private:
+ // internal state
+ UINT32 m_buffer; // current bit accumulator
+ int m_bits; // number of bits in the accumulator
+ const UINT8 * m_read; // read pointer
+ UINT32 m_doffset; // byte offset within the data
+ UINT32 m_dlength; // length of the data
+};
+
+
+// helper class for writing to a bit buffer
+class bitstream_out
+{
+public:
+ // construction/destruction
+ bitstream_out(void *dest, UINT32 destlength);
+
+ // getters
+ bool overflow() const { return (m_doffset > m_dlength); }
+
+ // operations
+ void write(UINT32 newbits, int numbits);
+ UINT32 flush();
+
+private:
+ // internal state
+ UINT32 m_buffer; // current bit accumulator
+ int m_bits; // number of bits in the accumulator
+ UINT8 * m_write; // write pointer
+ UINT32 m_doffset; // byte offset within the data
+ UINT32 m_dlength; // length of the data
+};
+
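+// A minimal usage sketch (illustrative only):
+//
+//     UINT8 buffer[16];
+//     bitstream_out output(buffer, sizeof(buffer));
+//     output.write(0x5, 3);               // append 3 bits: 101
+//     output.write(0xabc, 12);            // append 12 more bits
+//     UINT32 length = output.flush();     // pad to a byte boundary; length == 2
+//
+//     bitstream_in input(buffer, length);
+//     UINT32 first = input.read(3);       // first == 0x5
+//     UINT32 second = input.read(12);     // second == 0xabc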
+
+
+//**************************************************************************
+// INLINE FUNCTIONS
+//**************************************************************************
+
+//-------------------------------------------------
+// bitstream_in - constructor
+//-------------------------------------------------
+
+inline bitstream_in::bitstream_in(const void *src, UINT32 srclength)
+ : m_buffer(0),
+ m_bits(0),
+ m_read(reinterpret_cast<const UINT8 *>(src)),
+ m_doffset(0),
+ m_dlength(srclength)
+{
+}
+
+
+//-------------------------------------------------
+// peek - fetch the requested number of bits
+// but don't advance the input pointer
+//-------------------------------------------------
+
+inline UINT32 bitstream_in::peek(int numbits)
+{
+ // fetch data if we need more
+ if (numbits > m_bits)
+ {
+ while (m_bits <= 24)
+ {
+ if (m_doffset < m_dlength)
+ m_buffer |= m_read[m_doffset] << (24 - m_bits);
+ m_doffset++;
+ m_bits += 8;
+ }
+ }
+
+ // return the data
+ return m_buffer >> (32 - numbits);
+}
+
+
+//-------------------------------------------------
+// remove - advance the input pointer by the
+// specified number of bits
+//-------------------------------------------------
+
+inline void bitstream_in::remove(int numbits)
+{
+ m_buffer <<= numbits;
+ m_bits -= numbits;
+}
+
+
+//-------------------------------------------------
+// read - fetch the requested number of bits
+//-------------------------------------------------
+
+inline UINT32 bitstream_in::read(int numbits)
+{
+ UINT32 result = peek(numbits);
+ remove(numbits);
+ return result;
+}
+
+
+//-------------------------------------------------
+// read_offset - return the current read offset
+//-------------------------------------------------
+
+inline UINT32 bitstream_in::read_offset() const
+{
+ UINT32 result = m_doffset;
+ int bits = m_bits;
+ while (bits >= 8)
+ {
+ result--;
+ bits -= 8;
+ }
+ return result;
+}
+
+
+//-------------------------------------------------
+// flush - flush to the nearest byte
+//-------------------------------------------------
+
+inline UINT32 bitstream_in::flush()
+{
+ while (m_bits >= 8)
+ {
+ m_doffset--;
+ m_bits -= 8;
+ }
+ m_bits = m_buffer = 0;
+ return m_doffset;
+}
+
+
+//-------------------------------------------------
+// bitstream_out - constructor
+//-------------------------------------------------
+
+inline bitstream_out::bitstream_out(void *dest, UINT32 destlength)
+ : m_buffer(0),
+ m_bits(0),
+ m_write(reinterpret_cast<UINT8 *>(dest)),
+ m_doffset(0),
+ m_dlength(destlength)
+{
+}
+
+
+
+//-------------------------------------------------
+// write - write the given number of bits to the
+// data stream
+//-------------------------------------------------
+
+inline void bitstream_out::write(UINT32 newbits, int numbits)
+{
+ // flush the buffer if we're going to overflow it
+ if (m_bits + numbits > 32)
+ while (m_bits >= 8)
+ {
+ if (m_doffset < m_dlength)
+ m_write[m_doffset] = m_buffer >> 24;
+ m_doffset++;
+ m_buffer <<= 8;
+ m_bits -= 8;
+ }
+
+ // shift the bits to the top
+ newbits <<= 32 - numbits;
+
+ // now shift it down to account for the number of bits we already have and OR them in
+ m_buffer |= newbits >> m_bits;
+ m_bits += numbits;
+}
+
+
+//-------------------------------------------------
+// flush - output remaining bits and return the
+// final output size in bytes
+//-------------------------------------------------
+
+inline UINT32 bitstream_out::flush()
+{
+ while (m_bits > 0)
+ {
+ if (m_doffset < m_dlength)
+ m_write[m_doffset] = m_buffer >> 24;
+ m_doffset++;
+ m_buffer <<= 8;
+ m_bits -= 8;
+ }
+ m_bits = m_buffer = 0;
+ return m_doffset;
+}
+
+
+#endif
diff --git a/src/lib/util/cdrom.c b/src/lib/util/cdrom.c
index 17d6bf75c12..c14aa692f3a 100644
--- a/src/lib/util/cdrom.c
+++ b/src/lib/util/cdrom.c
@@ -40,7 +40,7 @@
IMPORTANT:
"physical" block addresses are the actual addresses on the emulated CD.
"chd" block addresses are the block addresses in the CHD file.
- Because we pad each track to a hunk boundry, these addressing
+ Because we pad each track to a 4-frame boundary, these addressing
schemes will differ after track 1!
***************************************************************************/
@@ -75,23 +75,12 @@ struct _cdrom_file
chd_file * chd; /* CHD file */
cdrom_toc cdtoc; /* TOC for the CD */
chdcd_track_input_info track_info; /* track info */
- UINT32 hunksectors; /* sectors per hunk */
- UINT32 cachehunk; /* which hunk is cached */
- UINT8 * cache; /* cache of the current hunk */
core_file * fhandle[CD_MAX_TRACKS];/* file handle */
};
/***************************************************************************
- FUNCTION PROTOTYPES
-***************************************************************************/
-
-static chd_error read_sector_into_cache(cdrom_file *file, UINT32 lbasector, UINT32 *sectoroffs, UINT32 *tracknum);
-
-
-
-/***************************************************************************
INLINE FUNCTIONS
***************************************************************************/
@@ -100,7 +89,7 @@ static chd_error read_sector_into_cache(cdrom_file *file, UINT32 lbasector, UINT
and the track number
-------------------------------------------------*/
-INLINE UINT32 physical_to_chd_lba(cdrom_file *file, UINT32 physlba, UINT32 *tracknum)
+INLINE UINT32 physical_to_chd_lba(cdrom_file *file, UINT32 physlba, UINT32 &tracknum)
{
UINT32 chdlba;
int track;
@@ -110,8 +99,7 @@ INLINE UINT32 physical_to_chd_lba(cdrom_file *file, UINT32 physlba, UINT32 *trac
if (physlba < file->cdtoc.tracks[track + 1].physframeofs)
{
chdlba = physlba - file->cdtoc.tracks[track].physframeofs + file->cdtoc.tracks[track].chdframeofs;
- if (tracknum != NULL)
- *tracknum = track;
+ tracknum = track;
return chdlba;
}
@@ -136,31 +124,29 @@ cdrom_file *cdrom_open(const char *inputfile)
return NULL;
/* setup the CDROM module and get the disc info */
- chd_error err = chdcd_parse_toc(inputfile, &file->cdtoc, &file->track_info);
+ chd_error err = chdcd_parse_toc(inputfile, file->cdtoc, file->track_info);
if (err != CHDERR_NONE)
{
- fprintf(stderr, "Error reading input file: %s\n", chd_error_string(err));
+ fprintf(stderr, "Error reading input file: %s\n", chd_file::error_string(err));
return NULL;
}
/* fill in the data */
file->chd = NULL;
- file->hunksectors = 1;
- file->cachehunk = -1;
LOG(("CD has %d tracks\n", file->cdtoc.numtrks));
for (i = 0; i < file->cdtoc.numtrks; i++)
{
- file_error filerr = core_fopen(file->track_info.fname[i], OPEN_FLAG_READ, &file->fhandle[i]);
+ file_error filerr = core_fopen(file->track_info.track[i].fname, OPEN_FLAG_READ, &file->fhandle[i]);
if (filerr != FILERR_NONE)
{
- fprintf(stderr, "Unable to open file: %s\n", file->track_info.fname[i]);
+ fprintf(stderr, "Unable to open file: %s\n", file->track_info.track[i].fname.cstr());
return NULL;
}
}
/* calculate the starting frame for each track, keeping in mind that CHDMAN
- pads tracks out with extra frames to fit hunk size boundries
+ pads tracks out with extra frames to fit 4-frame boundaries
*/
physofs = 0;
for (i = 0; i < file->cdtoc.numtrks; i++)
@@ -185,14 +171,6 @@ cdrom_file *cdrom_open(const char *inputfile)
file->cdtoc.tracks[i].physframeofs = physofs;
file->cdtoc.tracks[i].chdframeofs = 0;
- /* allocate a cache */
- file->cache = (UINT8 *)malloc(CD_FRAME_SIZE);
- if (file->cache == NULL)
- {
- free(file);
- return NULL;
- }
-
return file;
}
@@ -203,7 +181,6 @@ cdrom_file *cdrom_open(const char *inputfile)
cdrom_file *cdrom_open(chd_file *chd)
{
- const chd_header *header = chd_get_header(chd);
int i;
cdrom_file *file;
UINT32 physofs, chdofs;
@@ -214,7 +191,9 @@ cdrom_file *cdrom_open(chd_file *chd)
return NULL;
/* validate the CHD information */
- if (header->hunkbytes % CD_FRAME_SIZE != 0)
+ if (chd->hunk_bytes() % CD_FRAME_SIZE != 0)
+ return NULL;
+ if (chd->unit_bytes() != CD_FRAME_SIZE)
return NULL;
/* allocate memory for the CD-ROM file */
@@ -224,8 +203,6 @@ cdrom_file *cdrom_open(chd_file *chd)
/* fill in the data */
file->chd = chd;
- file->hunksectors = header->hunkbytes / CD_FRAME_SIZE;
- file->cachehunk = -1;
/* read the CD-ROM metadata */
err = cdrom_parse_metadata(chd, &file->cdtoc);
@@ -238,7 +215,7 @@ cdrom_file *cdrom_open(chd_file *chd)
LOG(("CD has %d tracks\n", file->cdtoc.numtrks));
/* calculate the starting frame for each track, keeping in mind that CHDMAN
- pads tracks out with extra frames to fit hunk size boundries
+ pads tracks out with extra frames to fit 4-frame boundaries
*/
physofs = chdofs = 0;
for (i = 0; i < file->cdtoc.numtrks; i++)
@@ -265,14 +242,6 @@ cdrom_file *cdrom_open(chd_file *chd)
file->cdtoc.tracks[i].physframeofs = physofs;
file->cdtoc.tracks[i].chdframeofs = chdofs;
- /* allocate a cache */
- file->cache = (UINT8 *)malloc(chd_get_header(chd)->hunkbytes);
- if (file->cache == NULL)
- {
- free(file);
- return NULL;
- }
-
return file;
}
@@ -286,10 +255,6 @@ void cdrom_close(cdrom_file *file)
if (file == NULL)
return;
- /* free the cache */
- if (file->cache)
- free(file->cache);
-
if (file->chd == NULL)
{
for (int i = 0; i < file->cdtoc.numtrks; i++)
@@ -307,6 +272,37 @@ void cdrom_close(cdrom_file *file)
CORE READ ACCESS
***************************************************************************/
+chd_error read_partial_sector(cdrom_file *file, void *dest, UINT32 chdsector, UINT32 tracknum, UINT32 startoffs, UINT32 length)
+{
+ // if a CHD, just read
+ if (file->chd != NULL)
+ return file->chd->read_bytes(UINT64(chdsector) * UINT64(CD_FRAME_SIZE) + startoffs, dest, length);
+
+ // else read from the appropriate file
+ core_file *srcfile = file->fhandle[tracknum];
+
+ UINT64 sourcefileoffset = file->track_info.track[tracknum].offset;
+ int bytespersector = file->cdtoc.tracks[tracknum].datasize + file->cdtoc.tracks[tracknum].subsize;
+
+ sourcefileoffset += chdsector * bytespersector + startoffs;
+
+ core_fseek(srcfile, sourcefileoffset, SEEK_SET);
+ core_fread(srcfile, dest, length);
+
+ if (file->track_info.track[tracknum].swap)
+ {
+ UINT8 *buffer = (UINT8 *)dest - startoffs;
+ for (int swapindex = startoffs; swapindex < 2352; swapindex += 2 )
+ {
+ int swaptemp = buffer[ swapindex ];
+ buffer[ swapindex ] = buffer[ swapindex + 1 ];
+ buffer[ swapindex + 1 ] = swaptemp;
+ }
+ }
+ return CHDERR_NONE;
+}
+
+
/*-------------------------------------------------
cdrom_read_data - read one or more sectors
from a CD-ROM
@@ -314,31 +310,25 @@ void cdrom_close(cdrom_file *file)
UINT32 cdrom_read_data(cdrom_file *file, UINT32 lbasector, void *buffer, UINT32 datatype)
{
- UINT32 tracktype, tracknum, sectoroffs;
- chd_error err;
- static const UINT8 syncbytes[12] = {0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00};
-
if (file == NULL)
return 0;
- /* cache in the sector */
- err = read_sector_into_cache(file, lbasector, &sectoroffs, &tracknum);
- if (err != CHDERR_NONE)
- return 0;
+ // compute CHD sector and tracknumber
+ UINT32 tracknum = 0;
+ UINT32 chdsector = physical_to_chd_lba(file, lbasector, tracknum);
/* copy out the requested sector */
- tracktype = file->cdtoc.tracks[tracknum].trktype;
+ UINT32 tracktype = file->cdtoc.tracks[tracknum].trktype;
if ((datatype == tracktype) || (datatype == CD_TRACK_RAW_DONTCARE))
{
- memcpy(buffer, &file->cache[sectoroffs * CD_FRAME_SIZE], file->cdtoc.tracks[tracknum].datasize);
+ return (read_partial_sector(file, buffer, chdsector, tracknum, 0, file->cdtoc.tracks[tracknum].datasize) == CHDERR_NONE);
}
else
{
/* return 2048 bytes of mode 1 data from a 2352 byte mode 1 raw sector */
if ((datatype == CD_TRACK_MODE1) && (tracktype == CD_TRACK_MODE1_RAW))
{
- memcpy(buffer, &file->cache[(sectoroffs * CD_FRAME_SIZE) + 16], 2048);
- return 1;
+ return (read_partial_sector(file, buffer, chdsector, tracknum, 16, 2048) == CHDERR_NONE);
}
/* return 2352 byte mode 1 raw sector from 2048 bytes of mode 1 data */
@@ -347,34 +337,31 @@ UINT32 cdrom_read_data(cdrom_file *file, UINT32 lbasector, void *buffer, UINT32
UINT8 *bufptr = (UINT8 *)buffer;
UINT32 msf = lba_to_msf(lbasector);
+ static const UINT8 syncbytes[12] = {0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00};
memcpy(bufptr, syncbytes, 12);
bufptr[12] = msf>>16;
bufptr[13] = msf>>8;
bufptr[14] = msf&0xff;
bufptr[15] = 1; // mode 1
- memcpy(bufptr+16, &file->cache[(sectoroffs * CD_FRAME_SIZE)], 2048);
LOG(("CDROM: promotion of mode1/form1 sector to mode1 raw is not complete!\n"));
- return 1;
+ return (read_partial_sector(file, bufptr+16, chdsector, tracknum, 0, 2048) == CHDERR_NONE);
}
/* return 2048 bytes of mode 1 data from a mode2 form1 or raw sector */
if ((datatype == CD_TRACK_MODE1) && ((tracktype == CD_TRACK_MODE2_FORM1)||(tracktype == CD_TRACK_MODE2_RAW)))
{
- memcpy(buffer, &file->cache[(sectoroffs * CD_FRAME_SIZE) + 24], 2048);
- return 1;
+ return (read_partial_sector(file, buffer, chdsector, tracknum, 24, 2048) == CHDERR_NONE);
}
/* return mode 2 2336 byte data from a 2352 byte mode 1 or 2 raw sector (skip the header) */
if ((datatype == CD_TRACK_MODE2) && ((tracktype == CD_TRACK_MODE1_RAW) || (tracktype == CD_TRACK_MODE2_RAW)))
{
- memcpy(buffer, &file->cache[(sectoroffs * CD_FRAME_SIZE) + 16], 2336);
- return 1;
+ return (read_partial_sector(file, buffer, chdsector, tracknum, 16, 2336) == CHDERR_NONE);
}
LOG(("CDROM: Conversion from type %d to type %d not supported!\n", tracktype, datatype));
return 0;
}
- return 1;
}
@@ -385,20 +372,18 @@ UINT32 cdrom_read_data(cdrom_file *file, UINT32 lbasector, void *buffer, UINT32
UINT32 cdrom_read_subcode(cdrom_file *file, UINT32 lbasector, void *buffer)
{
- UINT32 sectoroffs, tracknum;
- chd_error err;
-
if (file == NULL)
return ~0;
- /* cache in the sector */
- err = read_sector_into_cache(file, lbasector, &sectoroffs, &tracknum);
- if (err != CHDERR_NONE)
- return 0;
-
- /* copy out the requested data */
- memcpy(buffer, &file->cache[(sectoroffs * CD_FRAME_SIZE) + file->cdtoc.tracks[tracknum].datasize], file->cdtoc.tracks[tracknum].subsize);
- return 1;
+ // compute CHD sector and tracknumber
+ UINT32 tracknum = 0;
+ UINT32 chdsector = physical_to_chd_lba(file, lbasector, tracknum);
+ if (file->cdtoc.tracks[tracknum].subsize == 0)
+ return 1;
+
+ // read the data
+ chd_error err = read_partial_sector(file, buffer, chdsector, tracknum, file->cdtoc.tracks[tracknum].datasize, file->cdtoc.tracks[tracknum].subsize);
+ return (err == CHDERR_NONE);
}
@@ -420,7 +405,7 @@ UINT32 cdrom_get_track(cdrom_file *file, UINT32 frame)
return ~0;
/* convert to a CHD sector offset and get track information */
- physical_to_chd_lba(file, frame, &track);
+ physical_to_chd_lba(file, frame, track);
return track;
}
@@ -704,81 +689,27 @@ const char *cdrom_get_subtype_string(UINT32 subtype)
***************************************************************************/
/*-------------------------------------------------
- read_sector_into_cache - cache a sector at
- the given physical LBA
--------------------------------------------------*/
-
-static chd_error read_sector_into_cache(cdrom_file *file, UINT32 lbasector, UINT32 *sectoroffs, UINT32 *tracknum)
-{
- UINT32 chdsector, hunknum;
- chd_error err;
-
- /* convert to a CHD sector offset and get track information */
- *tracknum = 0;
- chdsector = physical_to_chd_lba(file, lbasector, tracknum);
- hunknum = chdsector / file->hunksectors;
- *sectoroffs = chdsector % file->hunksectors;
-
- /* if we haven't cached this hunk, read it now */
- if (file->cachehunk != hunknum)
- {
- if (file->chd) {
- err = chd_read(file->chd, hunknum, file->cache);
- if (err != CHDERR_NONE)
- return err;
- } else {
- core_file *srcfile = file->fhandle[*tracknum];
-
- UINT64 sourcefileoffset = file->track_info.offset[*tracknum];
- int bytespersector = file->cdtoc.tracks[*tracknum].datasize + file->cdtoc.tracks[*tracknum].subsize;
-
- sourcefileoffset += chdsector * bytespersector;
-
- core_fseek(srcfile, sourcefileoffset, SEEK_SET);
- core_fread(srcfile, file->cache, bytespersector);
-
- if (file->track_info.swap[*tracknum])
- {
- for (int swapindex = 0; swapindex < 2352; swapindex += 2 )
- {
- int swaptemp = file->cache[ swapindex ];
- file->cache[ swapindex ] = file->cache[ swapindex + 1 ];
- file->cache[ swapindex + 1 ] = swaptemp;
- }
- }
- }
-
- file->cachehunk = hunknum;
- }
- return CHDERR_NONE;
-}
-
-
-/*-------------------------------------------------
cdrom_parse_metadata - parse metadata into the
TOC structure
-------------------------------------------------*/
chd_error cdrom_parse_metadata(chd_file *chd, cdrom_toc *toc)
{
- static UINT32 oldmetadata[CD_METADATA_WORDS], *mrp;
- const chd_header *header = chd_get_header(chd);
- UINT32 hunksectors = header->hunkbytes / CD_FRAME_SIZE;
- char metadata[512];
+ astring metadata;
chd_error err;
int i;
/* start with no tracks */
for (toc->numtrks = 0; toc->numtrks < CD_MAX_TRACKS; toc->numtrks++)
{
- int tracknum = -1, frames = 0, hunks, pregap, postgap;
+ int tracknum = -1, frames = 0, pregap, postgap;
char type[16], subtype[16], pgtype[16], pgsub[16];
cdrom_track_info *track;
pregap = postgap = 0;
/* fetch the metadata for this track */
- err = chd_get_metadata(chd, CDROM_TRACK_METADATA_TAG, toc->numtrks, metadata, sizeof(metadata), NULL, NULL, NULL);
+ err = chd->read_metadata(CDROM_TRACK_METADATA_TAG, toc->numtrks, metadata);
if (err == CHDERR_NONE)
{
/* parse the metadata */
@@ -791,7 +722,7 @@ chd_error cdrom_parse_metadata(chd_file *chd, cdrom_toc *toc)
}
else
{
- err = chd_get_metadata(chd, CDROM_TRACK_METADATA2_TAG, toc->numtrks, metadata, sizeof(metadata), NULL, NULL, NULL);
+ err = chd->read_metadata(CDROM_TRACK_METADATA2_TAG, toc->numtrks, metadata);
if (err != CHDERR_NONE)
break;
/* parse the metadata */
@@ -818,8 +749,8 @@ chd_error cdrom_parse_metadata(chd_file *chd, cdrom_toc *toc)
/* set the frames and extra frames data */
track->frames = frames;
- hunks = (frames + hunksectors - 1) / hunksectors;
- track->extraframes = hunks * hunksectors - frames;
+ int padded = (frames + CD_TRACK_PADDING - 1) / CD_TRACK_PADDING;
+ track->extraframes = padded * CD_TRACK_PADDING - frames;
/* set the pregap info */
track->pregap = pregap;
@@ -839,12 +770,13 @@ chd_error cdrom_parse_metadata(chd_file *chd, cdrom_toc *toc)
return CHDERR_NONE;
/* look for old-style metadata */
- err = chd_get_metadata(chd, CDROM_OLD_METADATA_TAG, 0, oldmetadata, sizeof(oldmetadata), NULL, NULL, NULL);
+ dynamic_buffer oldmetadata;
+ err = chd->read_metadata(CDROM_OLD_METADATA_TAG, 0, oldmetadata);
if (err != CHDERR_NONE)
return err;
/* reconstruct the TOC from it */
- mrp = &oldmetadata[0];
+ UINT32 *mrp = reinterpret_cast<UINT32 *>(&oldmetadata[0]);
toc->numtrks = *mrp++;
for (i = 0; i < CD_MAX_TRACKS; i++)
@@ -855,6 +787,12 @@ chd_error cdrom_parse_metadata(chd_file *chd, cdrom_toc *toc)
toc->tracks[i].subsize = *mrp++;
toc->tracks[i].frames = *mrp++;
toc->tracks[i].extraframes = *mrp++;
+ toc->tracks[i].pregap = 0;
+ toc->tracks[i].postgap = 0;
+ toc->tracks[i].pgtype = 0;
+ toc->tracks[i].pgsub = 0;
+ toc->tracks[i].pgdatasize = 0;
+ toc->tracks[i].pgsubsize = 0;
}
/* TODO: I don't know why sometimes the data is one endian and sometimes another */
@@ -888,13 +826,13 @@ chd_error cdrom_write_metadata(chd_file *chd, const cdrom_toc *toc)
/* write the metadata */
for (i = 0; i < toc->numtrks; i++)
{
- char metadata[512];
- sprintf(metadata, CDROM_TRACK_METADATA2_FORMAT, i + 1, cdrom_get_type_string(toc->tracks[i].trktype),
+ astring metadata;
+ metadata.format(CDROM_TRACK_METADATA2_FORMAT, i + 1, cdrom_get_type_string(toc->tracks[i].trktype),
cdrom_get_subtype_string(toc->tracks[i].subtype), toc->tracks[i].frames, toc->tracks[i].pregap,
cdrom_get_type_string(toc->tracks[i].pgtype), cdrom_get_subtype_string(toc->tracks[i].pgsub),
toc->tracks[i].postgap);
- err = chd_set_metadata(chd, CDROM_TRACK_METADATA2_TAG, i, metadata, strlen(metadata) + 1, CHD_MDFLAGS_CHECKSUM);
+ err = chd->write_metadata(CDROM_TRACK_METADATA2_TAG, i, metadata);
if (err != CHDERR_NONE)
return err;
}
diff --git a/src/lib/util/cdrom.h b/src/lib/util/cdrom.h
index 3ff7c9f063c..02f032a699c 100644
--- a/src/lib/util/cdrom.h
+++ b/src/lib/util/cdrom.h
@@ -51,6 +51,9 @@
CONSTANTS
***************************************************************************/
+// tracks are padded to a multiple of this many frames
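+// (for example, a 1669-frame track is stored as 1672 frames, i.e. 3 extra frames of padding)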
+const UINT32 CD_TRACK_PADDING = 4;
+
#define CD_MAX_TRACKS (99) /* AFAIK the theoretical limit */
#define CD_MAX_SECTOR_DATA (2352)
#define CD_MAX_SUBCODE_DATA (96)
@@ -90,8 +93,7 @@ enum
typedef struct _cdrom_file cdrom_file;
-typedef struct _cdrom_track_info cdrom_track_info;
-struct _cdrom_track_info
+struct cdrom_track_info
{
/* fields used by CHDMAN and in MAME */
UINT32 trktype; /* track type */
@@ -113,8 +115,7 @@ struct _cdrom_track_info
};
-typedef struct _cdrom_toc cdrom_toc;
-struct _cdrom_toc
+struct cdrom_toc
{
UINT32 numtrks; /* number of tracks */
cdrom_track_info tracks[CD_MAX_TRACKS];
diff --git a/src/lib/util/chd.c b/src/lib/util/chd.c
index 3f3424acb3c..dfb870f849c 100644
--- a/src/lib/util/chd.c
+++ b/src/lib/util/chd.c
@@ -38,3532 +38,2716 @@
***************************************************************************/
#include "chd.h"
-#include "avcomp.h"
-#include "md5.h"
-#include "sha1.h"
+#include "avhuff.h"
+#include "hashing.h"
+#include "flac.h"
#include "cdrom.h"
+#include "coretmpl.h"
#include <zlib.h>
#include <time.h>
#include <stddef.h>
#include <stdlib.h>
#include <new>
-#include "../../lib/libflac/include/flac/all.h"
-
-
-/***************************************************************************
- DEBUGGING
-***************************************************************************/
-
-#define PRINTF_MAX_HUNK (0)
-
-
-
-/***************************************************************************
- CONSTANTS
-***************************************************************************/
-
-#define MAP_STACK_ENTRIES 512 /* max number of entries to use on the stack */
-#define MAP_ENTRY_SIZE 16 /* V3 and later */
-#define OLD_MAP_ENTRY_SIZE 8 /* V1-V2 */
-#define METADATA_HEADER_SIZE 16 /* metadata header size */
-#define CRCMAP_HASH_SIZE 4095 /* number of CRC hashtable entries */
-
-#define MAP_ENTRY_FLAG_TYPE_MASK 0x0f /* what type of hunk */
-#define MAP_ENTRY_FLAG_NO_CRC 0x10 /* no CRC is present */
-#define MAP_ENTRY_FLAG_HALF_HUNK 0x20 /* only the first half of this hunk is included in the SHA1 calculation (workaround for CD track padding issue) */
-
-#define MAP_ENTRY_TYPE_INVALID 0x00 /* invalid type */
-#define MAP_ENTRY_TYPE_COMPRESSED 0x01 /* standard compression */
-#define MAP_ENTRY_TYPE_UNCOMPRESSED 0x02 /* uncompressed data */
-#define MAP_ENTRY_TYPE_MINI 0x03 /* mini: use offset as raw data */
-#define MAP_ENTRY_TYPE_SELF_HUNK 0x04 /* same as another hunk in this file */
-#define MAP_ENTRY_TYPE_PARENT_HUNK 0x05 /* same as a hunk in the parent file */
-#define MAP_ENTRY_TYPE_2ND_COMPRESSED 0x06 /* compressed with secondary algorithm (usually FLAC CDDA) */
-
-#define CHD_V1_SECTOR_SIZE 512 /* size of a "sector" in the V1 header */
-
-#define COOKIE_VALUE 0xbaadf00d
-#define MAX_ZLIB_ALLOCS 64
-
-#define END_OF_LIST_COOKIE "EndOfListCookie"
-
-#define NO_MATCH (~0)
+//**************************************************************************
+// CONSTANTS
+//**************************************************************************
+// standard metadata formats
+const char *HARD_DISK_METADATA_FORMAT = "CYLS:%d,HEADS:%d,SECS:%d,BPS:%d";
+const char *CDROM_TRACK_METADATA_FORMAT = "TRACK:%d TYPE:%s SUBTYPE:%s FRAMES:%d";
+const char *CDROM_TRACK_METADATA2_FORMAT = "TRACK:%d TYPE:%s SUBTYPE:%s FRAMES:%d PREGAP:%d PGTYPE:%s PGSUB:%s POSTGAP:%d";
+const char *AV_METADATA_FORMAT = "FPS:%d.%06d WIDTH:%d HEIGHT:%d INTERLACED:%d CHANNELS:%d SAMPLERATE:%d";
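These printf-style templates are the text form of the standard metadata entries exchanged by chdman and the hard disk/CD-ROM layers. A small self-contained sketch of writing and re-reading the hard disk variant; the geometry values are invented for illustration:

#include <cstdio>

int main()
{
    // format a hypothetical geometry using HARD_DISK_METADATA_FORMAT's template
    char metadata[256];
    snprintf(metadata, sizeof(metadata), "CYLS:%d,HEADS:%d,SECS:%d,BPS:%d", 615, 4, 17, 512);

    // parse it back with the same template
    int cyls, heads, secs, bps;
    sscanf(metadata, "CYLS:%d,HEADS:%d,SECS:%d,BPS:%d", &cyls, &heads, &secs, &bps);
    return (cyls == 615) ? 0 : 1;
}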
-/***************************************************************************
- MACROS
-***************************************************************************/
-
-#define EARLY_EXIT(x) do { (void)(x); goto cleanup; } while (0)
-
-
-
-/***************************************************************************
- TYPE DEFINITIONS
-***************************************************************************/
-
-/* interface to a codec */
-typedef struct _codec_interface codec_interface;
-struct _codec_interface
-{
- UINT32 compression; /* type of compression */
- const char *compname; /* name of the algorithm */
- UINT8 lossy; /* is this a lossy algorithm? */
- chd_error (*init)(chd_file *chd); /* codec initialize */
- void (*free)(chd_file *chd); /* codec free */
- chd_error (*compress)(chd_file *chd, const void *src, UINT32 *complen); /* compress data */
- chd_error (*decompress)(chd_file *chd, UINT32 complen, void *dst); /* decompress data */
- chd_error (*config)(chd_file *chd, int param, void *config); /* configure */
-
- chd_error (*secondary_compress)(chd_file *chd, const void *src, UINT32 *complen); /* secondary compress data */
- chd_error (*secondary_decompress)(chd_file *chd, UINT32 complen, void *dst); /* secondary decompress data */
-
-};
-
-
-/* a single map entry */
-typedef struct _map_entry map_entry;
-struct _map_entry
-{
- UINT64 offset; /* offset within the file of the data */
- UINT32 crc; /* 32-bit CRC of the data */
- UINT32 length; /* length of the data */
- UINT8 flags; /* misc flags */
-};
+static const UINT32 METADATA_HEADER_SIZE = 16; // metadata header size
+static const UINT8 V34_MAP_ENTRY_FLAG_TYPE_MASK = 0x0f; // what type of hunk
+static const UINT8 V34_MAP_ENTRY_FLAG_NO_CRC = 0x10; // no CRC is present
-/* simple linked-list of hunks used for our CRC map */
-typedef struct _crcmap_entry crcmap_entry;
-struct _crcmap_entry
-{
- UINT32 hunknum; /* hunk number */
- crcmap_entry * next; /* next entry in list */
-};
-/* a single metadata entry */
-typedef struct _metadata_entry metadata_entry;
-struct _metadata_entry
+// V3-V4 entry types
+enum
{
- UINT64 offset; /* offset within the file of the header */
- UINT64 next; /* offset within the file of the next header */
- UINT64 prev; /* offset within the file of the previous header */
- UINT32 length; /* length of the metadata */
- UINT32 metatag; /* metadata tag */
- UINT8 flags; /* flag bits */
+ V34_MAP_ENTRY_TYPE_INVALID = 0, // invalid type
+ V34_MAP_ENTRY_TYPE_COMPRESSED = 1, // standard compression
+ V34_MAP_ENTRY_TYPE_UNCOMPRESSED = 2, // uncompressed data
+ V34_MAP_ENTRY_TYPE_MINI = 3, // mini: use offset as raw data
+ V34_MAP_ENTRY_TYPE_SELF_HUNK = 4, // same as another hunk in this file
+ V34_MAP_ENTRY_TYPE_PARENT_HUNK = 5, // same as a hunk in the parent file
+ V34_MAP_ENTRY_TYPE_2ND_COMPRESSED = 6 // compressed with secondary algorithm (usually FLAC CDDA)
};
-
-/* internal representation of an open CHD file */
-struct _chd_file
-{
- UINT32 cookie; /* cookie, should equal COOKIE_VALUE */
-
- core_file * file; /* handle to the open core file */
- UINT8 owns_file; /* flag indicating if this file should be closed on chd_close() */
- chd_header header; /* header, extracted from file */
-
- chd_file * parent; /* pointer to parent file, or NULL */
-
- map_entry * map; /* array of map entries */
-
- UINT8 * cache; /* hunk cache pointer */
- UINT32 cachehunk; /* index of currently cached hunk */
-
- UINT8 * compare; /* hunk compare pointer */
- UINT32 comparehunk; /* index of current compare data */
-
- UINT8 * compressed; /* pointer to buffer for compressed data */
- const codec_interface * codecintf; /* interface to the codec */
- void * codecdata; /* opaque pointer to codec data */
-
- crcmap_entry * crcmap; /* CRC map entries */
- crcmap_entry * crcfree; /* free list CRC entries */
- crcmap_entry ** crctable; /* table of CRC entries */
-
- UINT32 maxhunk; /* maximum hunk accessed */
-
- UINT8 compressing; /* are we compressing? */
- struct MD5Context compmd5; /* running MD5 during compression */
- struct sha1_ctx compsha1; /* running SHA1 during compression */
- UINT32 comphunk; /* next hunk we will compress */
-
- UINT8 verifying; /* are we verifying? */
- struct MD5Context vermd5; /* running MD5 during verification */
- struct sha1_ctx versha1; /* running SHA1 during verification */
- UINT32 verhunk; /* next hunk we will verify */
-
- osd_work_queue * workqueue; /* pointer to work queue for async operations */
- osd_work_item * workitem; /* active work item, or NULL if none */
- UINT32 async_hunknum; /* hunk index for asynchronous operations */
- void * async_buffer; /* buffer pointer for asynchronous operations */
+// V5 compression types
+enum
+{
+ // these types are live when running
+ COMPRESSION_TYPE_0 = 0, // codec #0
+ COMPRESSION_TYPE_1 = 1, // codec #1
+ COMPRESSION_TYPE_2 = 2, // codec #2
+ COMPRESSION_TYPE_3 = 3, // codec #3
+ COMPRESSION_NONE = 4, // no compression; implicit length = hunkbytes
+ COMPRESSION_SELF = 5, // same as another block in this chd
+ COMPRESSION_PARENT = 6, // same as a hunk's worth of units in the parent chd
+
+ // these additional pseudo-types are used for compressed encodings:
+ COMPRESSION_RLE_SMALL, // start of small RLE run (4-bit length)
+ COMPRESSION_RLE_LARGE, // start of large RLE run (8-bit length)
+ COMPRESSION_SELF_0, // same as the last COMPRESSION_SELF block
+ COMPRESSION_SELF_1, // same as the last COMPRESSION_SELF block + 1
+ COMPRESSION_PARENT_SELF, // same block in the parent
+ COMPRESSION_PARENT_0, // same as the last COMPRESSION_PARENT block
+ COMPRESSION_PARENT_1 // same as the last COMPRESSION_PARENT block + 1
};
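In a compressed V5 CHD each map entry pairs one of these type codes with a packed length/offset/CRC, decoded later in read_hunk() as 1 byte of type, 3 bytes of length, 6 bytes of offset and 2 bytes of CRC-16, all big-endian. A standalone sketch of that unpacking with plain C++ types (the struct and function names are illustrative only):

#include <cstdint>

// big-endian read, mirroring chd_file::be_read() below
static uint64_t be_read(const uint8_t *base, int numbytes)
{
    uint64_t result = 0;
    while (numbytes--)
        result = (result << 8) | *base++;
    return result;
}

struct v5_map_entry { uint8_t type; uint32_t length; uint64_t offset; uint16_t crc16; };

// unpack one 12-byte compressed map entry (field widths as used by read_hunk)
static v5_map_entry parse_v5_map_entry(const uint8_t rawmap[12])
{
    v5_map_entry entry;
    entry.type   = rawmap[0];                          // COMPRESSION_TYPE_0..3, NONE, SELF or PARENT
    entry.length = uint32_t(be_read(&rawmap[1], 3));   // compressed length in bytes
    entry.offset = be_read(&rawmap[4], 6);             // file offset (hunk/unit index for SELF/PARENT)
    entry.crc16  = uint16_t(be_read(&rawmap[10], 2));  // CRC-16 used to verify the hunk
    return entry;
}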
-/* codec-private data for the ZLIB codec */
-typedef struct _zlib_codec_data zlib_codec_data;
-struct _zlib_codec_data
-{
- z_stream inflater;
- z_stream deflater;
- UINT32 * allocptr[MAX_ZLIB_ALLOCS];
-};
-
-/* codec-private data for the A/V codec */
-struct av_codec_data
-{
- avcomp_state * compstate;
- av_codec_compress_config compress;
- av_codec_decompress_config decompress;
-};
+//**************************************************************************
+// TYPE DEFINITIONS
+//**************************************************************************
+// ======================> metadata_entry
-/* a single metadata hash entry */
-typedef struct _metadata_hash metadata_hash;
-struct _metadata_hash
+// description of where a metadata entry lives within the file
+struct chd_file::metadata_entry
{
- UINT8 tag[4]; /* tag of the metadata in big-endian */
- UINT8 sha1[CHD_SHA1_BYTES]; /* hash */
+ UINT64 offset; // offset within the file of the header
+ UINT64 next; // offset within the file of the next header
+ UINT64 prev; // offset within the file of the previous header
+ UINT32 length; // length of the metadata
+ UINT32 metatag; // metadata tag
+ UINT8 flags; // flag bits
};
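The 16-byte on-disk header that metadata_entry describes packs a big-endian tag, a combined flags/length word and the offset of the next entry (see the raw_meta_header assembly in the old chd_set_metadata further down). A sketch of unpacking it with standard types (names are illustrative):

#include <cstdint>

struct raw_metadata_header
{
    uint32_t metatag;   // 4-byte tag
    uint8_t  flags;     // flag bits (top 8 bits of the second word)
    uint32_t length;    // metadata length (low 24 bits of the second word)
    uint64_t next;      // offset of the next metadata header, 0 if none
};

// parse a 16-byte buffer laid out per METADATA_HEADER_SIZE above
static raw_metadata_header parse_metadata_header(const uint8_t raw[16])
{
    auto be32 = [](const uint8_t *p) { return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) | (uint32_t(p[2]) << 8) | uint32_t(p[3]); };
    raw_metadata_header hdr;
    hdr.metatag = be32(&raw[0]);
    uint32_t flags_length = be32(&raw[4]);
    hdr.flags  = flags_length >> 24;
    hdr.length = flags_length & 0x00ffffff;
    hdr.next   = (uint64_t(be32(&raw[8])) << 32) | be32(&raw[12]);
    return hdr;
}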
+// ======================> metadata_hash
-/***************************************************************************
- GLOBAL VARIABLES
-***************************************************************************/
-
-static const UINT8 nullmd5[CHD_MD5_BYTES] = { 0 };
-static const UINT8 nullsha1[CHD_SHA1_BYTES] = { 0 };
-
-
-
-/***************************************************************************
- PROTOTYPES
-***************************************************************************/
-
-/* internal async operations */
-static void *async_read_callback(void *param, int threadid);
-static void *async_write_callback(void *param, int threadid);
-
-/* internal header operations */
-static chd_error header_validate(const chd_header *header);
-static chd_error header_read(core_file *file, chd_header *header);
-static chd_error header_write(core_file *file, const chd_header *header);
-
-/* internal hunk read/write */
-static chd_error hunk_read_into_cache(chd_file *chd, UINT32 hunknum);
-static chd_error hunk_read_into_memory(chd_file *chd, UINT32 hunknum, UINT8 *dest);
-static chd_error hunk_write_from_memory(chd_file *chd, UINT32 hunknum, const UINT8 *src, int is_half_hunk = 0);
-
-/* internal map access */
-static chd_error map_write_initial(core_file *file, chd_file *parent, const chd_header *header);
-static chd_error map_read(chd_file *chd);
-
-/* internal CRC map access */
-static void crcmap_init(chd_file *chd, int prepopulate);
-static void crcmap_add_entry(chd_file *chd, UINT32 hunknum);
-static UINT32 crcmap_find_hunk(chd_file *chd, UINT32 hunknum, UINT32 crc, const UINT8 *rawdata);
-
-/* metadata management */
-static chd_error metadata_find_entry(chd_file *chd, UINT32 metatag, UINT32 metaindex, metadata_entry *metaentry);
-static chd_error metadata_set_previous_next(chd_file *chd, UINT64 prevoffset, UINT64 nextoffset);
-static chd_error metadata_set_length(chd_file *chd, UINT64 offset, UINT32 length);
-static chd_error metadata_compute_hash(chd_file *chd, const UINT8 *rawsha1, UINT8 *finalsha1);
-static int CLIB_DECL metadata_hash_compare(const void *elem1, const void *elem2);
-
-/* zlib compression codec */
-static chd_error zlib_codec_init(chd_file *chd);
-static void zlib_codec_free(chd_file *chd);
-static chd_error zlib_codec_compress(chd_file *chd, const void *src, UINT32 *length);
-static chd_error zlib_codec_decompress(chd_file *chd, UINT32 srclength, void *dest);
-static voidpf zlib_fast_alloc(voidpf opaque, uInt items, uInt size);
-static void zlib_fast_free(voidpf opaque, voidpf address);
-
-/* flac compression codec */
-static chd_error flac_codec_compress(chd_file *chd, const void *src, UINT32 *length, int swap);
-static chd_error flac_codec_compress_normal(chd_file *chd, const void *src, UINT32 *length);
-static chd_error flac_codec_decompress(chd_file *chd, UINT32 srclength, void *dest);
-
-
-/* A/V compression codec */
-static chd_error av_codec_init(chd_file *chd);
-static void av_codec_free(chd_file *chd);
-static chd_error av_codec_compress(chd_file *chd, const void *src, UINT32 *length);
-static chd_error av_codec_decompress(chd_file *chd, UINT32 srclength, void *dest);
-static chd_error av_codec_config(chd_file *chd, int param, void *config);
-static chd_error av_codec_postinit(chd_file *chd);
-
-
-
-/***************************************************************************
- CODEC INTERFACES
-***************************************************************************/
-
-static const codec_interface codec_interfaces[] =
+struct chd_file::metadata_hash
{
- /* "none" or no compression */
- {
- CHDCOMPRESSION_NONE,
- "none",
- FALSE,
- NULL,
- NULL,
- NULL,
- NULL,
- NULL,
- NULL,
- NULL
- },
-
- /* standard zlib compression */
- {
- CHDCOMPRESSION_ZLIB,
- "zlib",
- FALSE,
- zlib_codec_init,
- zlib_codec_free,
- zlib_codec_compress,
- zlib_codec_decompress,
- NULL,
- NULL,
- NULL
- },
-
- /* zlib+ compression */
- {
- CHDCOMPRESSION_ZLIB_PLUS,
- "zlib+",
- FALSE,
- zlib_codec_init,
- zlib_codec_free,
- zlib_codec_compress,
- zlib_codec_decompress,
- NULL,
- NULL,
- NULL
- },
-
- /* a/v compression */
- {
- CHDCOMPRESSION_AV,
- "A/V",
- TRUE,
- av_codec_init,
- av_codec_free,
- av_codec_compress,
- av_codec_decompress,
- av_codec_config,
- NULL,
- NULL
- },
-
- /* zlib+ with FLAC compression */
- {
- CHDCOMPRESSION_ZLIB_PLUS_WITH_FLAC,
- "zlib+ with FLAC",
- FALSE,
- zlib_codec_init,
- zlib_codec_free,
- zlib_codec_compress,
- zlib_codec_decompress,
- NULL,
- flac_codec_compress_normal,
- flac_codec_decompress,
- },
+ UINT8 tag[4]; // tag of the metadata in big-endian
+ sha1_t sha1; // hash data
};
-/***************************************************************************
- INLINE FUNCTIONS
-***************************************************************************/
+//**************************************************************************
+// INLINE FUNCTIONS
+//**************************************************************************
-/*-------------------------------------------------
- get_bigendian_uint64 - fetch a UINT64 from
- the data stream in bigendian order
--------------------------------------------------*/
+//-------------------------------------------------
+// be_read - extract a big-endian number from
+// a byte buffer
+//-------------------------------------------------
-INLINE UINT64 get_bigendian_uint64(const UINT8 *base)
+inline UINT64 chd_file::be_read(const UINT8 *base, int numbytes)
{
- return ((UINT64)base[0] << 56) | ((UINT64)base[1] << 48) | ((UINT64)base[2] << 40) | ((UINT64)base[3] << 32) |
- ((UINT64)base[4] << 24) | ((UINT64)base[5] << 16) | ((UINT64)base[6] << 8) | (UINT64)base[7];
+ UINT64 result = 0;
+ while (numbytes--)
+ result = (result << 8) | *base++;
+ return result;
}
-/*-------------------------------------------------
- put_bigendian_uint64 - write a UINT64 to
- the data stream in bigendian order
--------------------------------------------------*/
+//-------------------------------------------------
+// be_write - write a big-endian number to a byte
+// buffer
+//-------------------------------------------------
-INLINE void put_bigendian_uint64(UINT8 *base, UINT64 value)
+inline void chd_file::be_write(UINT8 *base, UINT64 value, int numbytes)
{
- base[0] = value >> 56;
- base[1] = value >> 48;
- base[2] = value >> 40;
- base[3] = value >> 32;
- base[4] = value >> 24;
- base[5] = value >> 16;
- base[6] = value >> 8;
- base[7] = value;
+ base += numbytes;
+ while (numbytes--)
+ {
+ *--base = value;
+ value >>= 8;
+ }
}
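be_read() and be_write() are the only byte-order primitives the new header and map code rely on; a quick self-contained round-trip sketch with standard types (the 6-byte width and the value are arbitrary):

#include <cassert>
#include <cstdint>

int main()
{
    uint8_t buffer[8] = { 0 };
    const int numbytes = 6;

    // write 0x12345678 big-endian into 6 bytes, same scheme as chd_file::be_write
    uint64_t value = 0x12345678;
    for (int i = numbytes - 1; i >= 0; i--) { buffer[i] = uint8_t(value); value >>= 8; }

    // read it back, same scheme as chd_file::be_read
    uint64_t readback = 0;
    for (int i = 0; i < numbytes; i++) readback = (readback << 8) | buffer[i];

    assert(readback == 0x12345678);
    return 0;
}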
-/*-------------------------------------------------
- get_bigendian_uint32 - fetch a UINT32 from
- the data stream in bigendian order
--------------------------------------------------*/
+//-------------------------------------------------
+// be_read_sha1 - fetch a sha1_t from a data
+// stream in bigendian order
+//-------------------------------------------------
-INLINE UINT32 get_bigendian_uint32(const UINT8 *base)
+inline sha1_t chd_file::be_read_sha1(const UINT8 *base)
{
- return (base[0] << 24) | (base[1] << 16) | (base[2] << 8) | base[3];
+ sha1_t result;
+ memcpy(&result.m_raw[0], base, sizeof(result.m_raw));
+ return result;
}
-/*-------------------------------------------------
- put_bigendian_uint32 - write a UINT32 to
- the data stream in bigendian order
--------------------------------------------------*/
+//-------------------------------------------------
+// be_write_sha1 - write a sha1_t to a data
+// stream in bigendian order
+//-------------------------------------------------
-INLINE void put_bigendian_uint32(UINT8 *base, UINT32 value)
+inline void chd_file::be_write_sha1(UINT8 *base, sha1_t value)
{
- base[0] = value >> 24;
- base[1] = value >> 16;
- base[2] = value >> 8;
- base[3] = value;
+ memcpy(base, &value.m_raw[0], sizeof(value.m_raw));
}
-/*-------------------------------------------------
- get_bigendian_uint16 - fetch a UINT16 from
- the data stream in bigendian order
--------------------------------------------------*/
+//-------------------------------------------------
+// file_read - read from the file at the given
+// offset; on failure throw an error
+//-------------------------------------------------
-INLINE UINT16 get_bigendian_uint16(const UINT8 *base)
+inline void chd_file::file_read(UINT64 offset, void *dest, UINT32 length)
{
- return (base[0] << 8) | base[1];
+ // no file = failure
+ if (m_file == NULL)
+ throw CHDERR_NOT_OPEN;
+
+ // seek and read
+ core_fseek(m_file, offset, SEEK_SET);
+ UINT32 count = core_fread(m_file, dest, length);
+ if (count != length)
+ throw CHDERR_READ_ERROR;
}
-/*-------------------------------------------------
- put_bigendian_uint16 - write a UINT16 to
- the data stream in bigendian order
--------------------------------------------------*/
+//-------------------------------------------------
+// file_write - write to the file at the given
+// offset; on failure throw an error
+//-------------------------------------------------
-INLINE void put_bigendian_uint16(UINT8 *base, UINT16 value)
+inline void chd_file::file_write(UINT64 offset, const void *source, UINT32 length)
{
- base[0] = value >> 8;
- base[1] = value;
+ // no file = failure
+ if (m_file == NULL)
+ throw CHDERR_NOT_OPEN;
+
+ // seek and write
+ core_fseek(m_file, offset, SEEK_SET);
+ UINT32 count = core_fwrite(m_file, source, length);
+ if (count != length)
+ throw CHDERR_WRITE_ERROR;
}
-/*-------------------------------------------------
- map_extract - extract a single map
- entry from the datastream
--------------------------------------------------*/
+//-------------------------------------------------
+// file_append - append to the file at the given
+// offset, ensuring we start at the given
+// alignment; on failure throw an error
+//-------------------------------------------------
-INLINE void map_extract(const UINT8 *base, map_entry *entry)
+inline UINT64 chd_file::file_append(const void *source, UINT32 length, UINT32 alignment)
{
- entry->offset = get_bigendian_uint64(&base[0]);
- entry->crc = get_bigendian_uint32(&base[8]);
- entry->length = get_bigendian_uint16(&base[12]) | (base[14] << 16);
- entry->flags = base[15];
-}
-
+ // no file = failure
+ if (m_file == NULL)
+ throw CHDERR_NOT_OPEN;
-/*-------------------------------------------------
- map_assemble - write a single map
- entry to the datastream
--------------------------------------------------*/
+ // seek to the end and align if necessary
+ core_fseek(m_file, 0, SEEK_END);
+ if (alignment != 0)
+ {
+ UINT64 offset = core_ftell(m_file);
+ UINT32 delta = offset % alignment;
+ if (delta != 0)
+ {
+ // pad with 0's from a local buffer
+ UINT8 buffer[1024];
+ memset(buffer, 0, sizeof(buffer));
+ delta = alignment - delta;
+ while (delta != 0)
+ {
+ UINT32 bytes_to_write = MIN(sizeof(buffer), delta);
+ UINT32 count = core_fwrite(m_file, buffer, bytes_to_write);
+ if (count != bytes_to_write)
+ throw CHDERR_WRITE_ERROR;
+ delta -= bytes_to_write;
+ }
+ }
+ }
-INLINE void map_assemble(UINT8 *base, map_entry *entry)
-{
- put_bigendian_uint64(&base[0], entry->offset);
- put_bigendian_uint32(&base[8], entry->crc);
- put_bigendian_uint16(&base[12], entry->length);
- base[14] = entry->length >> 16;
- base[15] = entry->flags;
+ // write the real data
+ UINT64 offset = core_ftell(m_file);
+ UINT32 count = core_fwrite(m_file, source, length);
+ if (count != length)
+ throw CHDERR_WRITE_ERROR;
+ return offset;
}
-/*-------------------------------------------------
- map_extract_old - extract a single map
- entry in old format from the datastream
--------------------------------------------------*/
+//-------------------------------------------------
+// bits_for_value - return the number of bits
+// necessary to represent all numbers 0..value
+//-------------------------------------------------
-INLINE void map_extract_old(const UINT8 *base, map_entry *entry, UINT32 hunkbytes)
+inline UINT8 chd_file::bits_for_value(UINT64 value)
{
- entry->offset = get_bigendian_uint64(&base[0]);
- entry->crc = 0;
- entry->length = entry->offset >> 44;
- entry->flags = MAP_ENTRY_FLAG_NO_CRC | ((entry->length == hunkbytes) ? MAP_ENTRY_TYPE_UNCOMPRESSED : MAP_ENTRY_TYPE_COMPRESSED);
-#ifdef __MWERKS__
- entry->offset = entry->offset & 0x00000FFFFFFFFFFFLL;
-#else
- entry->offset = (entry->offset << 20) >> 20;
-#endif
+ UINT8 result = 0;
+ while (value != 0)
+ value >>= 1, result++;
+ return result;
}
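bits_for_value() simply counts how many bits are needed to represent every number in 0..value; a tiny self-check sketch:

#include <cassert>
#include <cstdint>

int main()
{
    // same loop as chd_file::bits_for_value above
    auto bits_for_value = [](uint64_t value) { int result = 0; while (value != 0) { value >>= 1; result++; } return result; };
    assert(bits_for_value(0) == 0);
    assert(bits_for_value(1) == 1);
    assert(bits_for_value(255) == 8);
    assert(bits_for_value(256) == 9);
    return 0;
}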
-/*-------------------------------------------------
- queue_async_operation - queue a new work
- item
--------------------------------------------------*/
-
-INLINE int queue_async_operation(chd_file *chd, osd_work_callback callback)
-{
- /* if no queue yet, create one on the fly */
- if (chd->workqueue == NULL)
- {
- chd->workqueue = osd_work_queue_alloc(WORK_QUEUE_FLAG_IO);
- if (chd->workqueue == NULL)
- return FALSE;
- }
- /* make sure we cleared out the previous item */
- if (chd->workitem != NULL)
- return FALSE;
+//**************************************************************************
+// CHD FILE MANAGEMENT
+//**************************************************************************
- /* create a new work item to run the job */
- chd->workitem = osd_work_item_queue(chd->workqueue, callback, chd, 0);
- if (chd->workitem == NULL)
- return FALSE;
+//-------------------------------------------------
+// chd_file - constructor
+//-------------------------------------------------
- return TRUE;
+chd_file::chd_file()
+ : m_file(NULL)
+{
+ // reset state
+ memset(m_decompressor, 0, sizeof(m_decompressor));
+ close();
}
-/*-------------------------------------------------
- wait_for_pending_async - wait for any pending
- async
--------------------------------------------------*/
+//-------------------------------------------------
+// ~chd_file - destructor
+//-------------------------------------------------
-INLINE void wait_for_pending_async(chd_file *chd)
+chd_file::~chd_file()
{
- /* if something is pending, wait for it */
- if (chd->workitem != NULL)
- {
- /* 10 seconds should be enough for anything! */
- int wait_successful = osd_work_item_wait(chd->workitem, 10 * osd_ticks_per_second());
- if (!wait_successful)
- osd_break_into_debugger("Pending async operation never completed!");
- }
+ // close any open files
+ close();
}
+//-------------------------------------------------
+// sha1 - return our SHA1 value
+//-------------------------------------------------
-/***************************************************************************
- CHD FILE MANAGEMENT
-***************************************************************************/
-
-/*-------------------------------------------------
- chd_create_file - create a new CHD file
--------------------------------------------------*/
-
-chd_error chd_create_file(core_file *file, UINT64 logicalbytes, UINT32 hunkbytes, UINT32 compression, chd_file *parent)
+sha1_t chd_file::sha1()
{
- chd_file *newchd = NULL;
- chd_header header;
- chd_error err;
- int intfnum;
-
- /* verify parameters */
- if (file == NULL)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
- if (parent == NULL && (logicalbytes == 0 || hunkbytes == 0))
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
-
- /* verify the compression type */
- for (intfnum = 0; intfnum < ARRAY_LENGTH(codec_interfaces); intfnum++)
- if (codec_interfaces[intfnum].compression == compression)
- break;
- if (intfnum == ARRAY_LENGTH(codec_interfaces))
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
-
- /* if we have a parent, the sizes come from there */
- if (parent != NULL)
+ try
{
- logicalbytes = parent->header.logicalbytes;
- hunkbytes = parent->header.hunkbytes;
+ // read the big-endian version
+ UINT8 rawbuf[sizeof(sha1_t)];
+ file_read(m_sha1_offset, rawbuf, sizeof(rawbuf));
+ return be_read_sha1(rawbuf);
}
-
- /* if we have a parent, it must be V3 or later */
- if (parent != NULL && parent->header.version < 3)
- EARLY_EXIT(err = CHDERR_UNSUPPORTED_VERSION);
-
- /* build the header */
- memset(&header, 0, sizeof(header));
- header.length = CHD_V4_HEADER_SIZE;
- header.version = CHD_HEADER_VERSION;
- header.flags = CHDFLAGS_IS_WRITEABLE;
- header.compression = compression;
- header.hunkbytes = hunkbytes;
- header.totalhunks = (logicalbytes + hunkbytes - 1) / hunkbytes;
- header.logicalbytes = logicalbytes;
-
- /* tweaks if there is a parent */
- if (parent != NULL)
+ catch (chd_error &)
{
- header.flags |= CHDFLAGS_HAS_PARENT;
- memcpy(&header.parentmd5[0], &parent->header.md5[0], sizeof(header.parentmd5));
- memcpy(&header.parentsha1[0], &parent->header.sha1[0], sizeof(header.parentsha1));
+ // on failure, return NULL
+ return sha1_t::null;
}
-
- /* validate it */
- err = header_validate(&header);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* write the resulting header */
- err = header_write(file, &header);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* create an initial map */
- err = map_write_initial(file, parent, &header);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* if we have a parent, clone the metadata */
- if (parent != NULL)
- {
- /* open the new CHD via the standard mechanism */
- err = chd_open_file(file, CHD_OPEN_READWRITE, parent, &newchd);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* close the metadata */
- err = chd_clone_metadata(parent, newchd);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* close the CHD */
- chd_close(newchd);
- }
-
- return CHDERR_NONE;
-
-cleanup:
- if (newchd != NULL)
- chd_close(newchd);
- return err;
}
-/*-------------------------------------------------
- chd_open_file - open a CHD file for access
--------------------------------------------------*/
+//-------------------------------------------------
+// raw_sha1 - return our raw SHA1 value
+//-------------------------------------------------
-chd_error chd_open_file(core_file *file, int mode, chd_file *parent, chd_file **chd)
+sha1_t chd_file::raw_sha1()
{
- chd_file *newchd = NULL;
- chd_error err;
- int intfnum;
-
- /* verify parameters */
- if (file == NULL)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
-
- /* punt if invalid parent */
- if (parent != NULL && parent->cookie != COOKIE_VALUE)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
-
- /* allocate memory for the final result */
- newchd = (chd_file *)malloc(sizeof(**chd));
- if (newchd == NULL)
- EARLY_EXIT(err = CHDERR_OUT_OF_MEMORY);
- memset(newchd, 0, sizeof(*newchd));
- newchd->cookie = COOKIE_VALUE;
- newchd->parent = parent;
- newchd->file = file;
-
- /* now attempt to read the header */
- err = header_read(newchd->file, &newchd->header);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* validate the header */
- err = header_validate(&newchd->header);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* make sure we don't open a read-only file writeable */
- if (mode == CHD_OPEN_READWRITE && !(newchd->header.flags & CHDFLAGS_IS_WRITEABLE))
- EARLY_EXIT(err = CHDERR_FILE_NOT_WRITEABLE);
-
- /* also, never open an older version writeable */
- if (mode == CHD_OPEN_READWRITE && newchd->header.version < CHD_HEADER_VERSION)
- EARLY_EXIT(err = CHDERR_UNSUPPORTED_VERSION);
-
- /* if we need a parent, make sure we have one */
- if (parent == NULL && (newchd->header.flags & CHDFLAGS_HAS_PARENT))
- EARLY_EXIT(err = CHDERR_REQUIRES_PARENT);
-
- /* make sure we have a valid parent */
- if (parent != NULL)
+ try
{
- /* check MD5 if it isn't empty */
- if (memcmp(nullmd5, newchd->header.parentmd5, sizeof(newchd->header.parentmd5)) != 0 &&
- memcmp(nullmd5, newchd->parent->header.md5, sizeof(newchd->parent->header.md5)) != 0 &&
- memcmp(newchd->parent->header.md5, newchd->header.parentmd5, sizeof(newchd->header.parentmd5)) != 0)
- EARLY_EXIT(err = CHDERR_INVALID_PARENT);
-
- /* check SHA1 if it isn't empty */
- if (memcmp(nullsha1, newchd->header.parentsha1, sizeof(newchd->header.parentsha1)) != 0 &&
- memcmp(nullsha1, newchd->parent->header.sha1, sizeof(newchd->parent->header.sha1)) != 0 &&
- memcmp(newchd->parent->header.sha1, newchd->header.parentsha1, sizeof(newchd->header.parentsha1)) != 0)
- EARLY_EXIT(err = CHDERR_INVALID_PARENT);
+ // determine offset within the file for data-only
+ if (m_rawsha1_offset == 0)
+ throw CHDERR_UNSUPPORTED_VERSION;
+
+ // read the big-endian version
+ UINT8 rawbuf[sizeof(sha1_t)];
+ file_read(m_rawsha1_offset, rawbuf, sizeof(rawbuf));
+ return be_read_sha1(rawbuf);
}
-
- /* now read the hunk map */
- err = map_read(newchd);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* allocate and init the hunk cache */
- newchd->cache = (UINT8 *)malloc(newchd->header.hunkbytes);
- newchd->compare = (UINT8 *)malloc(newchd->header.hunkbytes);
- if (newchd->cache == NULL || newchd->compare == NULL)
- EARLY_EXIT(err = CHDERR_OUT_OF_MEMORY);
- newchd->cachehunk = ~0;
- newchd->comparehunk = ~0;
-
- /* allocate the temporary compressed buffer */
- newchd->compressed = (UINT8 *)malloc(newchd->header.hunkbytes);
- if (newchd->compressed == NULL)
- EARLY_EXIT(err = CHDERR_OUT_OF_MEMORY);
-
- /* find the codec interface */
- for (intfnum = 0; intfnum < ARRAY_LENGTH(codec_interfaces); intfnum++)
- if (codec_interfaces[intfnum].compression == newchd->header.compression)
- {
- newchd->codecintf = &codec_interfaces[intfnum];
- break;
- }
- if (intfnum == ARRAY_LENGTH(codec_interfaces))
- EARLY_EXIT(err = CHDERR_UNSUPPORTED_FORMAT);
-
- /* initialize the codec */
- if (newchd->codecintf->init != NULL)
- err = (*newchd->codecintf->init)(newchd);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* all done */
- *chd = newchd;
- return CHDERR_NONE;
-
-cleanup:
- if (newchd != NULL)
- chd_close(newchd);
- return err;
-}
-
-
-/*-------------------------------------------------
- chd_create - create a CHD file by
- filename
--------------------------------------------------*/
-
-chd_error chd_create(const char *filename, UINT64 logicalbytes, UINT32 hunkbytes, UINT32 compression, chd_file *parent)
-{
- core_file *file = NULL;
- file_error filerr;
- chd_error chderr;
-
- filerr = core_fopen(filename, OPEN_FLAG_READ | OPEN_FLAG_WRITE | OPEN_FLAG_CREATE, &file);
- if (filerr != FILERR_NONE)
+ catch (chd_error &)
{
- chderr = CHDERR_FILE_NOT_FOUND;
- goto cleanup;
+ // on failure, return NULL
+ return sha1_t::null;
}
-
- chderr = chd_create_file(file, logicalbytes, hunkbytes, compression, parent);
- if (chderr != CHDERR_NONE)
- goto cleanup;
-
-cleanup:
- if (file != NULL)
- core_fclose(file);
- return chderr;
}
-/*-------------------------------------------------
- chd_open - open a CHD file by
- filename
--------------------------------------------------*/
+//-------------------------------------------------
+// parent_sha1 - return our parent's SHA1 value
+//-------------------------------------------------
-chd_error chd_open(const char *filename, int mode, chd_file *parent, chd_file **chd)
+sha1_t chd_file::parent_sha1()
{
- chd_error err;
- file_error filerr;
- core_file *file = NULL;
- UINT32 openflags;
-
- /* choose the proper mode */
- switch(mode)
+ try
{
- case CHD_OPEN_READ:
- openflags = OPEN_FLAG_READ;
- break;
-
- case CHD_OPEN_READWRITE:
- openflags = OPEN_FLAG_READ | OPEN_FLAG_WRITE;
- break;
-
- default:
- err = CHDERR_INVALID_PARAMETER;
- goto cleanup;
+ // determine offset within the file
+ if (m_parentsha1_offset == 0)
+ throw CHDERR_UNSUPPORTED_VERSION;
+
+ // read the big-endian version
+ UINT8 rawbuf[sizeof(sha1_t)];
+ file_read(m_parentsha1_offset, rawbuf, sizeof(rawbuf));
+ return be_read_sha1(rawbuf);
}
-
- /* open the file */
- filerr = core_fopen(filename, openflags, &file);
- if (filerr != FILERR_NONE)
+ catch (chd_error &)
{
- err = CHDERR_FILE_NOT_FOUND;
- goto cleanup;
+ // on failure, return NULL
+ return sha1_t::null;
}
-
- /* now open the CHD */
- err = chd_open_file(file, mode, parent, chd);
- if (err != CHDERR_NONE)
- goto cleanup;
-
- /* we now own this file */
- (*chd)->owns_file = TRUE;
-
-cleanup:
- if ((err != CHDERR_NONE) && (file != NULL))
- core_fclose(file);
- return err;
}
-/*-------------------------------------------------
- chd_close - close a CHD file for access
--------------------------------------------------*/
+//-------------------------------------------------
+// hunk_info - return information about this
+// hunk
+//-------------------------------------------------
-void chd_close(chd_file *chd)
+chd_error chd_file::hunk_info(UINT32 hunknum, chd_codec_type &compressor, UINT32 &compbytes)
{
- /* punt if NULL or invalid */
- if (chd == NULL || chd->cookie != COOKIE_VALUE)
- return;
-
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
-
- /* kill the work queue and any work item */
- if (chd->workitem != NULL)
- osd_work_item_release(chd->workitem);
- if (chd->workqueue != NULL)
- osd_work_queue_free(chd->workqueue);
-
- /* deinit the codec */
- if (chd->codecintf != NULL && chd->codecintf->free != NULL)
- (*chd->codecintf->free)(chd);
-
- /* free the compressed data buffer */
- if (chd->compressed != NULL)
- free(chd->compressed);
-
- /* free the hunk cache and compare data */
- if (chd->compare != NULL)
- free(chd->compare);
- if (chd->cache != NULL)
- free(chd->cache);
-
- /* free the hunk map */
- if (chd->map != NULL)
- free(chd->map);
-
- /* free the CRC table */
- if (chd->crctable != NULL)
- free(chd->crctable);
-
- /* free the CRC map */
- if (chd->crcmap != NULL)
- free(chd->crcmap);
-
- /* close the file */
- if (chd->owns_file && chd->file != NULL)
- core_fclose(chd->file);
-
- if (PRINTF_MAX_HUNK) printf("Max hunk = %d/%d\n", chd->maxhunk, chd->header.totalhunks);
-
- /* free our memory */
- free(chd);
-}
-
-
-/*-------------------------------------------------
- chd_core_file - return the associated
- core_file
--------------------------------------------------*/
+ // error if invalid
+ if (hunknum >= m_hunkcount)
+ return CHDERR_HUNK_OUT_OF_RANGE;
+
+ // get the map pointer
+ UINT8 *rawmap;
+ switch (m_version)
+ {
+ // v3/v4 map entries
+ case 3:
+ case 4:
+ rawmap = m_rawmap + 16 * hunknum;
+ switch (rawmap[15] & V34_MAP_ENTRY_FLAG_TYPE_MASK)
+ {
+ case V34_MAP_ENTRY_TYPE_COMPRESSED:
+ compressor = CHD_CODEC_ZLIB;
+ compbytes = be_read(&rawmap[12], 2) + (rawmap[14] << 16);
+ break;
+
+ case V34_MAP_ENTRY_TYPE_UNCOMPRESSED:
+ compressor = CHD_CODEC_NONE;
+ compbytes = m_hunkbytes;
+ break;
+
+ case V34_MAP_ENTRY_TYPE_MINI:
+ compressor = CHD_CODEC_MINI;
+ compbytes = 0;
+ break;
+
+ case V34_MAP_ENTRY_TYPE_SELF_HUNK:
+ compressor = CHD_CODEC_SELF;
+ compbytes = 0;
+ break;
-core_file *chd_core_file(chd_file *chd)
-{
- return chd->file;
-}
+ case V34_MAP_ENTRY_TYPE_PARENT_HUNK:
+ compressor = CHD_CODEC_PARENT;
+ compbytes = 0;
+ break;
+ }
+ break;
+
+ // v5 map entries
+ case 5:
+ rawmap = m_rawmap + m_mapentrybytes * hunknum;
+ // uncompressed case
+ if (!compressed())
+ {
+ if (be_read(&rawmap[0], 4) == 0)
+ {
+ compressor = CHD_CODEC_PARENT;
+ compbytes = 0;
+ }
+ else
+ {
+ compressor = CHD_CODEC_NONE;
+ compbytes = m_hunkbytes;
+ }
+ break;
+ }
+
+ // compressed case
+ switch (rawmap[0])
+ {
+ case COMPRESSION_TYPE_0:
+ case COMPRESSION_TYPE_1:
+ case COMPRESSION_TYPE_2:
+ case COMPRESSION_TYPE_3:
+ compressor = m_compression[rawmap[0]];
+ compbytes = be_read(&rawmap[1], 3);
+ break;
-/*-------------------------------------------------
- chd_error_string - return an error string for
- the given CHD error
--------------------------------------------------*/
+ case COMPRESSION_NONE:
+ compressor = CHD_CODEC_NONE;
+ compbytes = m_hunkbytes;
+ break;
-const char *chd_error_string(chd_error err)
-{
- switch (err)
- {
- case CHDERR_NONE: return "no error";
- case CHDERR_NO_INTERFACE: return "no drive interface";
- case CHDERR_OUT_OF_MEMORY: return "out of memory";
- case CHDERR_INVALID_FILE: return "invalid file";
- case CHDERR_INVALID_PARAMETER: return "invalid parameter";
- case CHDERR_INVALID_DATA: return "invalid data";
- case CHDERR_FILE_NOT_FOUND: return "file not found";
- case CHDERR_REQUIRES_PARENT: return "requires parent";
- case CHDERR_FILE_NOT_WRITEABLE: return "file not writeable";
- case CHDERR_READ_ERROR: return "read error";
- case CHDERR_WRITE_ERROR: return "write error";
- case CHDERR_CODEC_ERROR: return "codec error";
- case CHDERR_INVALID_PARENT: return "invalid parent";
- case CHDERR_HUNK_OUT_OF_RANGE: return "hunk out of range";
- case CHDERR_DECOMPRESSION_ERROR: return "decompression error";
- case CHDERR_COMPRESSION_ERROR: return "compression error";
- case CHDERR_CANT_CREATE_FILE: return "can't create file";
- case CHDERR_CANT_VERIFY: return "can't verify file";
- case CHDERR_NOT_SUPPORTED: return "operation not supported";
- case CHDERR_METADATA_NOT_FOUND: return "can't find metadata";
- case CHDERR_INVALID_METADATA_SIZE: return "invalid metadata size";
- case CHDERR_UNSUPPORTED_VERSION: return "unsupported CHD version";
- case CHDERR_VERIFY_INCOMPLETE: return "incomplete verify";
- case CHDERR_INVALID_METADATA: return "invalid metadata";
- case CHDERR_INVALID_STATE: return "invalid state";
- case CHDERR_OPERATION_PENDING: return "operation pending";
- case CHDERR_NO_ASYNC_OPERATION: return "no async operation in progress";
- case CHDERR_UNSUPPORTED_FORMAT: return "unsupported format";
- default: return "undocumented error";
+ case COMPRESSION_SELF:
+ compressor = CHD_CODEC_SELF;
+ compbytes = 0;
+ break;
+
+ case COMPRESSION_PARENT:
+ compressor = CHD_CODEC_PARENT;
+ compbytes = 0;
+ break;
+ }
+ break;
}
-}
-
-
-
-/***************************************************************************
- CHD HEADER MANAGEMENT
-***************************************************************************/
-
-/*-------------------------------------------------
- chd_get_header - return a pointer to the
- extracted header data
--------------------------------------------------*/
-
-const chd_header *chd_get_header(chd_file *chd)
-{
- /* punt if NULL or invalid */
- if (chd == NULL || chd->cookie != COOKIE_VALUE)
- return NULL;
-
- return &chd->header;
-}
-
-
-/*-------------------------------------------------
- chd_set_header_file - write the current header to
- the file
--------------------------------------------------*/
-
-chd_error chd_set_header_file(core_file *file, const chd_header *header)
-{
- chd_header oldheader;
- chd_error err;
-
- /* validate the header */
- err = header_validate(header);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* read the old header */
- err = header_read(file, &oldheader);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
- /* make sure we're only making valid changes */
- if (header->length != oldheader.length)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
- if (header->version != oldheader.version)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
- if (header->compression != oldheader.compression)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
- if (header->hunkbytes != oldheader.hunkbytes)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
- if (header->totalhunks != oldheader.totalhunks)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
- if (header->metaoffset != oldheader.metaoffset)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
- if (header->obsolete_hunksize != oldheader.obsolete_hunksize)
- EARLY_EXIT(err = CHDERR_INVALID_PARAMETER);
-
- /* write the new header */
- err = header_write(file, header);
- if (err != CHDERR_NONE)
- EARLY_EXIT(err);
-
return CHDERR_NONE;
-
-cleanup:
- return err;
}
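hunk_info() is the new per-hunk introspection hook. A hedged usage sketch, assuming chd.h and <cstdio> are included, an already-opened chd_file, and a hunk_count() accessor for m_hunkcount (the reporting itself is made up):

// walk an opened CHD and report how each hunk is stored
void report_hunks(chd_file &chd)
{
    for (UINT32 hunknum = 0; hunknum < chd.hunk_count(); hunknum++)
    {
        chd_codec_type compressor;
        UINT32 compbytes;
        if (chd.hunk_info(hunknum, compressor, compbytes) != CHDERR_NONE)
            break;
        if (compressor == CHD_CODEC_NONE)
            printf("hunk %u: stored uncompressed (%u bytes)\n", hunknum, compbytes);
    }
}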
-/*-------------------------------------------------
- chd_set_header - write the current
- header to the file
--------------------------------------------------*/
+//-------------------------------------------------
+// set_raw_sha1 - set our SHA1 values
+//-------------------------------------------------
-chd_error chd_set_header(const char *filename, const chd_header *header)
+void chd_file::set_raw_sha1(sha1_t rawdata)
{
- core_file *file = NULL;
- file_error filerr;
- chd_error err;
-
- filerr = core_fopen(filename, OPEN_FLAG_READ | OPEN_FLAG_WRITE, &file);
- if (filerr != FILERR_NONE)
- {
- err = CHDERR_FILE_NOT_FOUND;
- goto cleanup;
- }
-
- err = chd_set_header_file(file, header);
- if (err != CHDERR_NONE)
- goto cleanup;
-
-cleanup:
- if (file != NULL)
- core_fclose(file);
- return err;
+ // create a big-endian version
+ UINT8 rawbuf[sizeof(sha1_t)];
+ be_write_sha1(rawbuf, rawdata);
+
+ // write to the header
+ UINT64 offset = (m_rawsha1_offset != 0) ? m_rawsha1_offset : m_sha1_offset;
+ assert(offset != 0);
+ file_write(offset, rawbuf, sizeof(rawbuf));
+
+ // if we have a separate rawsha1_offset, update the full sha1 as well
+ if (m_rawsha1_offset != 0)
+ metadata_update_hash();
}
+//-------------------------------------------------
+// set_parent_sha1 - set the parent SHA1 value
+//-------------------------------------------------
-/***************************************************************************
- CORE DATA READ/WRITE
-***************************************************************************/
-
-/*-------------------------------------------------
- chd_read - read a single hunk from the CHD
- file
--------------------------------------------------*/
-
-chd_error chd_read(chd_file *chd, UINT32 hunknum, void *buffer)
+void chd_file::set_parent_sha1(sha1_t parent)
{
- /* punt if NULL or invalid */
- if (chd == NULL || chd->cookie != COOKIE_VALUE)
- return CHDERR_INVALID_PARAMETER;
-
- /* if we're past the end, fail */
- if (hunknum >= chd->header.totalhunks)
- return CHDERR_HUNK_OUT_OF_RANGE;
+ // if no file, fail
+ if (m_file == NULL)
+ throw CHDERR_INVALID_FILE;
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
-
- /* perform the read */
- return hunk_read_into_memory(chd, hunknum, (UINT8 *)buffer);
+ // create a big-endian version
+ UINT8 rawbuf[sizeof(sha1_t)];
+ be_write_sha1(rawbuf, parent);
+
+ // write to the header
+ assert(m_parentsha1_offset != 0);
+ file_write(m_parentsha1_offset, rawbuf, sizeof(rawbuf));
}
-/*-------------------------------------------------
- chd_read_async - read a single hunk from the
- CHD file asynchronously
--------------------------------------------------*/
+//-------------------------------------------------
+// create - create a new file with no parent
+// using an existing opened file handle
+//-------------------------------------------------
-chd_error chd_read_async(chd_file *chd, UINT32 hunknum, void *buffer)
+chd_error chd_file::create(core_file &file, UINT64 logicalbytes, UINT32 hunkbytes, UINT32 unitbytes, chd_codec_type compression[4])
{
- /* punt if NULL or invalid */
- if (chd == NULL || chd->cookie != COOKIE_VALUE)
- return CHDERR_INVALID_PARAMETER;
-
- /* if we're past the end, fail */
- if (hunknum >= chd->header.totalhunks)
- return CHDERR_HUNK_OUT_OF_RANGE;
+ // make sure we don't already have a file open
+ if (m_file != NULL)
+ return CHDERR_ALREADY_OPEN;
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
+ // set the header parameters
+ m_version = HEADER_VERSION;
+ m_logicalbytes = logicalbytes;
+ m_metaoffset = 0;
+ m_hunkbytes = hunkbytes;
+ m_hunkcount = (m_logicalbytes + m_hunkbytes - 1) / m_hunkbytes;
+ m_unitbytes = unitbytes;
+ memcpy(m_compression, compression, sizeof(m_compression));
+ m_parent = NULL;
- /* set the async parameters */
- chd->async_hunknum = hunknum;
- chd->async_buffer = buffer;
-
- /* queue the work item */
- if (queue_async_operation(chd, async_read_callback))
- return CHDERR_OPERATION_PENDING;
-
- /* if we fail, fall back on the sync version */
- return chd_read(chd, hunknum, buffer);
+ // take ownership of the file
+ m_file = &file;
+ m_owns_file = false;
+ return create_common();
}
-/*-------------------------------------------------
- chd_write - write a single hunk to the CHD
- file
--------------------------------------------------*/
+//-------------------------------------------------
+// create - create a new file with a parent
+// using an existing opened file handle
+//-------------------------------------------------
-chd_error chd_write(chd_file *chd, UINT32 hunknum, const void *buffer)
+chd_error chd_file::create(core_file &file, UINT64 logicalbytes, UINT32 hunkbytes, chd_codec_type compression[4], chd_file &parent)
{
- /* punt if NULL or invalid */
- if (chd == NULL || chd->cookie != COOKIE_VALUE)
- return CHDERR_INVALID_PARAMETER;
-
- /* if we're past the end, fail */
- if (hunknum >= chd->header.totalhunks)
- return CHDERR_HUNK_OUT_OF_RANGE;
+ // make sure we don't already have a file open
+ if (m_file != NULL)
+ return CHDERR_ALREADY_OPEN;
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
+ // set the header parameters
+ m_version = HEADER_VERSION;
+ m_logicalbytes = logicalbytes;
+ m_metaoffset = 0;
+ m_hunkbytes = hunkbytes;
+ m_hunkcount = (m_logicalbytes + m_hunkbytes - 1) / m_hunkbytes;
+ m_unitbytes = parent.unit_bytes();
+ memcpy(m_compression, compression, sizeof(m_compression));
+ m_parent = &parent;
- /* then write out the hunk */
- return hunk_write_from_memory(chd, hunknum, (const UINT8 *)buffer);
+ // take ownership of the file
+ m_file = &file;
+ m_owns_file = false;
+ return create_common();
}
-/*-------------------------------------------------
- chd_write_async - write a single hunk to the
- CHD file asynchronously
--------------------------------------------------*/
+//-------------------------------------------------
+// create - create a new file with no parent
+// using a filename
+//-------------------------------------------------
-chd_error chd_write_async(chd_file *chd, UINT32 hunknum, const void *buffer)
+chd_error chd_file::create(const char *filename, UINT64 logicalbytes, UINT32 hunkbytes, UINT32 unitbytes, chd_codec_type compression[4])
{
- /* punt if NULL or invalid */
- if (chd == NULL || chd->cookie != COOKIE_VALUE)
- return CHDERR_INVALID_PARAMETER;
-
- /* if we're past the end, fail */
- if (hunknum >= chd->header.totalhunks)
- return CHDERR_HUNK_OUT_OF_RANGE;
-
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
+ // make sure we don't already have a file open
+ if (m_file != NULL)
+ return CHDERR_ALREADY_OPEN;
- /* set the async parameters */
- chd->async_hunknum = hunknum;
- chd->async_buffer = (void *)buffer;
-
- /* queue the work item */
- if (queue_async_operation(chd, async_write_callback))
- return CHDERR_OPERATION_PENDING;
+ // create the new file
+ core_file *file = NULL;
+ file_error filerr = core_fopen(filename, OPEN_FLAG_READ | OPEN_FLAG_WRITE | OPEN_FLAG_CREATE, &file);
+ if (filerr != FILERR_NONE)
+ return CHDERR_FILE_NOT_FOUND;
- /* if we fail, fall back on the sync version */
- return chd_write(chd, hunknum, buffer);
+ // create the file normally, then claim the file
+ chd_error chderr = create(*file, logicalbytes, hunkbytes, unitbytes, compression);
+ m_owns_file = true;
+
+ // if an error happened, close and delete the file
+ if (chderr != CHDERR_NONE)
+ {
+ core_fclose(file);
+ osd_rmfile(filename);
+ }
+ return chderr;
}
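A hedged sketch of creating a fresh CHD through the filename overload above; the sizes and codec list are illustrative only (zlib in slot 0, the remaining slots left empty):

// create a 1 MB CHD with 4096-byte hunks and 512-byte units, zlib-only compression
chd_codec_type codecs[4] = { CHD_CODEC_ZLIB, CHD_CODEC_NONE, CHD_CODEC_NONE, CHD_CODEC_NONE };
chd_file chd;
chd_error err = chd.create("test.chd", 1024 * 1024, 4096, 512, codecs);
if (err == CHDERR_NONE)
    chd.close();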
-/*-------------------------------------------------
- chd_async_complete - get the result of a
- completed work item and clear it out of the
- system
--------------------------------------------------*/
+//-------------------------------------------------
+// create - create a new file with a parent
+// using a filename
+//-------------------------------------------------
-chd_error chd_async_complete(chd_file *chd)
+chd_error chd_file::create(const char *filename, UINT64 logicalbytes, UINT32 hunkbytes, chd_codec_type compression[4], chd_file &parent)
{
- void *result;
+ // make sure we don't already have a file open
+ if (m_file != NULL)
+ return CHDERR_ALREADY_OPEN;
- /* if nothing present, return an error */
- if (chd->workitem == NULL)
- return CHDERR_NO_ASYNC_OPERATION;
-
- /* wait for the work to complete */
- wait_for_pending_async(chd);
-
- /* get the result and free the work item */
- result = osd_work_item_result(chd->workitem);
- osd_work_item_release(chd->workitem);
- chd->workitem = NULL;
+ // create the new file
+ core_file *file = NULL;
+ file_error filerr = core_fopen(filename, OPEN_FLAG_READ | OPEN_FLAG_WRITE | OPEN_FLAG_CREATE, &file);
+ if (filerr != FILERR_NONE)
+ return CHDERR_FILE_NOT_FOUND;
- return (chd_error)(ptrdiff_t)result;
+ // create the file normally, then claim the file
+ chd_error chderr = create(*file, logicalbytes, hunkbytes, compression, parent);
+ m_owns_file = true;
+
+ // if an error happened, close and delete the file
+ if (chderr != CHDERR_NONE)
+ {
+ core_fclose(file);
+ osd_rmfile(filename);
+ }
+ return chderr;
}
+//-------------------------------------------------
+// open - open an existing file for read or
+// read/write
+//-------------------------------------------------
-/***************************************************************************
- METADATA MANAGEMENT
-***************************************************************************/
-
-/*-------------------------------------------------
- chd_get_metadata - get the indexed metadata
- of the given type
--------------------------------------------------*/
-
-chd_error chd_get_metadata(chd_file *chd, UINT32 searchtag, UINT32 searchindex, void *output, UINT32 outputlen, UINT32 *resultlen, UINT32 *resulttag, UINT8 *resultflags)
+chd_error chd_file::open(const char *filename, bool writeable, chd_file *parent)
{
- metadata_entry metaentry;
- chd_error err;
- UINT32 count;
+ // make sure we don't already have a file open
+ if (m_file != NULL)
+ return CHDERR_ALREADY_OPEN;
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
+ // open the file
+ UINT32 openflags = writeable ? (OPEN_FLAG_READ | OPEN_FLAG_WRITE) : OPEN_FLAG_READ;
+ core_file *file = NULL;
+ file_error filerr = core_fopen(filename, openflags, &file);
+ if (filerr != FILERR_NONE)
+ return CHDERR_FILE_NOT_FOUND;
- /* if we didn't find it, just return */
- err = metadata_find_entry(chd, searchtag, searchindex, &metaentry);
+ // now open the CHD
+ chd_error err = open(*file, writeable, parent);
if (err != CHDERR_NONE)
{
- /* unless we're an old version and they are requesting hard disk metadata */
- if (chd->header.version < 3 && (searchtag == HARD_DISK_METADATA_TAG || searchtag == CHDMETATAG_WILDCARD) && searchindex == 0)
- {
- char faux_metadata[256];
- UINT32 faux_length;
-
- /* fill in the faux metadata */
- sprintf(faux_metadata, HARD_DISK_METADATA_FORMAT, chd->header.obsolete_cylinders, chd->header.obsolete_heads, chd->header.obsolete_sectors, chd->header.hunkbytes / chd->header.obsolete_hunksize);
- faux_length = (UINT32)strlen(faux_metadata) + 1;
-
- /* copy the metadata itself */
- memcpy(output, faux_metadata, MIN(outputlen, faux_length));
-
- /* return the length of the data and the tag */
- if (resultlen != NULL)
- *resultlen = faux_length;
- if (resulttag != NULL)
- *resulttag = HARD_DISK_METADATA_TAG;
- return CHDERR_NONE;
- }
+ core_fclose(file);
return err;
}
-
- /* read the metadata */
- outputlen = MIN(outputlen, metaentry.length);
- core_fseek(chd->file, metaentry.offset + METADATA_HEADER_SIZE, SEEK_SET);
- count = core_fread(chd->file, output, outputlen);
- if (count != outputlen)
- return CHDERR_READ_ERROR;
-
- /* return the length of the data and the tag */
- if (resultlen != NULL)
- *resultlen = metaentry.length;
- if (resulttag != NULL)
- *resulttag = metaentry.metatag;
- if (resultflags != NULL)
- *resultflags = metaentry.flags;
- return CHDERR_NONE;
+
+ // we now own this file
+ m_owns_file = true;
+ return err;
}
-/*-------------------------------------------------
- chd_set_metadata - write the indexed metadata
- of the given type
--------------------------------------------------*/
+//-------------------------------------------------
+// open - open an existing file for read or
+// read/write
+//-------------------------------------------------
-chd_error chd_set_metadata(chd_file *chd, UINT32 metatag, UINT32 metaindex, const void *inputbuf, UINT32 inputlen, UINT8 flags)
+chd_error chd_file::open(core_file &file, bool writeable, chd_file *parent)
{
- UINT8 raw_meta_header[METADATA_HEADER_SIZE];
- metadata_entry metaentry = { 0 };
- chd_error err;
- UINT64 offset;
- UINT32 count;
-
- /* if the disk is an old version, punt */
- if (chd->header.version < 3)
- return CHDERR_NOT_SUPPORTED;
-
- /* if the disk isn't writeable, punt */
- if (!(chd->header.flags & CHDFLAGS_IS_WRITEABLE))
- return CHDERR_FILE_NOT_WRITEABLE;
-
- /* must write at least 1 byte */
- if (inputlen < 1)
- return CHDERR_INVALID_PARAMETER;
-
- /* no more than 16MB */
- if (inputlen >= 16 * 1024 * 1024)
- return CHDERR_INVALID_PARAMETER;
-
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
+ // make sure we don't already have a file open
+ if (m_file != NULL)
+ return CHDERR_ALREADY_OPEN;
- /* find the entry if it already exists */
- err = metadata_find_entry(chd, metatag, metaindex, &metaentry);
+ // open the file
+ m_file = &file;
+ m_owns_file = false;
+ m_parent = parent;
+ return open_common(writeable);
+}
- /* if it's there and it fits, just overwrite */
- if (err == CHDERR_NONE && inputlen <= metaentry.length)
- {
- /* overwrite the original data with our new input data */
- core_fseek(chd->file, metaentry.offset + METADATA_HEADER_SIZE, SEEK_SET);
- count = core_fwrite(chd->file, inputbuf, inputlen);
- if (count != inputlen)
- {
- err = CHDERR_WRITE_ERROR;
- goto update;
- }
- /* if the lengths don't match, we need to update the length in our header */
- if (inputlen != metaentry.length)
- err = metadata_set_length(chd, metaentry.offset, inputlen);
- goto update;
- }
+//-------------------------------------------------
+// close - close a CHD file for access
+//-------------------------------------------------
- /* if we already have an entry, unlink it */
- if (err == CHDERR_NONE)
- {
- err = metadata_set_previous_next(chd, metaentry.prev, metaentry.next);
- if (err != CHDERR_NONE)
- goto update;
- }
+void chd_file::close()
+{
+ // reset file characteristics
+ if (m_owns_file && m_file != NULL)
+ core_fclose(m_file);
+ m_file = NULL;
+ m_owns_file = false;
+ m_allow_reads = false;
+ m_allow_writes = false;
+
+ // reset core parameters from the header
+ m_version = HEADER_VERSION;
+ m_logicalbytes = 0;
+ m_mapoffset = 0;
+ m_metaoffset = 0;
+ m_hunkbytes = 0;
+ m_hunkcount = 0;
+ m_unitbytes = 0;
+ m_unitcount = 0;
+ memset(m_compression, 0, sizeof(m_compression));
+ m_parent = NULL;
+ m_parent_missing = false;
- /* now build us a new entry */
- put_bigendian_uint32(&raw_meta_header[0], metatag);
- put_bigendian_uint32(&raw_meta_header[4], (inputlen & 0x00ffffff) | (flags << 24));
- put_bigendian_uint64(&raw_meta_header[8], (err == CHDERR_NONE) ? metaentry.next : 0);
+ // reset key offsets within the header
+ m_mapoffset_offset = 0;
+ m_metaoffset_offset = 0;
+ m_sha1_offset = 0;
+ m_rawsha1_offset = 0;
+ m_parentsha1_offset = 0;
- /* write out the new header */
- offset = core_fsize(chd->file);
- core_fseek(chd->file, offset, SEEK_SET);
- count = core_fwrite(chd->file, raw_meta_header, sizeof(raw_meta_header));
- if (count != sizeof(raw_meta_header))
- {
- err = CHDERR_WRITE_ERROR;
- goto update;
- }
+ // reset map information
+ m_mapentrybytes = 0;
+ m_rawmap.reset();
- /* follow that with the data */
- core_fseek(chd->file, offset + METADATA_HEADER_SIZE, SEEK_SET);
- count = core_fwrite(chd->file, inputbuf, inputlen);
- if (count != inputlen)
+ // reset compression management
+ for (int decompnum = 0; decompnum < ARRAY_LENGTH(m_decompressor); decompnum++)
{
- err = CHDERR_WRITE_ERROR;
- goto update;
+ delete m_decompressor[decompnum];
+ m_decompressor[decompnum] = NULL;
}
+ m_compressed.reset();
- /* set the previous entry to point to us */
- err = metadata_set_previous_next(chd, metaentry.prev, offset);
-
-update:
- /* update the hash */
- if (metadata_compute_hash(chd, chd->header.rawsha1, chd->header.sha1) == CHDERR_NONE)
- err = header_write(chd->file, &chd->header);
- return err;
+ // reset caching
+ m_cache.reset();
+ m_cachehunk = ~0;
}
-/*-------------------------------------------------
- chd_clone_metadata - clone the metadata from
- one CHD to a second
--------------------------------------------------*/
+//-------------------------------------------------
+// read_hunk - read a single hunk from the CHD file
+//-------------------------------------------------
-chd_error chd_clone_metadata(chd_file *source, chd_file *dest)
+chd_error chd_file::read_hunk(UINT32 hunknum, void *buffer)
{
- UINT32 metatag, metasize, metaindex;
- UINT8 metabuffer[1024];
- UINT8 metaflags;
- chd_error err;
-
- /* clone the metadata */
- for (metaindex = 0; ; metaindex++)
+ // wrap this for clean reporting
+ try
{
- /* fetch the next piece of metadata */
- err = chd_get_metadata(source, CHDMETATAG_WILDCARD, metaindex, metabuffer, sizeof(metabuffer), &metasize, &metatag, &metaflags);
- if (err != CHDERR_NONE)
- {
- if (err == CHDERR_METADATA_NOT_FOUND)
- err = CHDERR_NONE;
- break;
- }
-
- /* if that fit, just write it back from the temporary buffer */
- if (metasize <= sizeof(metabuffer))
- {
- /* write it to the target */
- err = chd_set_metadata(dest, metatag, CHD_METAINDEX_APPEND, metabuffer, metasize, metaflags);
- if (err != CHDERR_NONE)
- break;
- }
+ // punt if no file
+ if (m_file == NULL)
+ throw CHDERR_NOT_OPEN;
+
+ // return an error if out of range
+ if (hunknum >= m_hunkcount)
+ throw CHDERR_HUNK_OUT_OF_RANGE;
- /* otherwise, allocate a bigger temporary buffer */
- else
+ // get a pointer to the map entry
+ UINT64 blockoffs;
+ UINT32 blocklen;
+ UINT32 blockcrc;
+ UINT8 *rawmap;
+ UINT8 *dest = reinterpret_cast<UINT8 *>(buffer);
+ switch (m_version)
{
- UINT8 *allocbuffer = (UINT8 *)malloc(metasize);
- if (allocbuffer == NULL)
- {
- err = CHDERR_OUT_OF_MEMORY;
- break;
- }
-
- /* re-read the whole thing */
- err = chd_get_metadata(source, CHDMETATAG_WILDCARD, metaindex, allocbuffer, metasize, &metasize, &metatag, &metaflags);
- if (err != CHDERR_NONE)
- {
- free(allocbuffer);
+ // v3/v4 map entries
+ case 3:
+ case 4:
+ rawmap = m_rawmap + 16 * hunknum;
+ blockoffs = be_read(&rawmap[0], 8);
+ blockcrc = be_read(&rawmap[8], 4);
+ switch (rawmap[15] & V34_MAP_ENTRY_FLAG_TYPE_MASK)
+ {
+ case V34_MAP_ENTRY_TYPE_COMPRESSED:
+ blocklen = be_read(&rawmap[12], 2) + (rawmap[14] << 16);
+ file_read(blockoffs, m_compressed, blocklen);
+ m_decompressor[0]->decompress(m_compressed, blocklen, dest, m_hunkbytes);
+ if (!(rawmap[15] & V34_MAP_ENTRY_FLAG_NO_CRC) && dest != NULL && crc32_creator::simple(dest, m_hunkbytes) != blockcrc)
+ throw CHDERR_DECOMPRESSION_ERROR;
+ return CHDERR_NONE;
+
+ case V34_MAP_ENTRY_TYPE_UNCOMPRESSED:
+ file_read(blockoffs, dest, m_hunkbytes);
+ if (!(rawmap[15] & V34_MAP_ENTRY_FLAG_NO_CRC) && crc32_creator::simple(dest, m_hunkbytes) != blockcrc)
+ throw CHDERR_DECOMPRESSION_ERROR;
+ return CHDERR_NONE;
+
+ case V34_MAP_ENTRY_TYPE_MINI:
+ be_write(dest, blockoffs, 8);
+ for (UINT32 bytes = 8; bytes < m_hunkbytes; bytes++)
+ dest[bytes] = dest[bytes - 8];
+ if (!(rawmap[15] & V34_MAP_ENTRY_FLAG_NO_CRC) && crc32_creator::simple(dest, m_hunkbytes) != blockcrc)
+ throw CHDERR_DECOMPRESSION_ERROR;
+ return CHDERR_NONE;
+
+ case V34_MAP_ENTRY_TYPE_SELF_HUNK:
+ return read_hunk(blockoffs, dest);
+
+ case V34_MAP_ENTRY_TYPE_PARENT_HUNK:
+ if (m_parent_missing)
+ throw CHDERR_REQUIRES_PARENT;
+ return m_parent->read_hunk(blockoffs, dest);
+ }
break;
- }
+
+ // v5 map entries
+ case 5:
+ rawmap = m_rawmap + m_mapentrybytes * hunknum;
- /* write it to the target */
- err = chd_set_metadata(dest, metatag, CHD_METAINDEX_APPEND, allocbuffer, metasize, metaflags);
- free(allocbuffer);
- if (err != CHDERR_NONE)
+ // uncompressed case
+ if (!compressed())
+ {
+ blockoffs = UINT64(be_read(rawmap, 4)) * UINT64(m_hunkbytes);
+ if (blockoffs != 0)
+ file_read(blockoffs, dest, m_hunkbytes);
+ else if (m_parent_missing)
+ throw CHDERR_REQUIRES_PARENT;
+ else if (m_parent != NULL)
+ m_parent->read_hunk(hunknum, dest);
+ else
+ memset(dest, 0, m_hunkbytes);
+ return CHDERR_NONE;
+ }
+
+ // compressed case
+ blocklen = be_read(&rawmap[1], 3);
+ blockoffs = be_read(&rawmap[4], 6);
+ blockcrc = be_read(&rawmap[10], 2);
+ switch (rawmap[0])
+ {
+ case COMPRESSION_TYPE_0:
+ case COMPRESSION_TYPE_1:
+ case COMPRESSION_TYPE_2:
+ case COMPRESSION_TYPE_3:
+ file_read(blockoffs, m_compressed, blocklen);
+ m_decompressor[rawmap[0]]->decompress(m_compressed, blocklen, dest, m_hunkbytes);
+ if (!m_decompressor[rawmap[0]]->lossy() && dest != NULL && crc16_creator::simple(dest, m_hunkbytes) != blockcrc)
+ throw CHDERR_DECOMPRESSION_ERROR;
+ if (m_decompressor[rawmap[0]]->lossy() && crc16_creator::simple(m_compressed, blocklen) != blockcrc)
+ throw CHDERR_DECOMPRESSION_ERROR;
+ return CHDERR_NONE;
+
+ case COMPRESSION_NONE:
+ file_read(blockoffs, dest, m_hunkbytes);
+ if (crc16_creator::simple(dest, m_hunkbytes) != blockcrc)
+ throw CHDERR_DECOMPRESSION_ERROR;
+ return CHDERR_NONE;
+
+ case COMPRESSION_SELF:
+ return read_hunk(blockoffs, dest);
+
+ case COMPRESSION_PARENT:
+ if (m_parent_missing)
+ throw CHDERR_REQUIRES_PARENT;
+ return m_parent->read_bytes(UINT64(blockoffs) * UINT64(m_parent->unit_bytes()), dest, m_hunkbytes);
+ }
break;
}
+
+ // if we get here, something was wrong
+ throw CHDERR_READ_ERROR;
}
- return err;
-}
-
-
-
-/***************************************************************************
- COMPRESSION MANAGEMENT
-***************************************************************************/
-
-/*-------------------------------------------------
- chd_compress_begin - begin compressing data
- into a CHD
--------------------------------------------------*/
-
-chd_error chd_compress_begin(chd_file *chd)
-{
- chd_error err;
-
- /* verify parameters */
- if (chd == NULL)
- return CHDERR_INVALID_PARAMETER;
-
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
-
- /* mark the CHD writeable and write the updated header */
- chd->header.flags |= CHDFLAGS_IS_WRITEABLE;
- err = header_write(chd->file, &chd->header);
- if (err != CHDERR_NONE)
+
+ // just return errors
+ catch (chd_error &err)
+ {
return err;
-
- /* create CRC maps for the new CHD and the parent */
- crcmap_init(chd, FALSE);
- if (chd->parent != NULL)
- crcmap_init(chd->parent, TRUE);
-
- /* init the MD5/SHA1 computations */
- MD5Init(&chd->compmd5);
- sha1_init(&chd->compsha1);
- chd->compressing = TRUE;
- chd->comphunk = 0;
-
- return CHDERR_NONE;
+ }
}
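
For reference, read_hunk decodes each compressed v5 map entry inline from 12 raw bytes. A small illustrative decoder is sketched below, with the field boundaries taken directly from the be_read() calls above; the struct and helper names are not part of chd.h and exist only to make the layout explicit.

    // Sketch only: the 12-byte v5 compressed-map entry as read by read_hunk.
    #include <stdint.h>

    struct v5_map_entry_sketch
    {
        uint8_t  comptype;   // byte 0: codec index 0-3, or COMPRESSION_NONE/SELF/PARENT
        uint32_t length;     // bytes 1-3: compressed length, big-endian
        uint64_t offset;     // bytes 4-9: file offset, self hunk number, or parent unit
        uint16_t crc;        // bytes 10-11: CRC-16 of the uncompressed hunk
    };

    static v5_map_entry_sketch parse_entry(const uint8_t *rawmap)
    {
        v5_map_entry_sketch e;
        e.comptype = rawmap[0];
        e.length = (uint32_t(rawmap[1]) << 16) | (uint32_t(rawmap[2]) << 8) | rawmap[3];
        e.offset = 0;
        for (int i = 4; i < 10; i++)
            e.offset = (e.offset << 8) | rawmap[i];
        e.crc = uint16_t((rawmap[10] << 8) | rawmap[11]);
        return e;
    }
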
-/*-------------------------------------------------
- chd_compress_hunk - append data to a CHD
- that is being compressed
--------------------------------------------------*/
+//-------------------------------------------------
+// write_hunk - write a single hunk to the CHD file
+//-------------------------------------------------
-chd_error chd_compress_hunk(chd_file *chd, const void *data, double *curratio, int is_half_hunk)
+chd_error chd_file::write_hunk(UINT32 hunknum, const void *buffer)
{
- UINT32 thishunk = chd->comphunk++;
- UINT64 sourceoffset = (UINT64)thishunk * (UINT64)chd->header.hunkbytes;
- UINT32 bytestochecksum;
- const void *crcdata;
- chd_error err;
-
- /* error if in the wrong state */
- if (!chd->compressing)
- return CHDERR_INVALID_STATE;
-
- /* write out the hunk */
- err = hunk_write_from_memory(chd, thishunk, (const UINT8 *)data, is_half_hunk);
- if (err != CHDERR_NONE)
- return err;
+ // wrap this for clean reporting
+ try
+ {
+ // punt if no file
+ if (m_file == NULL)
+ throw CHDERR_NOT_OPEN;
- /* if we are lossy, then we need to use the decompressed version in */
- /* the cache as our MD5/SHA1 source */
- crcdata = (chd->codecintf->lossy || data == NULL) ? chd->cache : data;
+ // return an error if out of range
+ if (hunknum >= m_hunkcount)
+ throw CHDERR_HUNK_OUT_OF_RANGE;
- /* update the MD5/SHA1 */
- bytestochecksum = chd->header.hunkbytes;
+ // if not writeable, fail
+ if (!m_allow_writes)
+ throw CHDERR_FILE_NOT_WRITEABLE;
- if (is_half_hunk)
- {
- bytestochecksum = bytestochecksum/2;
- }
+ // uncompressed writes only via this interface
+ if (compressed())
+ throw CHDERR_FILE_NOT_WRITEABLE;
+
+ // see if we have allocated the space on disk for this hunk
+ UINT8 *rawmap = m_rawmap + hunknum * 4;
+ UINT32 rawentry = be_read(rawmap, 4);
- if (sourceoffset + chd->header.hunkbytes > chd->header.logicalbytes)
- {
- if (sourceoffset >= chd->header.logicalbytes)
- bytestochecksum = 0;
+ // if not, allocate one now
+ if (rawentry == 0)
+ {
+ // append new data to the end of the file, aligned to the hunk size
+ rawentry = file_append(buffer, m_hunkbytes, m_hunkbytes) / m_hunkbytes;
+
+ // write the map entry back
+ be_write(rawmap, rawentry, 4);
+ file_write(m_mapoffset + hunknum * 4, rawmap, 4);
+
+ // update the cached hunk if we just wrote it
+ if (hunknum == m_cachehunk && buffer != m_cache)
+ memcpy(m_cache, buffer, m_hunkbytes);
+ }
+
+ // otherwise, just overwrite
else
- bytestochecksum = chd->header.logicalbytes - sourceoffset;
- }
- if (bytestochecksum > 0)
- {
- MD5Update(&chd->compmd5, (const unsigned char *)crcdata, bytestochecksum);
- sha1_update(&chd->compsha1, bytestochecksum, (const UINT8 *)crcdata);
+ file_write(UINT64(rawentry) * UINT64(m_hunkbytes), buffer, m_hunkbytes);
+ return CHDERR_NONE;
}
-
- /* update our CRC map */
- if ((chd->map[thishunk].flags & MAP_ENTRY_FLAG_TYPE_MASK) != MAP_ENTRY_TYPE_SELF_HUNK &&
- (chd->map[thishunk].flags & MAP_ENTRY_FLAG_TYPE_MASK) != MAP_ENTRY_TYPE_PARENT_HUNK)
- crcmap_add_entry(chd, thishunk);
-
- /* update the ratio */
- if (curratio != NULL)
+
+ // just return errors
+ catch (chd_error &err)
{
- UINT64 curlength = core_fsize(chd->file);
- *curratio = 1.0 - (double)curlength / (double)((UINT64)chd->comphunk * (UINT64)chd->header.hunkbytes);
+ return err;
}
-
- return CHDERR_NONE;
}
-/*-------------------------------------------------
- chd_compress_finish - complete compression of
- a CHD
--------------------------------------------------*/
+//-------------------------------------------------
+// read_units - read the given number of units
+// from the CHD
+//-------------------------------------------------
-chd_error chd_compress_finish(chd_file *chd, int write_protect)
+chd_error chd_file::read_units(UINT64 unitnum, void *buffer, UINT32 count)
{
- /* error if in the wrong state */
- if (!chd->compressing)
- return CHDERR_INVALID_STATE;
-
- /* compute the final MD5/SHA1 values */
- MD5Final(chd->header.md5, &chd->compmd5);
- sha1_final(&chd->compsha1);
- sha1_digest(&chd->compsha1, SHA1_DIGEST_SIZE, chd->header.rawsha1);
- metadata_compute_hash(chd, chd->header.rawsha1, chd->header.sha1);
-
- /* turn off the writeable flag and re-write the header */
- if (chd->header.compression != CHDCOMPRESSION_NONE || write_protect)
- chd->header.flags &= ~CHDFLAGS_IS_WRITEABLE;
- chd->compressing = FALSE;
- return header_write(chd->file, &chd->header);
+ return read_bytes(unitnum * UINT64(m_unitbytes), buffer, count * m_unitbytes);
}
+//-------------------------------------------------
+// write_units - write the given number of units
+// to the CHD
+//-------------------------------------------------
-/***************************************************************************
- VERIFICATION
-***************************************************************************/
-
-/*-------------------------------------------------
- chd_verify_begin - begin compressing data
- into a CHD
--------------------------------------------------*/
-
-chd_error chd_verify_begin(chd_file *chd)
+chd_error chd_file::write_units(UINT64 unitnum, const void *buffer, UINT32 count)
{
- /* verify parameters */
- if (chd == NULL)
- return CHDERR_INVALID_PARAMETER;
-
- /* if this is a writeable file image, we can't verify */
- if (chd->header.flags & CHDFLAGS_IS_WRITEABLE)
- return CHDERR_CANT_VERIFY;
-
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
-
- /* init the MD5/SHA1 computations */
- MD5Init(&chd->vermd5);
- sha1_init(&chd->versha1);
- chd->verifying = TRUE;
- chd->verhunk = 0;
-
- return CHDERR_NONE;
+ return write_bytes(unitnum * UINT64(m_unitbytes), buffer, count * m_unitbytes);
}
-/*-------------------------------------------------
- chd_verify_hunk - verify the next hunk in
- the CHD
--------------------------------------------------*/
+//-------------------------------------------------
+// read_bytes - read from the CHD at a byte level,
+// using the cache to handle partial hunks
+//-------------------------------------------------
-chd_error chd_verify_hunk(chd_file *chd)
+chd_error chd_file::read_bytes(UINT64 offset, void *buffer, UINT32 bytes)
{
- UINT32 thishunk = chd->verhunk++;
- UINT64 hunkoffset = (UINT64)thishunk * (UINT64)chd->header.hunkbytes;
- map_entry *entry;
- chd_error err;
-
- /* error if in the wrong state */
- if (!chd->verifying)
- return CHDERR_INVALID_STATE;
-
- /* read the hunk into the cache */
- err = hunk_read_into_cache(chd, thishunk);
- if (err != CHDERR_NONE)
- return err;
-
- entry = &chd->map[thishunk];
-
- /* update the MD5/SHA1 */
- if (hunkoffset < chd->header.logicalbytes)
+ // iterate over hunks
+ UINT32 first_hunk = offset / m_hunkbytes;
+ UINT32 last_hunk = (offset + bytes - 1) / m_hunkbytes;
+ UINT8 *dest = reinterpret_cast<UINT8 *>(buffer);
+ for (UINT32 curhunk = first_hunk; curhunk <= last_hunk; curhunk++)
{
- UINT64 bytestochecksum = MIN(chd->header.hunkbytes, chd->header.logicalbytes - hunkoffset);
-
- if (entry->flags & MAP_ENTRY_FLAG_HALF_HUNK)
+ // determine start/end boundaries
+ UINT32 startoffs = (curhunk == first_hunk) ? (offset % m_hunkbytes) : 0;
+ UINT32 endoffs = (curhunk == last_hunk) ? ((offset + bytes - 1) % m_hunkbytes) : (m_hunkbytes - 1);
+
+ // if it's a full block, just read directly from disk unless it's the cached hunk
+ chd_error err = CHDERR_NONE;
+ if (startoffs == 0 && endoffs == m_hunkbytes - 1 && curhunk != m_cachehunk)
+ err = read_hunk(curhunk, dest);
+
+ // otherwise, read from the cache
+ else
{
- bytestochecksum /= 2;
+ if (curhunk != m_cachehunk)
+ {
+ err = read_hunk(curhunk, m_cache);
+ if (err != CHDERR_NONE)
+ return err;
+ m_cachehunk = curhunk;
+ }
+ memcpy(dest, &m_cache[startoffs], endoffs + 1 - startoffs);
}
- if (bytestochecksum > 0)
- {
- MD5Update(&chd->vermd5, chd->cache, bytestochecksum);
- sha1_update(&chd->versha1, bytestochecksum, chd->cache);
- }
+ // handle errors and advance
+ if (err != CHDERR_NONE)
+ return err;
+ dest += endoffs + 1 - startoffs;
}
-
- /* validate the CRC if we have one */
- if (!(entry->flags & MAP_ENTRY_FLAG_NO_CRC) && entry->crc != crc32(0, chd->cache, chd->header.hunkbytes))
- return CHDERR_DECOMPRESSION_ERROR;
-
return CHDERR_NONE;
}
-/*-------------------------------------------------
- chd_verify_finish - finish verification of
- the CHD
--------------------------------------------------*/
+//-------------------------------------------------
+// write_bytes - write to the CHD at a byte level,
+// using the cache to handle partial hunks
+//-------------------------------------------------
-chd_error chd_verify_finish(chd_file *chd, chd_verify_result *result)
+chd_error chd_file::write_bytes(UINT64 offset, const void *buffer, UINT32 bytes)
{
- /* error if in the wrong state */
- if (!chd->verifying)
- return CHDERR_INVALID_STATE;
-
- /* compute the final MD5 */
- MD5Final(result->md5, &chd->vermd5);
-
- /* compute the final SHA1 */
- sha1_final(&chd->versha1);
- sha1_digest(&chd->versha1, SHA1_DIGEST_SIZE, result->rawsha1);
-
- /* compute the overall hash including metadata */
- metadata_compute_hash(chd, result->rawsha1, result->sha1);
+ // iterate over hunks
+ UINT32 first_hunk = offset / m_hunkbytes;
+ UINT32 last_hunk = (offset + bytes - 1) / m_hunkbytes;
+ const UINT8 *source = reinterpret_cast<const UINT8 *>(buffer);
+ for (UINT32 curhunk = first_hunk; curhunk <= last_hunk; curhunk++)
+ {
+ // determine start/end boundaries
+ UINT32 startoffs = (curhunk == first_hunk) ? (offset % m_hunkbytes) : 0;
+ UINT32 endoffs = (curhunk == last_hunk) ? ((offset + bytes - 1) % m_hunkbytes) : (m_hunkbytes - 1);
+
+ // if it's a full block, just write directly to disk unless it's the cached hunk
+ chd_error err = CHDERR_NONE;
+ if (startoffs == 0 && endoffs == m_hunkbytes - 1 && curhunk != m_cachehunk)
+ err = write_hunk(curhunk, source);
+
+ // otherwise, merge into the cache and write the hunk back
+ else
+ {
+ if (curhunk != m_cachehunk)
+ {
+ err = read_hunk(curhunk, m_cache);
+ if (err != CHDERR_NONE)
+ return err;
+ m_cachehunk = curhunk;
+ }
+ memcpy(&m_cache[startoffs], source, endoffs + 1 - startoffs);
+ err = write_hunk(curhunk, m_cache);
+ }
- /* return an error */
- chd->verifying = FALSE;
- return (chd->verhunk < chd->header.totalhunks) ? CHDERR_VERIFY_INCOMPLETE : CHDERR_NONE;
+ // handle errors and advance
+ if (err != CHDERR_NONE)
+ return err;
+ source += endoffs + 1 - startoffs;
+ }
+ return CHDERR_NONE;
}
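
Taken together, read_bytes and write_bytes give callers byte-addressable access on top of the hunk cache. A minimal usage sketch, assuming a chd_file that was opened elsewhere (the open path is outside this hunk):

    // Sketch only: read an arbitrary byte range that may straddle hunk boundaries.
    // "chd" is assumed to be an already-open chd_file.
    #include <stdio.h>
    #include "chd.h"

    chd_error dump_range(chd_file &chd, UINT64 offset, UINT32 length)
    {
        dynamic_buffer data(length);                  // dynamic_array<UINT8> from coretmpl.h
        chd_error err = chd.read_bytes(offset, data, length);
        if (err != CHDERR_NONE)
            printf("read_bytes failed: %s\n", chd.error_string(err));
        return err;
    }
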
+//-------------------------------------------------
+// read_metadata - read the indexed metadata
+// of the given type
+//-------------------------------------------------
-/***************************************************************************
- CODEC INTERFACES
-***************************************************************************/
-
-/*-------------------------------------------------
- chd_codec_config - set internal codec
- parameters
--------------------------------------------------*/
-
-chd_error chd_codec_config(chd_file *chd, int param, void *config)
+chd_error chd_file::read_metadata(chd_metadata_tag searchtag, UINT32 searchindex, astring &output)
{
- /* wait for any pending async operations */
- wait_for_pending_async(chd);
-
- /* if the codec has a configuration callback, call through to it */
- if (chd->codecintf->config != NULL)
- return (*chd->codecintf->config)(chd, param, config);
+ // wrap this for clean reporting
+ try
+ {
+ // if we didn't find it, just return
+ metadata_entry metaentry;
+ if (!metadata_find(searchtag, searchindex, metaentry))
+ throw CHDERR_METADATA_NOT_FOUND;
- return CHDERR_INVALID_PARAMETER;
+ // read the metadata
+ file_read(metaentry.offset + METADATA_HEADER_SIZE, output.stringbuffer(metaentry.length), metaentry.length);
+ return CHDERR_NONE;
+ }
+
+ // just return errors
+ catch (chd_error &err)
+ {
+ return err;
+ }
}
-
-/*-------------------------------------------------
- chd_get_codec_name - get the name of a
- particular codec
--------------------------------------------------*/
-
-const char *chd_get_codec_name(UINT32 codec)
+chd_error chd_file::read_metadata(chd_metadata_tag searchtag, UINT32 searchindex, dynamic_buffer &output)
{
- int intfnum;
-
- /* look for a matching codec and return its string */
- for (intfnum = 0; intfnum < ARRAY_LENGTH(codec_interfaces); intfnum++)
- if (codec_interfaces[intfnum].compression == codec)
- return codec_interfaces[intfnum].compname;
+ // wrap this for clean reporting
+ try
+ {
+ // if we didn't find it, just return
+ metadata_entry metaentry;
+ if (!metadata_find(searchtag, searchindex, metaentry))
+ throw CHDERR_METADATA_NOT_FOUND;
- return "Unknown";
+ // read the metadata
+ output.resize(metaentry.length);
+ file_read(metaentry.offset + METADATA_HEADER_SIZE, output, metaentry.length);
+ return CHDERR_NONE;
+ }
+
+ // just return errors
+ catch (chd_error &err)
+ {
+ return err;
+ }
}
-
-
-/***************************************************************************
- INTERNAL ASYNC OPERATIONS
-***************************************************************************/
-
-/*-------------------------------------------------
- async_read_callback - asynchronous reading
- callback
--------------------------------------------------*/
-
-static void *async_read_callback(void *param, int threadid)
+chd_error chd_file::read_metadata(chd_metadata_tag searchtag, UINT32 searchindex, void *output, UINT32 outputlen, UINT32 &resultlen)
{
- chd_file *chd = (chd_file *)param;
- chd_error err;
-
- /* read the hunk into the cache */
- err = hunk_read_into_memory(chd, chd->async_hunknum, (UINT8 *)chd->async_buffer);
+ // wrap this for clean reporting
+ try
+ {
+ // if we didn't find it, just return
+ metadata_entry metaentry;
+ if (!metadata_find(searchtag, searchindex, metaentry))
+ throw CHDERR_METADATA_NOT_FOUND;
- /* return the error */
- return (void *)err;
+ // read the metadata
+ resultlen = metaentry.length;
+ file_read(metaentry.offset + METADATA_HEADER_SIZE, output, MIN(outputlen, resultlen));
+ return CHDERR_NONE;
+ }
+
+ // just return errors
+ catch (chd_error &err)
+ {
+ return err;
+ }
}
-
-/*-------------------------------------------------
- async_write_callback - asynchronous writing
- callback
--------------------------------------------------*/
-
-static void *async_write_callback(void *param, int threadid)
+chd_error chd_file::read_metadata(chd_metadata_tag searchtag, UINT32 searchindex, dynamic_buffer &output, chd_metadata_tag &resulttag, UINT8 &resultflags)
{
- chd_file *chd = (chd_file *)param;
- chd_error err;
-
- /* write the hunk from memory */
- err = hunk_write_from_memory(chd, chd->async_hunknum, (const UINT8 *)chd->async_buffer);
+ // wrap this for clean reporting
+ try
+ {
+ // if we didn't find it, just return
+ metadata_entry metaentry;
+ if (!metadata_find(searchtag, searchindex, metaentry))
+ throw CHDERR_METADATA_NOT_FOUND;
- /* return the error */
- return (void *)err;
+ // read the metadata
+ output.resize(metaentry.length);
+ file_read(metaentry.offset + METADATA_HEADER_SIZE, output, metaentry.length);
+ resulttag = metaentry.metatag;
+ resultflags = metaentry.flags;
+ return CHDERR_NONE;
+ }
+
+ // just return errors
+ catch (chd_error &err)
+ {
+ return err;
+ }
}
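
As a usage example for the astring overload, the sketch below fetches the hard-disk metadata and parses the geometry the same way guess_unitbytes() does later in this file; HARD_DISK_METADATA_TAG and HARD_DISK_METADATA_FORMAT are the same macros used there, and the wrapper itself is illustrative only.

    // Sketch only: read and parse hard-disk geometry metadata.
    #include <stdio.h>
    #include "chd.h"    // assumed to make the HARD_DISK_METADATA_* macros visible, as in chd.c

    bool read_chs(chd_file &chd, int &cyls, int &heads, int &secs, int &sectorbytes)
    {
        astring metadata;
        if (chd.read_metadata(HARD_DISK_METADATA_TAG, 0, metadata) != CHDERR_NONE)
            return false;
        return sscanf(metadata, HARD_DISK_METADATA_FORMAT, &cyls, &heads, &secs, &sectorbytes) == 4;
    }
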
+//-------------------------------------------------
+// write_metadata - write the indexed metadata
+// of the given type
+//-------------------------------------------------
-/***************************************************************************
- INTERNAL HEADER OPERATIONS
-***************************************************************************/
-
-/*-------------------------------------------------
- header_validate - check the validity of a
- CHD header
--------------------------------------------------*/
-
-static chd_error header_validate(const chd_header *header)
+chd_error chd_file::write_metadata(chd_metadata_tag metatag, UINT32 metaindex, const void *inputbuf, UINT32 inputlen, UINT8 flags)
{
- int intfnum;
-
- /* require a valid version */
- if (header->version == 0 || header->version > CHD_HEADER_VERSION)
- return CHDERR_UNSUPPORTED_VERSION;
-
- /* require a valid length */
- if ((header->version == 1 && header->length != CHD_V1_HEADER_SIZE) ||
- (header->version == 2 && header->length != CHD_V2_HEADER_SIZE) ||
- (header->version == 3 && header->length != CHD_V3_HEADER_SIZE) ||
- (header->version == 4 && header->length != CHD_V4_HEADER_SIZE))
- return CHDERR_INVALID_PARAMETER;
-
- /* require valid flags */
- if (header->flags & CHDFLAGS_UNDEFINED)
- return CHDERR_INVALID_PARAMETER;
-
- /* require a supported compression mechanism */
- for (intfnum = 0; intfnum < ARRAY_LENGTH(codec_interfaces); intfnum++)
- if (codec_interfaces[intfnum].compression == header->compression)
- break;
- if (intfnum == ARRAY_LENGTH(codec_interfaces))
- return CHDERR_INVALID_PARAMETER;
-
- /* require a valid hunksize */
- if (header->hunkbytes == 0 || header->hunkbytes >= 65536 * 256)
- return CHDERR_INVALID_PARAMETER;
-
- /* require a valid hunk count */
- if (header->totalhunks == 0)
- return CHDERR_INVALID_PARAMETER;
-
- /* require a valid MD5 and/or SHA1 if we're using a parent */
- if ((header->flags & CHDFLAGS_HAS_PARENT) && memcmp(header->parentmd5, nullmd5, sizeof(nullmd5)) == 0 && memcmp(header->parentsha1, nullsha1, sizeof(nullsha1)) == 0)
- return CHDERR_INVALID_PARAMETER;
+ // wrap this for clean reporting
+ try
+ {
+ // must write at least 1 byte and no more than 16MB
+ if (inputlen < 1 || inputlen >= 16 * 1024 * 1024)
+ return CHDERR_INVALID_PARAMETER;
- /* if we're V3 or later, the obsolete fields must be 0 */
- if (header->version >= 3 &&
- (header->obsolete_cylinders != 0 || header->obsolete_sectors != 0 ||
- header->obsolete_heads != 0 || header->obsolete_hunksize != 0))
- return CHDERR_INVALID_PARAMETER;
+ // find the entry if it already exists
+ metadata_entry metaentry;
+ bool finished = false;
+ if (metadata_find(metatag, metaindex, metaentry))
+ {
+ // if the new data fits over the old data, just overwrite
+ if (inputlen <= metaentry.length)
+ {
+ file_write(metaentry.offset + METADATA_HEADER_SIZE, inputbuf, inputlen);
- /* if we're pre-V3, the obsolete fields must NOT be 0 */
- if (header->version < 3 &&
- (header->obsolete_cylinders == 0 || header->obsolete_sectors == 0 ||
- header->obsolete_heads == 0 || header->obsolete_hunksize == 0))
- return CHDERR_INVALID_PARAMETER;
+ // if the lengths don't match, we need to update the length in our header
+ if (inputlen != metaentry.length)
+ {
+ UINT8 length[3];
+ be_write(length, inputlen, 3);
+ file_write(metaentry.offset + 5, length, sizeof(length));
+ }
+
+ // indicate we did everything
+ finished = true;
+ }
+
+ // if it doesn't fit, unlink the current entry
+ else
+ metadata_set_previous_next(metaentry.prev, metaentry.next);
+ }
+
+ // if not yet done, create a new entry and append
+ if (!finished)
+ {
+ // now build us a new entry
+ UINT8 raw_meta_header[METADATA_HEADER_SIZE];
+ be_write(&raw_meta_header[0], metatag, 4);
+ raw_meta_header[4] = flags;
+ be_write(&raw_meta_header[5], inputlen & 0x00ffffff, 3);
+ be_write(&raw_meta_header[8], 0, 8);
+
+ // append the new header, then the data
+ UINT64 offset = file_append(raw_meta_header, sizeof(raw_meta_header));
+ file_append(inputbuf, inputlen);
+
+ // set the previous entry to point to us
+ metadata_set_previous_next(metaentry.prev, offset);
+ }
- return CHDERR_NONE;
+ // update the hash
+ metadata_update_hash();
+ return CHDERR_NONE;
+ }
+
+ // return any errors
+ catch (chd_error &err)
+ {
+ return err;
+ }
}
-/*-------------------------------------------------
- header_read - read a CHD header into the
- internal data structure
--------------------------------------------------*/
+//-------------------------------------------------
+// delete_metadata - remove the given metadata
+// from the list
+//-------------------------------------------------
-static chd_error header_read(core_file *file, chd_header *header)
+chd_error chd_file::delete_metadata(chd_metadata_tag metatag, UINT32 metaindex)
{
- UINT8 rawheader[CHD_MAX_HEADER_SIZE];
- UINT32 count;
-
- /* punt if NULL */
- if (header == NULL)
- return CHDERR_INVALID_PARAMETER;
-
- /* punt if invalid file */
- if (file == NULL)
- return CHDERR_INVALID_FILE;
-
- /* seek and read */
- core_fseek(file, 0, SEEK_SET);
- count = core_fread(file, rawheader, sizeof(rawheader));
- if (count != sizeof(rawheader))
- return CHDERR_READ_ERROR;
-
- /* verify the tag */
- if (strncmp((char *)rawheader, "MComprHD", 8) != 0)
- return CHDERR_INVALID_DATA;
-
- /* extract the direct data */
- memset(header, 0, sizeof(*header));
- header->length = get_bigendian_uint32(&rawheader[8]);
- header->version = get_bigendian_uint32(&rawheader[12]);
-
- /* make sure it's a version we understand */
- if (header->version == 0 || header->version > CHD_HEADER_VERSION)
- return CHDERR_UNSUPPORTED_VERSION;
-
- /* make sure the length is expected */
- if ((header->version == 1 && header->length != CHD_V1_HEADER_SIZE) ||
- (header->version == 2 && header->length != CHD_V2_HEADER_SIZE) ||
- (header->version == 3 && header->length != CHD_V3_HEADER_SIZE) ||
- (header->version == 4 && header->length != CHD_V4_HEADER_SIZE))
- return CHDERR_INVALID_DATA;
-
- /* extract the common data */
- header->flags = get_bigendian_uint32(&rawheader[16]);
- header->compression = get_bigendian_uint32(&rawheader[20]);
-
- /* extract the V1/V2-specific data */
- if (header->version < 3)
+ // wrap this for clean reporting
+ try
{
- int seclen = (header->version == 1) ? CHD_V1_SECTOR_SIZE : get_bigendian_uint32(&rawheader[76]);
- header->obsolete_hunksize = get_bigendian_uint32(&rawheader[24]);
- header->totalhunks = get_bigendian_uint32(&rawheader[28]);
- header->obsolete_cylinders = get_bigendian_uint32(&rawheader[32]);
- header->obsolete_heads = get_bigendian_uint32(&rawheader[36]);
- header->obsolete_sectors = get_bigendian_uint32(&rawheader[40]);
- memcpy(header->md5, &rawheader[44], CHD_MD5_BYTES);
- memcpy(header->parentmd5, &rawheader[60], CHD_MD5_BYTES);
- header->logicalbytes = (UINT64)header->obsolete_cylinders * (UINT64)header->obsolete_heads * (UINT64)header->obsolete_sectors * (UINT64)seclen;
- header->hunkbytes = seclen * header->obsolete_hunksize;
- header->metaoffset = 0;
+ // find the entry
+ metadata_entry metaentry;
+ if (!metadata_find(metatag, metaindex, metaentry))
+ throw CHDERR_METADATA_NOT_FOUND;
+
+ // point the previous to the next, unlinking us
+ metadata_set_previous_next(metaentry.prev, metaentry.next);
+ return CHDERR_NONE;
}
-
- /* extract the V3-specific data */
- else if (header->version == 3)
+
+ // return any errors
+ catch (chd_error &err)
{
- header->totalhunks = get_bigendian_uint32(&rawheader[24]);
- header->logicalbytes = get_bigendian_uint64(&rawheader[28]);
- header->metaoffset = get_bigendian_uint64(&rawheader[36]);
- memcpy(header->md5, &rawheader[44], CHD_MD5_BYTES);
- memcpy(header->parentmd5, &rawheader[60], CHD_MD5_BYTES);
- header->hunkbytes = get_bigendian_uint32(&rawheader[76]);
- memcpy(header->sha1, &rawheader[80], CHD_SHA1_BYTES);
- memcpy(header->parentsha1, &rawheader[100], CHD_SHA1_BYTES);
+ return err;
}
+}
- /* extract the V4-specific data */
- else
+
+//-------------------------------------------------
+// clone_all_metadata - clone the metadata from
+// one CHD to a second
+//-------------------------------------------------
+
+chd_error chd_file::clone_all_metadata(chd_file &source)
+{
+ // wrap this for clean reporting
+ try
{
- header->totalhunks = get_bigendian_uint32(&rawheader[24]);
- header->logicalbytes = get_bigendian_uint64(&rawheader[28]);
- header->metaoffset = get_bigendian_uint64(&rawheader[36]);
- header->hunkbytes = get_bigendian_uint32(&rawheader[44]);
- memcpy(header->sha1, &rawheader[48], CHD_SHA1_BYTES);
- memcpy(header->parentsha1, &rawheader[68], CHD_SHA1_BYTES);
- memcpy(header->rawsha1, &rawheader[88], CHD_SHA1_BYTES);
+ // iterate over metadata entries in the source
+ dynamic_buffer filedata;
+ metadata_entry metaentry;
+ for (bool has_data = source.metadata_find(CHDMETATAG_WILDCARD, 0, metaentry); has_data; has_data = source.metadata_find(CHDMETATAG_WILDCARD, 0, metaentry, true))
+ {
+ // read the metadata item
+ filedata.resize(metaentry.length);
+ source.file_read(metaentry.offset + METADATA_HEADER_SIZE, filedata, metaentry.length);
+
+ // write it to the destination
+ chd_error err = write_metadata(metaentry.metatag, -1, filedata, metaentry.length, metaentry.flags);
+ if (err != CHDERR_NONE)
+ throw err;
+ }
+ return CHDERR_NONE;
+ }
+
+ // return any errors
+ catch (chd_error &err)
+ {
+ return err;
}
-
- /* guess it worked */
- return CHDERR_NONE;
}
-/*-------------------------------------------------
- header_write - write a CHD header from the
- internal data structure
--------------------------------------------------*/
+//-------------------------------------------------
+// compute_overall_sha1 - iterate through the
+// metadata and compute the overall hash of the
+// CHD file
+//-------------------------------------------------
-static chd_error header_write(core_file *file, const chd_header *header)
+sha1_t chd_file::compute_overall_sha1(sha1_t rawsha1)
{
- UINT8 rawheader[CHD_MAX_HEADER_SIZE];
- UINT32 count;
+ // only works for v4 and above
+ if (m_version < 4)
+ return rawsha1;
- /* punt if NULL */
- if (header == NULL)
- return CHDERR_INVALID_PARAMETER;
+ // iterate over metadata
+ dynamic_buffer filedata;
+ dynamic_array<metadata_hash> hasharray;
+ metadata_entry metaentry;
+ for (bool has_data = metadata_find(CHDMETATAG_WILDCARD, 0, metaentry); has_data; has_data = metadata_find(CHDMETATAG_WILDCARD, 0, metaentry, true))
+ {
+ // if not checksumming, continue
+ if ((metaentry.flags & CHD_MDFLAGS_CHECKSUM) == 0)
+ continue;
- /* punt if invalid file */
- if (file == NULL)
- return CHDERR_INVALID_FILE;
+ // allocate memory and read the data
+ filedata.resize(metaentry.length);
+ file_read(metaentry.offset + METADATA_HEADER_SIZE, filedata, metaentry.length);
- /* only support writing modern headers */
- if (header->version != 4)
- return CHDERR_INVALID_PARAMETER;
+ // create an entry for this metadata and add it
+ metadata_hash hashentry;
+ be_write(hashentry.tag, metaentry.metatag, 4);
+ hashentry.sha1 = sha1_creator::simple(filedata, metaentry.length);
+ hasharray.append(hashentry);
+ }
- /* assemble the data */
- memset(rawheader, 0, sizeof(rawheader));
- memcpy(rawheader, "MComprHD", 8);
-
- put_bigendian_uint32(&rawheader[8], CHD_V4_HEADER_SIZE);
- put_bigendian_uint32(&rawheader[12], header->version);
- put_bigendian_uint32(&rawheader[16], header->flags);
- put_bigendian_uint32(&rawheader[20], header->compression);
- put_bigendian_uint32(&rawheader[24], header->totalhunks);
- put_bigendian_uint64(&rawheader[28], header->logicalbytes);
- put_bigendian_uint64(&rawheader[36], header->metaoffset);
- put_bigendian_uint32(&rawheader[44], header->hunkbytes);
- memcpy(&rawheader[48], header->sha1, CHD_SHA1_BYTES);
- memcpy(&rawheader[68], header->parentsha1, CHD_SHA1_BYTES);
- memcpy(&rawheader[88], header->rawsha1, CHD_SHA1_BYTES);
-
- /* seek and write */
- core_fseek(file, 0, SEEK_SET);
- count = core_fwrite(file, rawheader, CHD_V4_HEADER_SIZE);
- if (count != CHD_V4_HEADER_SIZE)
- return CHDERR_WRITE_ERROR;
+ // sort the array
+ if (hasharray.count() != 0)
+ qsort(&hasharray[0], hasharray.count(), sizeof(hasharray[0]), metadata_hash_compare);
- return CHDERR_NONE;
+ // read the raw data hash from our header and start a new SHA1 with that data
+ sha1_creator overall_sha1;
+ overall_sha1.append(&rawsha1, sizeof(rawsha1));
+ if (hasharray.count() != 0)
+ overall_sha1.append(&hasharray[0], hasharray.count() * sizeof(hasharray[0]));
+ return overall_sha1.finish();
}
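
In other words, the overall SHA-1 is layered: the 20-byte raw data SHA-1 is hashed together with a sorted array of (tag, SHA-1) pairs, one per metadata item carrying CHD_MDFLAGS_CHECKSUM. Schematically:

    overall_sha1 = SHA1( raw_sha1
                         || sorted { be32(tag) || SHA1(metadata payload) } )

where each array element is the 4-byte big-endian tag immediately followed by the 20-byte digest of that item's payload (assuming metadata_hash has no padding, as the append of hasharray.count() * sizeof(hasharray[0]) bytes implies), and the sort uses metadata_hash_compare, which is defined outside this hunk.
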
+//-------------------------------------------------
+// codec_configure - set internal codec parameters
+//-------------------------------------------------
-/***************************************************************************
- INTERNAL HUNK READ/WRITE
-***************************************************************************/
-
-/*-------------------------------------------------
- hunk_read_into_cache - read a hunk into
- the CHD's hunk cache
--------------------------------------------------*/
-
-static chd_error hunk_read_into_cache(chd_file *chd, UINT32 hunknum)
+chd_error chd_file::codec_configure(chd_codec_type codec, int param, void *config)
{
- chd_error err;
-
- /* track the max */
- if (hunknum > chd->maxhunk)
- chd->maxhunk = hunknum;
-
- /* if we're already in the cache, we're done */
- if (chd->cachehunk == hunknum)
- return CHDERR_NONE;
- chd->cachehunk = ~0;
-
- /* otherwise, read the data */
- err = hunk_read_into_memory(chd, hunknum, chd->cache);
- if (err != CHDERR_NONE)
+ // wrap this for clean reporting
+ try
+ {
+ // find the codec and call its configuration
+ for (int codecnum = 0; codecnum < ARRAY_LENGTH(m_compression); codecnum++)
+ if (m_compression[codecnum] == codec)
+ {
+ m_decompressor[codecnum]->configure(param, config);
+ return CHDERR_NONE;
+ }
+ return CHDERR_INVALID_PARAMETER;
+ }
+
+ // return any errors
+ catch (chd_error &err)
+ {
return err;
-
- /* mark the hunk successfully cached in */
- chd->cachehunk = hunknum;
- return CHDERR_NONE;
+ }
}
-/*-------------------------------------------------
- hunk_read_into_memory - read a hunk into
- memory at the given location
--------------------------------------------------*/
+//-------------------------------------------------
+// error_string - return an error string for
+// the given CHD error
+//-------------------------------------------------
-static chd_error hunk_read_into_memory(chd_file *chd, UINT32 hunknum, UINT8 *dest)
+const char *chd_file::error_string(chd_error err)
{
- map_entry *entry = &chd->map[hunknum];
- chd_error err;
- UINT32 bytes;
-
- /* return an error if out of range */
- if (hunknum >= chd->header.totalhunks)
- return CHDERR_HUNK_OUT_OF_RANGE;
-
- /* switch off the entry type */
- switch (entry->flags & MAP_ENTRY_FLAG_TYPE_MASK)
+ switch (err)
{
- /* compressed data */
- case MAP_ENTRY_TYPE_COMPRESSED:
-
- /* read it into the decompression buffer */
- core_fseek(chd->file, entry->offset, SEEK_SET);
- bytes = core_fread(chd->file, chd->compressed, entry->length);
- if (bytes != entry->length)
- return CHDERR_READ_ERROR;
-
- /* now decompress using the codec */
- err = CHDERR_NONE;
- if (chd->codecintf->decompress != NULL)
- err = (*chd->codecintf->decompress)(chd, entry->length, dest);
- if (err != CHDERR_NONE)
- return err;
- break;
-
- /* uncompressed data */
- case MAP_ENTRY_TYPE_UNCOMPRESSED:
- core_fseek(chd->file, entry->offset, SEEK_SET);
- bytes = core_fread(chd->file, dest, chd->header.hunkbytes);
- if (bytes != chd->header.hunkbytes)
- return CHDERR_READ_ERROR;
- break;
-
- /* mini-compressed data */
- case MAP_ENTRY_TYPE_MINI:
- put_bigendian_uint64(&dest[0], entry->offset);
- for (bytes = 8; bytes < chd->header.hunkbytes; bytes++)
- dest[bytes] = dest[bytes - 8];
- break;
-
- /* self-referenced data */
- case MAP_ENTRY_TYPE_SELF_HUNK:
- if (chd->cachehunk == entry->offset && dest == chd->cache)
- break;
- return hunk_read_into_memory(chd, entry->offset, dest);
-
- /* parent-referenced data */
- case MAP_ENTRY_TYPE_PARENT_HUNK:
- err = hunk_read_into_memory(chd->parent, entry->offset, dest);
- if (err != CHDERR_NONE)
- return err;
- break;
-
- case MAP_ENTRY_TYPE_2ND_COMPRESSED:
- /* read it into the decompression buffer */
- core_fseek(chd->file, entry->offset, SEEK_SET);
- bytes = core_fread(chd->file, chd->compressed, entry->length);
- if (bytes != entry->length)
- return CHDERR_READ_ERROR;
-
- /* now decompress using the codec */
- err = CHDERR_NONE;
- if (chd->codecintf->secondary_decompress != NULL)
- err = (*chd->codecintf->secondary_decompress)(chd, entry->length, dest);
- if (err != CHDERR_NONE)
- return err;
- break;
-
+ case CHDERR_NONE: return "no error";
+ case CHDERR_NO_INTERFACE: return "no drive interface";
+ case CHDERR_OUT_OF_MEMORY: return "out of memory";
+ case CHDERR_INVALID_FILE: return "invalid file";
+ case CHDERR_INVALID_PARAMETER: return "invalid parameter";
+ case CHDERR_INVALID_DATA: return "invalid data";
+ case CHDERR_FILE_NOT_FOUND: return "file not found";
+ case CHDERR_REQUIRES_PARENT: return "requires parent";
+ case CHDERR_FILE_NOT_WRITEABLE: return "file not writeable";
+ case CHDERR_READ_ERROR: return "read error";
+ case CHDERR_WRITE_ERROR: return "write error";
+ case CHDERR_CODEC_ERROR: return "codec error";
+ case CHDERR_INVALID_PARENT: return "invalid parent";
+ case CHDERR_HUNK_OUT_OF_RANGE: return "hunk out of range";
+ case CHDERR_DECOMPRESSION_ERROR: return "decompression error";
+ case CHDERR_COMPRESSION_ERROR: return "compression error";
+ case CHDERR_CANT_CREATE_FILE: return "can't create file";
+ case CHDERR_CANT_VERIFY: return "can't verify file";
+ case CHDERR_NOT_SUPPORTED: return "operation not supported";
+ case CHDERR_METADATA_NOT_FOUND: return "can't find metadata";
+ case CHDERR_INVALID_METADATA_SIZE: return "invalid metadata size";
+ case CHDERR_UNSUPPORTED_VERSION: return "unsupported CHD version";
+ case CHDERR_VERIFY_INCOMPLETE: return "incomplete verify";
+ case CHDERR_INVALID_METADATA: return "invalid metadata";
+ case CHDERR_INVALID_STATE: return "invalid state";
+ case CHDERR_OPERATION_PENDING: return "operation pending";
+ case CHDERR_UNSUPPORTED_FORMAT: return "unsupported format";
+ default: return "undocumented error";
}
- return CHDERR_NONE;
}
-/*-------------------------------------------------
- hunk_write_from_memory - write a hunk from
- memory into a CHD
--------------------------------------------------*/
-
-
-static chd_error hunk_write_from_memory(chd_file *chd, UINT32 hunknum, const UINT8 *src, int is_half_hunk)
-{
- map_entry *entry = &chd->map[hunknum];
- map_entry newentry;
- UINT8 fileentry[MAP_ENTRY_SIZE];
- const void *data = src;
- UINT32 bytes = 0, match;
- chd_error err;
- bool is_likely_cd = false;
- int strategy = 0;
- /* track the max */
- if (hunknum > chd->maxhunk)
- chd->maxhunk = hunknum;
-
- /* first compute the CRC of the original data */
- newentry.flags = 0;
- newentry.length = 0;
- newentry.offset = 0;
- newentry.crc = 0;
- if (src != NULL)
- newentry.crc = crc32(0, &src[0], chd->header.hunkbytes);
-
- /* if we're not a lossy codec, compute the CRC and look for matches */
- if (!chd->codecintf->lossy && src != NULL)
- {
- /* some extra stuff for zlib+ compression */
- if (chd->header.compression >= CHDCOMPRESSION_ZLIB_PLUS)
+//**************************************************************************
+// INTERNAL HELPERS
+//**************************************************************************
+
+//-------------------------------------------------
+// guess_unitbytes - for older CHD formats, take
+// a guess at the bytes/unit based on metadata
+//-------------------------------------------------
+
+UINT32 chd_file::guess_unitbytes()
+{
+ // look for hard disk metadata; if found, then the unit size == sector size
+ astring metadata;
+ int i0, i1, i2, i3;
+ if (read_metadata(HARD_DISK_METADATA_TAG, 0, metadata) == CHDERR_NONE && sscanf(metadata, HARD_DISK_METADATA_FORMAT, &i0, &i1, &i2, &i3) == 4)
+ return i3;
+
+ // look for CD-ROM metadata; if found, then the unit size == CD frame size
+ if (read_metadata(CDROM_OLD_METADATA_TAG, 0, metadata) == CHDERR_NONE ||
+ read_metadata(CDROM_TRACK_METADATA_TAG, 0, metadata) == CHDERR_NONE ||
+ read_metadata(CDROM_TRACK_METADATA2_TAG, 0, metadata) == CHDERR_NONE)
+ return CD_FRAME_SIZE;
+
+ // otherwise, just map 1:1 with the hunk size
+ return m_hunkbytes;
+}
+
+
+//-------------------------------------------------
+// parse_v3_header - parse the header from a v3
+// file and configure core parameters
+//-------------------------------------------------
+
+void chd_file::parse_v3_header(UINT8 *rawheader, sha1_t &parentsha1)
+{
+ // verify header length
+ if (be_read(&rawheader[8], 4) != V3_HEADER_SIZE)
+ throw CHDERR_INVALID_FILE;
+
+ // extract core info
+ m_logicalbytes = be_read(&rawheader[28], 8);
+ m_mapoffset = 120;
+ m_metaoffset = be_read(&rawheader[36], 8);
+ m_hunkbytes = be_read(&rawheader[76], 4);
+ m_hunkcount = be_read(&rawheader[24], 4);
+
+ // read the flags
+ UINT32 flags = be_read(&rawheader[16], 4);
+ if ((flags & 2) && m_allow_writes)
+ throw CHDERR_FILE_NOT_WRITEABLE;
+
+ // determine compression
+ switch (be_read(&rawheader[20], 4))
+ {
+ case 0: m_compression[0] = CHD_CODEC_NONE; break;
+ case 1: m_compression[0] = CHD_CODEC_ZLIB; break;
+ case 2: m_compression[0] = CHD_CODEC_ZLIB; break;
+ case 3: m_compression[0] = CHD_CODEC_AVHUFF; break;
+ default: throw CHDERR_UNKNOWN_COMPRESSION;
+ }
+ m_compression[1] = m_compression[2] = m_compression[3] = CHD_CODEC_NONE;
+
+ // describe the format
+ m_mapoffset_offset = 0;
+ m_metaoffset_offset = 36;
+ m_sha1_offset = 80;
+ m_rawsha1_offset = 0;
+ m_parentsha1_offset = 100;
+
+ // determine properties of map entries
+ m_mapentrybytes = 16;
+
+ // extract parent SHA-1
+ if (flags & 1)
+ parentsha1 = be_read_sha1(&rawheader[m_parentsha1_offset]);
+
+ // guess at the units based on snooping the metadata
+ m_unitbytes = guess_unitbytes();
+ m_unitcount = (m_logicalbytes + m_unitbytes - 1) / m_unitbytes;
+}
+
+
+//-------------------------------------------------
+// parse_v4_header - parse the header from a v4
+// file and configure core parameters
+//-------------------------------------------------
+
+void chd_file::parse_v4_header(UINT8 *rawheader, sha1_t &parentsha1)
+{
+ // verify header length
+ if (be_read(&rawheader[8], 4) != V4_HEADER_SIZE)
+ throw CHDERR_INVALID_FILE;
+
+ // extract core info
+ m_logicalbytes = be_read(&rawheader[28], 8);
+ m_mapoffset = 108;
+ m_metaoffset = be_read(&rawheader[36], 8);
+ m_hunkbytes = be_read(&rawheader[44], 4);
+ m_hunkcount = be_read(&rawheader[24], 4);
+
+ // read the flags
+ UINT32 flags = be_read(&rawheader[16], 4);
+ if ((flags & 2) && m_allow_writes)
+ throw CHDERR_FILE_NOT_WRITEABLE;
+
+ // determine compression
+ switch (be_read(&rawheader[20], 4))
+ {
+ case 0: m_compression[0] = CHD_CODEC_NONE; break;
+ case 1: m_compression[0] = CHD_CODEC_ZLIB; break;
+ case 2: m_compression[0] = CHD_CODEC_ZLIB; break;
+ case 3: m_compression[0] = CHD_CODEC_AVHUFF; break;
+ default: throw CHDERR_UNKNOWN_COMPRESSION;
+ }
+ m_compression[1] = m_compression[2] = m_compression[3] = CHD_CODEC_NONE;
+
+ // describe the format
+ m_mapoffset_offset = 0;
+ m_metaoffset_offset = 36;
+ m_sha1_offset = 48;
+ m_rawsha1_offset = 88;
+ m_parentsha1_offset = 68;
+
+ // determine properties of map entries
+ m_mapentrybytes = 16;
+
+ // extract parent SHA-1
+ if (flags & 1)
+ parentsha1 = be_read_sha1(&rawheader[m_parentsha1_offset]);
+
+ // guess at the units based on snooping the metadata
+ m_unitbytes = guess_unitbytes();
+ m_unitcount = (m_logicalbytes + m_unitbytes - 1) / m_unitbytes;
+}
+
+
+//-------------------------------------------------
+// parse_v5_header - parse the header from a v5
+// file and configure core parameters
+//-------------------------------------------------
+
+void chd_file::parse_v5_header(UINT8 *rawheader, sha1_t &parentsha1)
+{
+ // verify header length
+ if (be_read(&rawheader[8], 4) != V5_HEADER_SIZE)
+ throw CHDERR_INVALID_FILE;
+
+ // extract core info
+ m_logicalbytes = be_read(&rawheader[32], 8);
+ m_mapoffset = be_read(&rawheader[40], 8);
+ m_metaoffset = be_read(&rawheader[48], 8);
+ m_hunkbytes = be_read(&rawheader[56], 4);
+ m_hunkcount = (m_logicalbytes + m_hunkbytes - 1) / m_hunkbytes;
+ m_unitbytes = be_read(&rawheader[60], 4);
+ m_unitcount = (m_logicalbytes + m_unitbytes - 1) / m_unitbytes;
+
+ // determine compression
+ m_compression[0] = be_read(&rawheader[16], 4);
+ m_compression[1] = be_read(&rawheader[20], 4);
+ m_compression[2] = be_read(&rawheader[24], 4);
+ m_compression[3] = be_read(&rawheader[28], 4);
+
+ // describe the format
+ m_mapoffset_offset = 40;
+ m_metaoffset_offset = 48;
+ m_sha1_offset = 84;
+ m_rawsha1_offset = 64;
+ m_parentsha1_offset = 104;
+
+ // determine properties of map entries
+ m_mapentrybytes = compressed() ? 12 : 4;
+
+ // extract parent SHA-1
+ parentsha1 = be_read_sha1(&rawheader[m_parentsha1_offset]);
+}
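
For reference, the offsets used by parse_v5_header and by create_common further down describe the following v5 header layout (reconstructed purely from the be_read/be_write calls in this file; all integers are big-endian):

    offset  size  field
         0     8  "MComprHD" tag
         8     4  header length (V5_HEADER_SIZE)
        12     4  version (5)
        16    16  four compressor codes (m_compression[0..3])
        32     8  logical size in bytes
        40     8  map offset
        48     8  metadata offset
        56     4  hunk size in bytes
        60     4  unit size in bytes
        64    20  raw data SHA-1
        84    20  overall SHA-1 (raw data + metadata)
       104    20  parent overall SHA-1

Note that the hunk count is not stored in the v5 header; it is derived from the logical size and hunk size, exactly as parse_v5_header does above.
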
+
+
+//-------------------------------------------------
+// compress_v5_map - compress the v5 map and
+// write it to the end of the file
+//-------------------------------------------------
+
+chd_error chd_file::compress_v5_map()
+{
+ try
+ {
+ // first get a CRC-16 of the original rawmap
+ crc16_t mapcrc = crc16_creator::simple(m_rawmap, m_hunkcount * 12);
+
+ // create a buffer to hold the RLE data
+ dynamic_buffer compression_rle(m_hunkcount);
+ UINT8 *dest = compression_rle;
+
+ // use a huffman encoder for 16 different codes, maximum length is 8 bits
+ huffman_encoder<16, 8> encoder;
+ encoder.histo_reset();
+
+ // RLE-compress the compression type since we expect runs of the same
+ UINT32 max_self = 0;
+ UINT32 last_self = 0;
+ UINT64 max_parent = 0;
+ UINT64 last_parent = 0;
+ UINT32 max_complen = 0;
+ UINT8 lastcomp = 0;
+ int count = 0;
+ for (int hunknum = 0; hunknum < m_hunkcount; hunknum++)
{
- /* see if we can mini-compress first */
- for (bytes = 8; bytes < chd->header.hunkbytes; bytes++)
- if (src[bytes] != src[bytes - 8])
- break;
-
- /* if so, we don't need to write any data */
- if (bytes == chd->header.hunkbytes)
+ UINT8 curcomp = m_rawmap[hunknum * 12 + 0];
+
+ // promote self block references to more compact forms
+ if (curcomp == COMPRESSION_SELF)
{
- newentry.offset = get_bigendian_uint64(&src[0]);
- newentry.length = 0;
- newentry.flags = MAP_ENTRY_TYPE_MINI;
- goto write_entry;
+ UINT32 refhunk = be_read(&m_rawmap[hunknum * 12 + 4], 6);
+ if (refhunk == last_self)
+ curcomp = COMPRESSION_SELF_0;
+ else if (refhunk == last_self + 1)
+ curcomp = COMPRESSION_SELF_1;
+ else
+ max_self = MAX(max_self, refhunk);
+ last_self = refhunk;
}
- /* otherwise, see if we can find a match in the current file */
- match = crcmap_find_hunk(chd, hunknum, newentry.crc, &src[0]);
- if (match != NO_MATCH)
+ // promote parent block references to more compact forms
+ else if (curcomp == COMPRESSION_PARENT)
{
- newentry.offset = match;
- newentry.length = 0;
- newentry.flags = MAP_ENTRY_TYPE_SELF_HUNK;
- goto write_entry;
+ UINT32 refunit = be_read(&m_rawmap[hunknum * 12 + 4], 6);
+ if (refunit == (UINT64(hunknum) * UINT64(m_hunkbytes)) / m_unitbytes)
+ curcomp = COMPRESSION_PARENT_SELF;
+ else if (refunit == last_parent)
+ curcomp = COMPRESSION_PARENT_0;
+ else if (refunit == last_parent + m_hunkbytes / m_unitbytes)
+ curcomp = COMPRESSION_PARENT_1;
+ else
+ max_parent = MAX(max_parent, refunit);
+ last_parent = refunit;
}
-
- /* if we have a parent, see if we can find a match in there */
- if (chd->header.flags & CHDFLAGS_HAS_PARENT)
+
+ // track maximum compressed length
+ else if (curcomp >= COMPRESSION_TYPE_0 && curcomp <= COMPRESSION_TYPE_3)
+ max_complen = MAX(max_complen, be_read(&m_rawmap[hunknum * 12 + 1], 3));
+
+ // track repeats
+ if (curcomp == lastcomp)
+ count++;
+
+ // if no repeat, or we're at the end, flush it
+ if (curcomp != lastcomp || hunknum == m_hunkcount - 1)
{
- match = crcmap_find_hunk(chd->parent, ~0, newentry.crc, &src[0]);
- if (match != NO_MATCH)
+ while (count != 0)
{
- newentry.offset = match;
- newentry.length = 0;
- newentry.flags = MAP_ENTRY_TYPE_PARENT_HUNK;
- goto write_entry;
+ if (count < 3)
+ encoder.histo_one(*dest++ = lastcomp), count--;
+ else if (count <= 3+15)
+ {
+ encoder.histo_one(*dest++ = COMPRESSION_RLE_SMALL);
+ encoder.histo_one(*dest++ = count - 3);
+ count = 0;
+ }
+ else
+ {
+ int this_count = MIN(count, 3+16+255);
+ encoder.histo_one(*dest++ = COMPRESSION_RLE_LARGE);
+ encoder.histo_one(*dest++ = (this_count - 3 - 16) >> 4);
+ encoder.histo_one(*dest++ = (this_count - 3 - 16) & 15);
+ count -= this_count;
+ }
}
+ if (curcomp != lastcomp)
+ encoder.histo_one(*dest++ = lastcomp = curcomp);
}
}
- }
-
- if (chd->codecintf->secondary_compress != NULL)
- {
- if (chd->header.hunkbytes == (CD_MAX_SECTOR_DATA+CD_MAX_SUBCODE_DATA) * CD_FRAMES_PER_HUNK)
- is_likely_cd = true;
-
- if (is_likely_cd)
+
+ // compute a tree and export it to the buffer
+ dynamic_buffer compressed(m_hunkcount * 6);
+ bitstream_out bitbuf(&compressed[16], compressed.count() - 16);
+ huffman_error err = encoder.compute_tree_from_histo();
+ if (err != HUFFERR_NONE)
+ throw CHDERR_COMPRESSION_ERROR;
+ err = encoder.export_tree_rle(bitbuf);
+ if (err != HUFFERR_NONE)
+ throw CHDERR_COMPRESSION_ERROR;
+
+ // encode the data
+ for (UINT8 *src = compression_rle; src < dest; src++)
+ encoder.encode_one(bitbuf, *src);
+
+ // determine the number of bits we need to hold a length
+ // and a hunk index
+ UINT8 lengthbits = bits_for_value(max_complen);
+ UINT8 selfbits = bits_for_value(max_self);
+ UINT8 parentbits = bits_for_value(max_parent);
+
+ // for each compression type, output the relevant data
+ lastcomp = 0;
+ count = 0;
+ UINT8 *src = compression_rle;
+ UINT64 firstoffs = 0;
+ for (int hunknum = 0; hunknum < m_hunkcount; hunknum++)
{
- int offset = 0;
- for (int frames=0;frames<CD_FRAMES_PER_HUNK;frames++)
+ UINT8 *rawmap = &m_rawmap[hunknum * 12];
+ UINT32 length = be_read(&rawmap[1], 3);
+ UINT64 offset = be_read(&rawmap[4], 6);
+ UINT16 crc = be_read(&rawmap[10], 2);
+
+ // if no count remaining, fetch the next entry
+ if (count == 0)
{
- int secoff;
- for (secoff=0;secoff<CD_MAX_SECTOR_DATA;secoff++)
- {
- offset++;
- }
- for (secoff=0;secoff<CD_MAX_SUBCODE_DATA;secoff++)
- {
- if (src[offset]!=0x00)
- is_likely_cd = false;
-
- offset++;
- }
+ UINT8 val = *src++;
+ if (val == COMPRESSION_RLE_SMALL)
+ count = 2 + *src++;
+ else if (val == COMPRESSION_RLE_LARGE)
+ count = 2 + 16 + (*src++ << 4), count += *src++;
+ else
+ lastcomp = val;
}
- }
- }
-
- if (is_likely_cd)
- {
- err = CHDERR_COMPRESSION_ERROR;
-
- UINT8* tempram = (UINT8 *)malloc(chd->header.hunkbytes);
- UINT32 tempbytes = 0;
- chd_error temperror = err;
- int tempstrategy = 0;
-
- /* try strategy 0 - zlib */
- if (chd->codecintf->compress != NULL)
- {
- err = (*chd->codecintf->compress)(chd, src, &bytes);
-
- /* store current results and errors */
- memcpy(tempram, chd->compressed, bytes);
- tempbytes = bytes;
- temperror = err;
- tempstrategy = 0;
- }
-
- /* try strategy 1 - flac */
- if (chd->codecintf->secondary_compress != NULL)
- {
- strategy = 1;
-
- err = (*chd->codecintf->secondary_compress)(chd, src, &bytes);
-
- /* check against previous compression attempt if that was successful */
- if (temperror == CHDERR_NONE)
+ else
+ count--;
+
+ // output additional data needed for this entry
+ switch (lastcomp)
{
- /* if the previous compression was better, restore that ... */
- if (bytes>=tempbytes)
- {
- strategy = tempstrategy;
- memcpy(chd->compressed, tempram, tempbytes);
- bytes = tempbytes;
- err = temperror;
- }
+ case COMPRESSION_TYPE_0:
+ case COMPRESSION_TYPE_1:
+ case COMPRESSION_TYPE_2:
+ case COMPRESSION_TYPE_3:
+ assert(length < (1 << lengthbits));
+ bitbuf.write(length, lengthbits);
+ bitbuf.write(crc, 16);
+ if (firstoffs == 0)
+ firstoffs = offset;
+ break;
+
+ case COMPRESSION_NONE:
+ bitbuf.write(crc, 16);
+ if (firstoffs == 0)
+ firstoffs = offset;
+ break;
+
+ case COMPRESSION_SELF:
+ assert(offset < (UINT64(1) << selfbits));
+ bitbuf.write(offset, selfbits);
+ break;
+
+ case COMPRESSION_PARENT:
+ assert(offset < (UINT64(1) << parentbits));
+ bitbuf.write(offset, parentbits);
+ break;
+
+ case COMPRESSION_SELF_0:
+ case COMPRESSION_SELF_1:
+ case COMPRESSION_PARENT_SELF:
+ case COMPRESSION_PARENT_0:
+ case COMPRESSION_PARENT_1:
+ break;
}
}
-
- free(tempram);
+
+ // write the map header
+ UINT32 complen = bitbuf.flush();
+ assert(!bitbuf.overflow());
+ be_write(&compressed[0], complen, 4);
+ be_write(&compressed[4], firstoffs, 6);
+ be_write(&compressed[10], mapcrc, 2);
+ compressed[12] = lengthbits;
+ compressed[13] = selfbits;
+ compressed[14] = parentbits;
+
+ // write the result
+ m_mapoffset = file_append(compressed, complen + 16);
+
+ // then write the map offset
+ UINT8 rawbuf[sizeof(UINT64)];
+ be_write(rawbuf, m_mapoffset, 8);
+ file_write(m_mapoffset_offset, rawbuf, sizeof(rawbuf));
+ return CHDERR_NONE;
}
- else
+ catch (chd_error &err)
{
- /* now try compressing the data */
- err = CHDERR_COMPRESSION_ERROR;
- if (chd->codecintf->compress != NULL)
- err = (*chd->codecintf->compress)(chd, src, &bytes);
+ return err;
}
+}
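
The run-length scheme used above for the stream of compression types is worth restating on its own: each change of type is emitted literally; one or two repeats are also emitted literally, 3 to 18 repeats become COMPRESSION_RLE_SMALL plus a 4-bit (repeats - 3), and longer stretches become COMPRESSION_RLE_LARGE plus two nibbles of (repeats - 19), capped at 274 repeats per escape. A standalone sketch of just the flush step follows; it mirrors the loop above minus the Huffman histogram bookkeeping, and the types, MIN macro, and COMPRESSION_RLE_* constants are the ones used throughout this file (defined outside this hunk).

    // Sketch only: flush "count" pending repeats of "value" into the RLE stream.
    static void rle_flush_sketch(UINT8 *&dest, UINT8 value, int count)
    {
        while (count != 0)
        {
            if (count < 3)                       // too short to pay for an escape code
                *dest++ = value, count--;
            else if (count <= 3 + 15)            // 3..18 repeats: small escape + 4-bit count
            {
                *dest++ = COMPRESSION_RLE_SMALL;
                *dest++ = count - 3;
                count = 0;
            }
            else                                 // longer runs: large escape + 8-bit count, chunked
            {
                int this_count = MIN(count, 3 + 16 + 255);
                *dest++ = COMPRESSION_RLE_LARGE;
                *dest++ = (this_count - 3 - 16) >> 4;
                *dest++ = (this_count - 3 - 16) & 15;
                count -= this_count;
            }
        }
    }
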
- /* if that worked, and we're lossy, decompress and CRC the result */
- if (err == CHDERR_NONE && (chd->codecintf->lossy || src == NULL))
- {
- err = (*chd->codecintf->decompress)(chd, bytes, chd->cache);
- if (err == CHDERR_NONE)
- newentry.crc = crc32(0, chd->cache, chd->header.hunkbytes);
- }
- /* if we succeeded in compressing the data, replace our data pointer and mark it so */
- if (err == CHDERR_NONE)
+//-------------------------------------------------
+// decompress_v5_map - decompress the v5 map
+//-------------------------------------------------
+
+void chd_file::decompress_v5_map()
+{
+ // if no offset, we haven't written it yet
+ if (m_mapoffset == 0)
{
- data = chd->compressed;
- newentry.length = bytes;
+ memset(m_rawmap, 0xff, m_rawmap.count());
+ return;
+ }
- if (strategy == 0)
+ // read the map header
+ UINT8 rawbuf[16];
+ file_read(m_mapoffset, rawbuf, sizeof(rawbuf));
+ UINT32 mapbytes = be_read(&rawbuf[0], 4);
+ UINT64 firstoffs = be_read(&rawbuf[4], 6);
+ UINT16 mapcrc = be_read(&rawbuf[10], 2);
+ UINT8 lengthbits = rawbuf[12];
+ UINT8 selfbits = rawbuf[13];
+ UINT8 parentbits = rawbuf[14];
+
+ // now read the map
+ dynamic_buffer compressed(mapbytes);
+ file_read(m_mapoffset + 16, compressed, mapbytes);
+ bitstream_in bitbuf(compressed, compressed.count());
+
+ // first decode the compression types
+ huffman_decoder<16, 8> decoder;
+ huffman_error err = decoder.import_tree_rle(bitbuf);
+ if (err != HUFFERR_NONE)
+ throw CHDERR_DECOMPRESSION_ERROR;
+ UINT8 lastcomp = 0;
+ int repcount = 0;
+ for (int hunknum = 0; hunknum < m_hunkcount; hunknum++)
+ {
+ UINT8 *rawmap = &m_rawmap[hunknum * 12];
+ if (repcount > 0)
+ rawmap[0] = lastcomp, repcount--;
+ else
{
- newentry.flags = MAP_ENTRY_TYPE_COMPRESSED;
+ UINT8 val = decoder.decode_one(bitbuf);
+ if (val == COMPRESSION_RLE_SMALL)
+ rawmap[0] = lastcomp, repcount = 2 + decoder.decode_one(bitbuf);
+ else if (val == COMPRESSION_RLE_LARGE)
+ rawmap[0] = lastcomp, repcount = 2 + 16 + (decoder.decode_one(bitbuf) << 4), repcount += decoder.decode_one(bitbuf);
+ else
+ rawmap[0] = lastcomp = val;
}
- else if (strategy == 1)
+ }
+
+ // then iterate through the hunks and extract the needed data
+ UINT64 curoffset = firstoffs;
+ UINT32 last_self = 0;
+ UINT64 last_parent = 0;
+ for (int hunknum = 0; hunknum < m_hunkcount; hunknum++)
+ {
+ UINT8 *rawmap = &m_rawmap[hunknum * 12];
+ UINT64 offset = curoffset;
+ UINT32 length = 0;
+ UINT16 crc = 0;
+ switch (rawmap[0])
{
- newentry.flags = MAP_ENTRY_TYPE_2ND_COMPRESSED;
+ // base types
+ case COMPRESSION_TYPE_0:
+ case COMPRESSION_TYPE_1:
+ case COMPRESSION_TYPE_2:
+ case COMPRESSION_TYPE_3:
+ curoffset += length = bitbuf.read(lengthbits);
+ crc = bitbuf.read(16);
+ break;
+
+ case COMPRESSION_NONE:
+ curoffset += length = m_hunkbytes;
+ crc = bitbuf.read(16);
+ break;
+
+ case COMPRESSION_SELF:
+ last_self = offset = bitbuf.read(selfbits);
+ break;
+
+ case COMPRESSION_PARENT:
+ offset = bitbuf.read(parentbits);
+ last_parent = offset;
+ break;
+
+ // pseudo-types; convert into base types
+ case COMPRESSION_SELF_1:
+ last_self++;
+ case COMPRESSION_SELF_0:
+ rawmap[0] = COMPRESSION_SELF;
+ offset = last_self;
+ break;
+
+ case COMPRESSION_PARENT_SELF:
+ rawmap[0] = COMPRESSION_PARENT;
+ last_parent = offset = (UINT64(hunknum) * UINT64(m_hunkbytes)) / m_unitbytes;
+ break;
+
+ case COMPRESSION_PARENT_1:
+ last_parent += m_hunkbytes / m_unitbytes;
+ case COMPRESSION_PARENT_0:
+ rawmap[0] = COMPRESSION_PARENT;
+ offset = last_parent;
+ break;
+ }
+ be_write(&rawmap[1], length, 3);
+ be_write(&rawmap[4], offset, 6);
+ be_write(&rawmap[10], crc, 2);
+ }
+
+ // verify the final CRC
+ if (crc16_creator::simple(m_rawmap, m_hunkcount * 12) != mapcrc)
+ throw CHDERR_DECOMPRESSION_ERROR;
+}
+
+
+//-------------------------------------------------
+// create_common - common path when creating a
+// new CHD file
+//-------------------------------------------------
+
+chd_error chd_file::create_common()
+{
+ // wrap in try for proper error handling
+ try
+ {
+ // if we have a parent, it must be V3 or later
+ if (m_parent != NULL && m_parent->version() < 3)
+ throw CHDERR_UNSUPPORTED_VERSION;
+
+ // the hunk size must be a whole multiple of the unit size
+ if (m_hunkbytes % m_unitbytes != 0)
+ throw CHDERR_INVALID_PARAMETER;
+ if (m_parent != NULL && m_unitbytes != m_parent->unit_bytes())
+ throw CHDERR_INVALID_PARAMETER;
+
+ // writes are obviously permitted; reads only if uncompressed
+ m_allow_writes = true;
+ m_allow_reads = !compressed();
+
+ // verify the compression types
+ bool found_zero = false;
+ for (int codecnum = 0; codecnum < ARRAY_LENGTH(m_compression); codecnum++)
+ {
+ // once we hit an empty slot, all later slots must be empty as well
+ if (m_compression[codecnum] == CHD_CODEC_NONE)
+ found_zero = true;
+ else if (found_zero)
+ throw CHDERR_INVALID_PARAMETER;
+ else if (!chd_codec_list::codec_exists(m_compression[codecnum]))
+ throw CHDERR_UNKNOWN_COMPRESSION;
}
- }
-
- /* otherwise, mark it uncompressed and use the original data */
- else
- {
- newentry.length = chd->header.hunkbytes;
- newentry.flags = MAP_ENTRY_TYPE_UNCOMPRESSED;
- }
-
-
- /* if the data doesn't fit into the previous entry, make a new one at the eof */
- newentry.offset = entry->offset;
- if (newentry.offset == 0 || newentry.length > entry->length)
- newentry.offset = core_fsize(chd->file);
-
- /* write the data */
- core_fseek(chd->file, newentry.offset, SEEK_SET);
- bytes = core_fwrite(chd->file, data, newentry.length);
- if (bytes != newentry.length)
- return CHDERR_WRITE_ERROR;
-
- /* update the entry in memory */
-write_entry:
-
- if (is_half_hunk)
- newentry.flags |= MAP_ENTRY_FLAG_HALF_HUNK;
-
- *entry = newentry;
-
- /* update the map on file */
- map_assemble(&fileentry[0], &chd->map[hunknum]);
- core_fseek(chd->file, chd->header.length + hunknum * sizeof(fileentry), SEEK_SET);
- bytes = core_fwrite(chd->file, &fileentry[0], sizeof(fileentry));
- if (bytes != sizeof(fileentry))
- return CHDERR_WRITE_ERROR;
-
- return CHDERR_NONE;
-}
-
-
-
-/***************************************************************************
- INTERNAL MAP ACCESS
-***************************************************************************/
-/*-------------------------------------------------
- map_write_initial - write an initial map to
- a new CHD file
--------------------------------------------------*/
-
-static chd_error map_write_initial(core_file *file, chd_file *parent, const chd_header *header)
-{
- UINT8 blank_map_entries[MAP_STACK_ENTRIES * MAP_ENTRY_SIZE];
- int fullchunks, remainder, count, i, j;
- map_entry mapentry;
- UINT64 fileoffset;
-
- /* create a mini hunk of 0's */
- mapentry.offset = 0;
- mapentry.crc = 0;
- mapentry.length = 0;
- mapentry.flags = MAP_ENTRY_TYPE_MINI | MAP_ENTRY_FLAG_NO_CRC;
- for (i = 0; i < MAP_STACK_ENTRIES; i++)
- map_assemble(&blank_map_entries[i * MAP_ENTRY_SIZE], &mapentry);
-
- /* prepare to write a blank hunk map immediately following */
- fileoffset = header->length;
- fullchunks = header->totalhunks / MAP_STACK_ENTRIES;
- remainder = header->totalhunks % MAP_STACK_ENTRIES;
-
- /* first write full chunks of blank entries */
- for (i = 0; i < fullchunks; i++)
- {
- /* parent drives need to be mapped through */
- if (parent != NULL)
- for (j = 0; j < MAP_STACK_ENTRIES; j++)
+ // create our V5 header
+ assert(m_version == HEADER_VERSION);
+ UINT8 rawheader[V5_HEADER_SIZE];
+ memcpy(&rawheader[0], "MComprHD", 8);
+ be_write(&rawheader[8], V5_HEADER_SIZE, 4);
+ be_write(&rawheader[12], m_version, 4);
+ be_write(&rawheader[16], m_compression[0], 4);
+ be_write(&rawheader[20], m_compression[1], 4);
+ be_write(&rawheader[24], m_compression[2], 4);
+ be_write(&rawheader[28], m_compression[3], 4);
+ be_write(&rawheader[32], m_logicalbytes, 8);
+ be_write(&rawheader[40], compressed() ? 0 : V5_HEADER_SIZE, 8);
+ be_write(&rawheader[48], m_metaoffset, 8);
+ be_write(&rawheader[56], m_hunkbytes, 4);
+ be_write(&rawheader[60], m_unitbytes, 4);
+ be_write_sha1(&rawheader[64], sha1_t::null);
+ be_write_sha1(&rawheader[84], sha1_t::null);
+ be_write_sha1(&rawheader[104], (m_parent != NULL) ? m_parent->sha1() : sha1_t::null);
+
+ // write the resulting header
+ file_write(0, rawheader, sizeof(rawheader));
+
+ // parse it back out to set up fields appropriately
+ sha1_t parentsha1;
+ parse_v5_header(rawheader, parentsha1);
+
+ // write out the map (if not compressed)
+ if (!compressed())
+ {
+ UINT32 mapsize = m_mapentrybytes * m_hunkcount;
+ UINT8 buffer[4096] = { 0 };
+ UINT64 offset = m_mapoffset;
+ while (mapsize != 0)
{
- mapentry.offset = i * MAP_STACK_ENTRIES + j;
- mapentry.crc = parent->map[i * MAP_STACK_ENTRIES + j].crc;
- mapentry.flags = MAP_ENTRY_TYPE_PARENT_HUNK;
- map_assemble(&blank_map_entries[j * MAP_ENTRY_SIZE], &mapentry);
+ UINT32 bytes_to_write = MIN(mapsize, sizeof(buffer));
+ file_write(offset, buffer, bytes_to_write);
+ offset += bytes_to_write;
+ mapsize -= bytes_to_write;
}
+ }
- /* write the chunks */
- core_fseek(file, fileoffset, SEEK_SET);
- count = core_fwrite(file, blank_map_entries, sizeof(blank_map_entries));
- if (count != sizeof(blank_map_entries))
- return CHDERR_WRITE_ERROR;
- fileoffset += sizeof(blank_map_entries);
+ // finish opening the file
+ create_open_common();
}
-
- /* then write the remainder */
- if (remainder > 0)
+
+ // handle errors by closing ourself
+ catch (chd_error &err)
{
- /* parent drives need to be mapped through */
- if (parent != NULL)
- for (j = 0; j < remainder; j++)
- {
- mapentry.offset = i * MAP_STACK_ENTRIES + j;
- mapentry.crc = parent->map[i * MAP_STACK_ENTRIES + j].crc;
- mapentry.flags = MAP_ENTRY_TYPE_PARENT_HUNK;
- map_assemble(&blank_map_entries[j * MAP_ENTRY_SIZE], &mapentry);
- }
-
- /* write the chunks */
- core_fseek(file, fileoffset, SEEK_SET);
- count = core_fwrite(file, blank_map_entries, remainder * MAP_ENTRY_SIZE);
- if (count != remainder * MAP_ENTRY_SIZE)
- return CHDERR_WRITE_ERROR;
- fileoffset += remainder * MAP_ENTRY_SIZE;
+ close();
+ return err;
+ }
+ catch (...)
+ {
+ close();
+ throw;
}
-
- /* then write a special end-of-list cookie */
- memcpy(&blank_map_entries[0], END_OF_LIST_COOKIE, MAP_ENTRY_SIZE);
- core_fseek(file, fileoffset, SEEK_SET);
- count = core_fwrite(file, blank_map_entries, MAP_ENTRY_SIZE);
- if (count != MAP_ENTRY_SIZE)
- return CHDERR_WRITE_ERROR;
-
return CHDERR_NONE;
}
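
// For reference, the byte offsets written into rawheader above give the full
// V5 header layout (all multi-byte fields big-endian); the SHA-1 slots at 64
// and 84 hold the raw-data and combined raw+metadata hashes, and 104 the
// parent's. A minimal sketch of that layout as constants (names are illustrative only):
struct v5_header_offsets
{
	static const int TAG           = 0;    // "MComprHD", 8 bytes
	static const int HEADER_LENGTH = 8;    // UINT32, total header size
	static const int VERSION       = 12;   // UINT32, == 5
	static const int COMPRESSORS   = 16;   // 4 x UINT32 codec tags
	static const int LOGICAL_BYTES = 32;   // UINT64 logical size
	static const int MAP_OFFSET    = 40;   // UINT64; 0 for compressed files until the map is written
	static const int META_OFFSET   = 48;   // UINT64 offset of the first metadata entry
	static const int HUNK_BYTES    = 56;   // UINT32 bytes per hunk
	static const int UNIT_BYTES    = 60;   // UINT32 bytes per unit
	static const int RAW_SHA1      = 64;   // 20 bytes
	static const int OVERALL_SHA1  = 84;   // 20 bytes
	static const int PARENT_SHA1   = 104;  // 20 bytes
	static const int TOTAL         = 124;  // 104 + 20, matching V5_HEADER_SIZE
};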
-/*-------------------------------------------------
- map_read - read the initial sector map
--------------------------------------------------*/
-
-static chd_error map_read(chd_file *chd)
-{
- UINT32 entrysize = (chd->header.version < 3) ? OLD_MAP_ENTRY_SIZE : MAP_ENTRY_SIZE;
- UINT8 raw_map_entries[MAP_STACK_ENTRIES * MAP_ENTRY_SIZE];
- UINT64 fileoffset, maxoffset = 0;
- UINT8 cookie[MAP_ENTRY_SIZE];
- UINT32 count;
- chd_error err;
- int i;
-
- /* first allocate memory */
- chd->map = (map_entry *)malloc(sizeof(chd->map[0]) * chd->header.totalhunks);
- if (!chd->map)
- return CHDERR_OUT_OF_MEMORY;
-
- /* read the map entries in chunks and extract to the map list */
- fileoffset = chd->header.length;
- for (i = 0; i < chd->header.totalhunks; i += MAP_STACK_ENTRIES)
- {
- /* compute how many entries this time */
- int entries = chd->header.totalhunks - i, j;
- if (entries > MAP_STACK_ENTRIES)
- entries = MAP_STACK_ENTRIES;
-
- /* read that many */
- core_fseek(chd->file, fileoffset, SEEK_SET);
- count = core_fread(chd->file, raw_map_entries, entries * entrysize);
- if (count != entries * entrysize)
- {
- err = CHDERR_READ_ERROR;
- goto cleanup;
- }
- fileoffset += entries * entrysize;
-
- /* process that many */
- if (entrysize == MAP_ENTRY_SIZE)
+//-------------------------------------------------
+// open_common - common path when opening an
+// existing CHD file for input
+//-------------------------------------------------
+
+chd_error chd_file::open_common(bool writeable)
+{
+ // wrap in try for proper error handling
+ try
+ {
+ // reads are always permitted; writes possibly as well
+ m_allow_reads = true;
+ m_allow_writes = writeable;
+
+ // read the raw header
+ UINT8 rawheader[MAX_HEADER_SIZE];
+ file_read(0, rawheader, sizeof(rawheader));
+
+ // verify the signature
+ if (memcmp(rawheader, "MComprHD", 8) != 0)
+ throw CHDERR_INVALID_FILE;
+
+ // only allow writes to the most recent version
+ m_version = be_read(&rawheader[12], 4);
+ if (m_allow_writes && m_version < HEADER_VERSION)
+ throw CHDERR_UNSUPPORTED_VERSION;
+
+ // read the header if we support it
+ sha1_t parentsha1 = sha1_t::null;
+ switch (m_version)
{
- for (j = 0; j < entries; j++)
- map_extract(&raw_map_entries[j * MAP_ENTRY_SIZE], &chd->map[i + j]);
+ case 3: parse_v3_header(rawheader, parentsha1); break;
+ case 4: parse_v4_header(rawheader, parentsha1); break;
+ case 5: parse_v5_header(rawheader, parentsha1); break;
+ default: throw CHDERR_UNSUPPORTED_VERSION;
}
- else
+
+ // make sure we have a parent if we need one (and don't if we don't)
+ if (parentsha1 != sha1_t::null)
{
- for (j = 0; j < entries; j++)
- map_extract_old(&raw_map_entries[j * OLD_MAP_ENTRY_SIZE], &chd->map[i + j], chd->header.hunkbytes);
+ if (m_parent == NULL)
+ m_parent_missing = true;
+ else if (m_parent->sha1() != parentsha1)
+ throw CHDERR_INVALID_PARENT;
}
-
- /* track the maximum offset */
- for (j = 0; j < entries; j++)
- if ((chd->map[i + j].flags & MAP_ENTRY_FLAG_TYPE_MASK) == MAP_ENTRY_TYPE_COMPRESSED ||
- (chd->map[i + j].flags & MAP_ENTRY_FLAG_TYPE_MASK) == MAP_ENTRY_TYPE_2ND_COMPRESSED ||
- (chd->map[i + j].flags & MAP_ENTRY_FLAG_TYPE_MASK) == MAP_ENTRY_TYPE_UNCOMPRESSED)
- maxoffset = MAX(maxoffset, chd->map[i + j].offset + chd->map[i + j].length);
+ else if (parentsha1 == sha1_t::null && m_parent != NULL)
+ throw CHDERR_INVALID_PARAMETER;
+
+ // finish opening the file
+ create_open_common();
+ return CHDERR_NONE;
}
-
- /* verify the cookie */
- core_fseek(chd->file, fileoffset, SEEK_SET);
- count = core_fread(chd->file, &cookie, entrysize);
- if (count != entrysize || memcmp(&cookie, END_OF_LIST_COOKIE, entrysize))
+
+ // handle errors by closing ourself
+ catch (chd_error &err)
{
- err = CHDERR_INVALID_FILE;
- goto cleanup;
- }
-
- /* verify the length */
- if (maxoffset > core_fsize(chd->file))
- {
- err = CHDERR_INVALID_FILE;
- goto cleanup;
+ close();
+ return err;
}
- return CHDERR_NONE;
-
-cleanup:
- if (chd->map)
- free(chd->map);
- chd->map = NULL;
- return err;
}
+//-------------------------------------------------
+// create_open_common - common code for handling
+// creation and opening of a file
+//-------------------------------------------------
-/***************************************************************************
- INTERNAL CRC MAP ACCESS
-***************************************************************************/
-
-/*-------------------------------------------------
- crcmap_init - initialize the CRC map
--------------------------------------------------*/
-
-static void crcmap_init(chd_file *chd, int prepopulate)
+void chd_file::create_open_common()
{
- int i;
-
- /* if we already have one, bail */
- if (chd->crcmap != NULL)
- return;
-
- /* reset all pointers */
- chd->crcmap = NULL;
- chd->crcfree = NULL;
- chd->crctable = NULL;
-
- /* allocate a list; one for each hunk */
- chd->crcmap = (crcmap_entry *)malloc(chd->header.totalhunks * sizeof(chd->crcmap[0]));
- if (chd->crcmap == NULL)
- return;
-
- /* allocate a CRC map table */
- chd->crctable = (crcmap_entry **)malloc(CRCMAP_HASH_SIZE * sizeof(chd->crctable[0]));
- if (chd->crctable == NULL)
- {
- free(chd->crcmap);
- chd->crcmap = NULL;
- return;
- }
-
- /* initialize the free list */
- for (i = 0; i < chd->header.totalhunks; i++)
+ // verify the compression types and initialize the codecs
+ for (int decompnum = 0; decompnum < ARRAY_LENGTH(m_compression); decompnum++)
{
- chd->crcmap[i].next = chd->crcfree;
- chd->crcfree = &chd->crcmap[i];
+ m_decompressor[decompnum] = chd_codec_list::new_decompressor(m_compression[decompnum], *this);
+ if (m_decompressor[decompnum] == NULL && m_compression[decompnum] != 0)
+ throw CHDERR_UNKNOWN_COMPRESSION;
}
+
+ // read the map; v5+ compressed drives need to read and decompress their map
+ m_rawmap.resize(m_hunkcount * m_mapentrybytes);
+ if (m_version >= 5 && compressed())
+ decompress_v5_map();
+ else
+ file_read(m_mapoffset, m_rawmap, m_rawmap.count());
- /* initialize the table */
- memset(chd->crctable, 0, CRCMAP_HASH_SIZE * sizeof(chd->crctable[0]));
-
- /* if we're to prepopulate, go for it */
- if (prepopulate)
- for (i = 0; i < chd->header.totalhunks; i++)
- crcmap_add_entry(chd, i);
+ // allocate the temporary compressed buffer and a buffer for caching
+ m_compressed.resize(m_hunkbytes);
+ m_cache.resize(m_hunkbytes);
}
-/*-------------------------------------------------
- crcmap_add_entry - log a CRC entry
--------------------------------------------------*/
+//-------------------------------------------------
+// verify_proper_compression_append - verify that
+// the given hunk is a proper candidate for
+// appending to a compressed CHD
+//-------------------------------------------------
-static void crcmap_add_entry(chd_file *chd, UINT32 hunknum)
+void chd_file::verify_proper_compression_append(UINT32 hunknum)
{
- UINT32 hash = chd->map[hunknum].crc % CRCMAP_HASH_SIZE;
- crcmap_entry *crcmap;
-
- /* pull a free entry off the list */
- crcmap = chd->crcfree;
- chd->crcfree = crcmap->next;
+ // punt if no file
+ if (m_file == NULL)
+ throw CHDERR_NOT_OPEN;
- /* set up the entry and link it into the hash table */
- crcmap->hunknum = hunknum;
- crcmap->next = chd->crctable[hash];
- chd->crctable[hash] = crcmap;
-}
+ // return an error if out of range
+ if (hunknum >= m_hunkcount)
+ throw CHDERR_HUNK_OUT_OF_RANGE;
+ // if not writeable, fail
+ if (!m_allow_writes)
+ throw CHDERR_FILE_NOT_WRITEABLE;
-/*-------------------------------------------------
- crcmap_verify_hunk_match - verify that a
- hunk really matches by doing a byte-for-byte
- compare
--------------------------------------------------*/
+ // compressed writes only via this interface
+ if (!compressed())
+ throw CHDERR_FILE_NOT_WRITEABLE;
-static int crcmap_verify_hunk_match(chd_file *chd, UINT32 hunknum, const UINT8 *rawdata)
-{
- /* we have a potential match -- better be sure */
- /* read the hunk from disk and compare byte-for-byte */
- if (hunknum != chd->comparehunk)
- {
- chd->comparehunk = ~0;
- if (hunk_read_into_memory(chd, hunknum, chd->compare) == CHDERR_NONE)
- chd->comparehunk = hunknum;
- }
- return (hunknum == chd->comparehunk && memcmp(rawdata, chd->compare, chd->header.hunkbytes) == 0);
+ // only permitted to write new blocks
+ UINT8 *rawmap = &m_rawmap[hunknum * 12];
+ if (rawmap[0] != 0xff)
+ throw CHDERR_COMPRESSION_ERROR;
+
+ // if this isn't the first block, only permitted to write immediately
+ // after the previous one
+ if (hunknum != 0 && rawmap[-12] == 0xff)
+ throw CHDERR_COMPRESSION_ERROR;
}
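
// The rule enforced above, as a standalone sketch: a compressed CHD is
// append-only, so a hunk may be written only if its 12-byte map entry is still
// unwritten (type byte 0xff) and -- except for hunk 0 -- the previous hunk has
// already been written.
#include <cstdint>

static bool can_append_hunk(const uint8_t *rawmap, uint32_t hunknum)
{
	const uint8_t *entry = &rawmap[hunknum * 12];
	if (entry[0] != 0xff)                       // this hunk is already written
		return false;
	if (hunknum != 0 && entry[-12] == 0xff)     // previous hunk not written yet
		return false;
	return true;
}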
+
+//-------------------------------------------------
+// hunk_write_compressed - write a hunk to a
+// compressed CHD, discovering the best
+// technique
+//-------------------------------------------------
-/*-------------------------------------------------
- crcmap_find_hunk - find a hunk with a matching
- CRC in the map
--------------------------------------------------*/
-
-static UINT32 crcmap_find_hunk(chd_file *chd, UINT32 hunknum, UINT32 crc, const UINT8 *rawdata)
+void chd_file::hunk_write_compressed(UINT32 hunknum, INT8 compression, const UINT8 *compressed, UINT32 complength, crc16_t crc16)
{
- UINT32 lasthunk = (hunknum < chd->header.totalhunks) ? hunknum : chd->header.totalhunks;
- int curhunk;
-
- /* if we have a CRC map, use that */
- if (chd->crctable)
- {
- crcmap_entry *curentry;
- for (curentry = chd->crctable[crc % CRCMAP_HASH_SIZE]; curentry; curentry = curentry->next)
- {
- curhunk = curentry->hunknum;
- if (chd->map[curhunk].crc == crc && !(chd->map[curhunk].flags & MAP_ENTRY_FLAG_NO_CRC) && !(chd->map[curhunk].flags & MAP_ENTRY_FLAG_HALF_HUNK) && crcmap_verify_hunk_match(chd, curhunk, rawdata))
- return curhunk;
- }
- return NO_MATCH;
- }
-
- /* first see if the last match is a valid one */
- if (chd->comparehunk < chd->header.totalhunks && chd->map[chd->comparehunk].crc == crc && !(chd->map[chd->comparehunk].flags & MAP_ENTRY_FLAG_NO_CRC) && !(chd->map[chd->comparehunk].flags & MAP_ENTRY_FLAG_HALF_HUNK) &&
- memcmp(rawdata, chd->compare, chd->header.hunkbytes) == 0)
- return chd->comparehunk;
-
- /* scan through the CHD's hunk map looking for a match */
- for (curhunk = 0; curhunk < lasthunk; curhunk++)
- if (chd->map[curhunk].crc == crc && !(chd->map[curhunk].flags & MAP_ENTRY_FLAG_NO_CRC) && !(chd->map[curhunk].flags & MAP_ENTRY_FLAG_HALF_HUNK) && crcmap_verify_hunk_match(chd, curhunk, rawdata))
- return curhunk;
-
- return NO_MATCH;
+ // verify that we are appending properly to a compressed file
+ verify_proper_compression_append(hunknum);
+
+ // write the final result
+ UINT64 offset = file_append(compressed, complength);
+
+ // update the map entry
+ UINT8 *rawmap = &m_rawmap[hunknum * 12];
+ rawmap[0] = (compression == -1) ? COMPRESSION_NONE : compression;
+ be_write(&rawmap[1], complength, 3);
+ be_write(&rawmap[4], offset, 6);
+ be_write(&rawmap[10], crc16, 2);
}
+//-------------------------------------------------
+// hunk_copy_from_self - mark a hunk as being a
+// copy of another hunk in the same CHD
+//-------------------------------------------------
-/***************************************************************************
- INTERNAL METADATA ACCESS
-***************************************************************************/
-
-/*-------------------------------------------------
- metadata_find_entry - find a metadata entry
--------------------------------------------------*/
-
-static chd_error metadata_find_entry(chd_file *chd, UINT32 metatag, UINT32 metaindex, metadata_entry *metaentry)
+void chd_file::hunk_copy_from_self(UINT32 hunknum, UINT32 otherhunk)
{
- /* start at the beginning */
- metaentry->offset = chd->header.metaoffset;
- metaentry->prev = 0;
-
- /* loop until we run out of options */
- while (metaentry->offset != 0)
- {
- UINT8 raw_meta_header[METADATA_HEADER_SIZE];
- UINT32 count;
+ // verify that we are appending properly to a compressed file
+ verify_proper_compression_append(hunknum);
+
+ // only permitted to reference prior hunks
+ if (otherhunk >= hunknum)
+ throw CHDERR_INVALID_PARAMETER;
- /* read the raw header */
- core_fseek(chd->file, metaentry->offset, SEEK_SET);
- count = core_fread(chd->file, raw_meta_header, sizeof(raw_meta_header));
- if (count != sizeof(raw_meta_header))
- break;
-
- /* extract the data */
- metaentry->metatag = get_bigendian_uint32(&raw_meta_header[0]);
- metaentry->length = get_bigendian_uint32(&raw_meta_header[4]);
- metaentry->next = get_bigendian_uint64(&raw_meta_header[8]);
+ // update the map entry
+ UINT8 *rawmap = &m_rawmap[hunknum * 12];
+ rawmap[0] = COMPRESSION_SELF;
+ be_write(&rawmap[1], 0, 3);
+ be_write(&rawmap[4], otherhunk, 6);
+ be_write(&rawmap[10], 0, 2);
+}
- /* flags are encoded in the high byte of length */
- metaentry->flags = metaentry->length >> 24;
- metaentry->length &= 0x00ffffff;
- /* if we got a match, proceed */
- if (metatag == CHDMETATAG_WILDCARD || metaentry->metatag == metatag)
- if (metaindex-- == 0)
- return CHDERR_NONE;
+//-------------------------------------------------
+// hunk_copy_from_parent - mark a hunk as being a
+// copy of a hunk from a parent CHD
+//-------------------------------------------------
- /* no match, fetch the next link */
- metaentry->prev = metaentry->offset;
- metaentry->offset = metaentry->next;
- }
+void chd_file::hunk_copy_from_parent(UINT32 hunknum, UINT64 parentunit)
+{
+ // verify that we are appending properly to a compressed file
+ verify_proper_compression_append(hunknum);
- /* if we get here, we didn't find it */
- return CHDERR_METADATA_NOT_FOUND;
+ // update the map entry
+ UINT8 *rawmap = &m_rawmap[hunknum * 12];
+ rawmap[0] = COMPRESSION_PARENT;
+ be_write(&rawmap[1], 0, 3);
+ be_write(&rawmap[4], parentunit, 6);
+ be_write(&rawmap[10], 0, 2);
}
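
// Note the difference between the two reference types above: a self reference
// stores a hunk number in the offset field, while a parent reference stores a
// unit number. A sketch of the conversion used elsewhere in this file:
#include <cstdint>

static uint64_t parent_unit_for_hunk(uint32_t hunknum, uint32_t hunkbytes, uint32_t unitbytes)
{
	// units per hunk is hunkbytes / unitbytes (hunkbytes is a whole multiple of unitbytes)
	return (uint64_t(hunknum) * uint64_t(hunkbytes)) / unitbytes;
}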
-/*-------------------------------------------------
- metadata_set_previous_next - set the 'next'
- offset of a piece of metadata
--------------------------------------------------*/
+//-------------------------------------------------
+// metadata_find - find a metadata entry
+//-------------------------------------------------
-static chd_error metadata_set_previous_next(chd_file *chd, UINT64 prevoffset, UINT64 nextoffset)
+bool chd_file::metadata_find(chd_metadata_tag metatag, INT32 metaindex, metadata_entry &metaentry, bool resume)
{
- UINT8 raw_meta_header[METADATA_HEADER_SIZE];
- chd_error err;
- UINT32 count;
-
- /* if we were the first entry, make the next entry the first */
- if (prevoffset == 0)
+ // start at the beginning unless we're resuming a previous search
+ if (!resume)
{
- chd->header.metaoffset = nextoffset;
- err = header_write(chd->file, &chd->header);
- if (err != CHDERR_NONE)
- return err;
+ metaentry.offset = m_metaoffset;
+ metaentry.prev = 0;
}
-
- /* otherwise, update the link in the previous pointer */
else
{
- /* read the previous raw header */
- core_fseek(chd->file, prevoffset, SEEK_SET);
- count = core_fread(chd->file, raw_meta_header, sizeof(raw_meta_header));
- if (count != sizeof(raw_meta_header))
- return CHDERR_READ_ERROR;
-
- /* copy our next pointer into the previous->next offset */
- put_bigendian_uint64(&raw_meta_header[8], nextoffset);
-
- /* write the previous raw header */
- core_fseek(chd->file, prevoffset, SEEK_SET);
- count = core_fwrite(chd->file, raw_meta_header, sizeof(raw_meta_header));
- if (count != sizeof(raw_meta_header))
- return CHDERR_WRITE_ERROR;
+ metaentry.prev = metaentry.offset;
+ metaentry.offset = metaentry.next;
}
- return CHDERR_NONE;
-}
-
-
-/*-------------------------------------------------
- metadata_set_length - set the length field of
- a piece of metadata
--------------------------------------------------*/
-
-static chd_error metadata_set_length(chd_file *chd, UINT64 offset, UINT32 length)
-{
- UINT8 raw_meta_header[METADATA_HEADER_SIZE];
- UINT32 oldlength;
- UINT32 count;
-
- /* read the raw header */
- core_fseek(chd->file, offset, SEEK_SET);
- count = core_fread(chd->file, raw_meta_header, sizeof(raw_meta_header));
- if (count != sizeof(raw_meta_header))
- return CHDERR_READ_ERROR;
-
- /* update the length at offset 4, preserving the flags in the upper byte */
- oldlength = get_bigendian_uint32(&raw_meta_header[4]);
- length = (length & 0x00ffffff) | (oldlength & 0xff000000);
- put_bigendian_uint32(&raw_meta_header[4], length);
-
- /* write the raw header */
- core_fseek(chd->file, offset, SEEK_SET);
- count = core_fwrite(chd->file, raw_meta_header, sizeof(raw_meta_header));
- if (count != sizeof(raw_meta_header))
- return CHDERR_WRITE_ERROR;
-
- return CHDERR_NONE;
-}
-
-
-/*-------------------------------------------------
- metadata_compute_hash - compute the SHA1
- hash of all metadata that requests it
--------------------------------------------------*/
-
-static chd_error metadata_compute_hash(chd_file *chd, const UINT8 *rawsha1, UINT8 *finalsha1)
-{
- metadata_hash *hasharray = NULL;
- chd_error err = CHDERR_NONE;
- struct sha1_ctx sha1;
- UINT32 hashindex = 0;
- UINT32 hashalloc = 0;
- UINT64 offset, next;
-
- /* only works for V4 and above */
- if (chd->header.version < 4)
- {
- memcpy(finalsha1, rawsha1, SHA1_DIGEST_SIZE);
- return CHDERR_NONE;
- }
-
- /* loop until we run out of data */
- for (offset = chd->header.metaoffset; offset != 0; offset = next)
+ // loop until we run out of options
+ while (metaentry.offset != 0)
{
+ // read the raw header
UINT8 raw_meta_header[METADATA_HEADER_SIZE];
- UINT32 count, metalength, metatag;
- UINT8 *tempbuffer;
- UINT8 metaflags;
-
- /* read the raw header */
- core_fseek(chd->file, offset, SEEK_SET);
- count = core_fread(chd->file, raw_meta_header, sizeof(raw_meta_header));
- if (count != sizeof(raw_meta_header))
- break;
-
- /* extract the data */
- metatag = get_bigendian_uint32(&raw_meta_header[0]);
- metalength = get_bigendian_uint32(&raw_meta_header[4]);
- next = get_bigendian_uint64(&raw_meta_header[8]);
-
- /* flags are encoded in the high byte of length */
- metaflags = metalength >> 24;
- metalength &= 0x00ffffff;
-
- /* if not checksumming, continue */
- if (!(metaflags & CHD_MDFLAGS_CHECKSUM))
- continue;
+ file_read(metaentry.offset, raw_meta_header, sizeof(raw_meta_header));
- /* allocate memory */
- tempbuffer = (UINT8 *)malloc(metalength);
- if (tempbuffer == NULL)
- {
- err = CHDERR_OUT_OF_MEMORY;
- goto cleanup;
- }
-
- /* seek and read the metadata */
- core_fseek(chd->file, offset + METADATA_HEADER_SIZE, SEEK_SET);
- count = core_fread(chd->file, tempbuffer, metalength);
- if (count != metalength)
- {
- free(tempbuffer);
- err = CHDERR_READ_ERROR;
- goto cleanup;
- }
-
- /* compute this entry's hash */
- sha1_init(&sha1);
- sha1_update(&sha1, metalength, tempbuffer);
- sha1_final(&sha1);
- free(tempbuffer);
+ // extract the data
+ metaentry.metatag = be_read(&raw_meta_header[0], 4);
+ metaentry.flags = raw_meta_header[4];
+ metaentry.length = be_read(&raw_meta_header[5], 3);
+ metaentry.next = be_read(&raw_meta_header[8], 8);
- /* expand the hasharray if necessary */
- if (hashindex >= hashalloc)
- {
- hashalloc += 256;
- hasharray = (metadata_hash *)realloc(hasharray, hashalloc * sizeof(hasharray[0]));
- if (hasharray == NULL)
- {
- err = CHDERR_OUT_OF_MEMORY;
- goto cleanup;
- }
- }
+ // if we got a match, proceed
+ if (metatag == CHDMETATAG_WILDCARD || metaentry.metatag == metatag)
+ if (metaindex-- == 0)
+ return true;
- /* fill in the entry */
- put_bigendian_uint32(hasharray[hashindex].tag, metatag);
- sha1_digest(&sha1, SHA1_DIGEST_SIZE, hasharray[hashindex].sha1);
- hashindex++;
+ // no match, fetch the next link
+ metaentry.prev = metaentry.offset;
+ metaentry.offset = metaentry.next;
}
- /* sort the array */
- qsort(hasharray, hashindex, sizeof(hasharray[0]), metadata_hash_compare);
-
- /* compute the SHA1 of the raw plus the various metadata */
- sha1_init(&sha1);
- sha1_update(&sha1, CHD_SHA1_BYTES, rawsha1);
- sha1_update(&sha1, hashindex * sizeof(hasharray[0]), (const UINT8 *)hasharray);
- sha1_final(&sha1);
- sha1_digest(&sha1, SHA1_DIGEST_SIZE, finalsha1);
-
-cleanup:
- if (hasharray != NULL)
- free(hasharray);
- return err;
+ // if we get here, we didn't find it
+ return false;
}
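
// The metadata chain walked above is a simple on-disk linked list: each entry
// begins with a 16-byte header -- bytes 0-3 tag, byte 4 flags, bytes 5-7 a
// 24-bit length, bytes 8-15 the offset of the next entry (0 ends the list).
// A standalone sketch of parsing that header:
#include <cstdint>

struct meta_header_sketch { uint32_t tag; uint8_t flags; uint32_t length; uint64_t next; };

static meta_header_sketch parse_meta_header(const uint8_t raw[16])
{
	meta_header_sketch h;
	h.tag    = (uint32_t(raw[0]) << 24) | (uint32_t(raw[1]) << 16) | (uint32_t(raw[2]) << 8) | raw[3];
	h.flags  = raw[4];
	h.length = (uint32_t(raw[5]) << 16) | (uint32_t(raw[6]) << 8) | raw[7];
	h.next   = 0;
	for (int i = 8; i < 16; i++)
		h.next = (h.next << 8) | raw[i];
	return h;
}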
-/*-------------------------------------------------
- metadata_hash_compare - compare two hash
- entries
--------------------------------------------------*/
+//-------------------------------------------------
+// metadata_set_previous_next - set the 'next'
+// offset of a piece of metadata
+//-------------------------------------------------
-static int CLIB_DECL metadata_hash_compare(const void *elem1, const void *elem2)
+void chd_file::metadata_set_previous_next(UINT64 prevoffset, UINT64 nextoffset)
{
- return memcmp(elem1, elem2, sizeof(metadata_hash));
-}
-
-
-
-/***************************************************************************
- ZLIB COMPRESSION CODEC
-***************************************************************************/
-
-/*-------------------------------------------------
- zlib_codec_init - initialize the ZLIB codec
--------------------------------------------------*/
-
-static chd_error zlib_codec_init(chd_file *chd)
-{
- zlib_codec_data *data;
- chd_error err;
- int zerr;
-
- /* allocate memory for the 2 stream buffers */
- data = (zlib_codec_data *)malloc(sizeof(*data));
- if (data == NULL)
- return CHDERR_OUT_OF_MEMORY;
+ UINT64 offset = 0;
- /* clear the buffers */
- memset(data, 0, sizeof(*data));
-
- /* init the inflater first */
- data->inflater.next_in = (Bytef *)data; /* bogus, but that's ok */
- data->inflater.avail_in = 0;
- data->inflater.zalloc = zlib_fast_alloc;
- data->inflater.zfree = zlib_fast_free;
- data->inflater.opaque = data;
- zerr = inflateInit2(&data->inflater, -MAX_WBITS);
-
- /* if that worked, initialize the deflater */
- if (zerr == Z_OK)
+ // if we were the first entry, make the next entry the first
+ if (prevoffset == 0)
{
- data->deflater.next_in = (Bytef *)data; /* bogus, but that's ok */
- data->deflater.avail_in = 0;
- data->deflater.zalloc = zlib_fast_alloc;
- data->deflater.zfree = zlib_fast_free;
- data->deflater.opaque = data;
- zerr = deflateInit2(&data->deflater, Z_BEST_COMPRESSION, Z_DEFLATED, -MAX_WBITS, 8, Z_DEFAULT_STRATEGY);
+ offset = m_metaoffset_offset;
+ m_metaoffset = nextoffset;
}
- /* convert errors */
- if (zerr == Z_MEM_ERROR)
- err = CHDERR_OUT_OF_MEMORY;
- else if (zerr != Z_OK)
- err = CHDERR_CODEC_ERROR;
- else
- err = CHDERR_NONE;
-
- /* handle an error */
- if (err == CHDERR_NONE)
- chd->codecdata = data;
+ // otherwise, update the link in the previous header
else
- free(data);
-
- return err;
+ offset = prevoffset + 8;
+
+ // create a big-endian version
+ UINT8 rawbuf[sizeof(UINT64)];
+ be_write(rawbuf, nextoffset, 8);
+
+ // write to the header and update our local copy
+ file_write(offset, rawbuf, sizeof(rawbuf));
}
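
// In other words, relinking an entry only ever patches one 8-byte big-endian
// pointer on disk; a sketch of where that pointer lives (the header field
// location is whatever m_metaoffset_offset holds for the open version):
#include <cstdint>

static uint64_t next_link_location(uint64_t prevoffset, uint64_t metaoffset_field_in_header)
{
	// first entry: patch the header's metadata-offset field;
	// otherwise: patch the 'next' field 8 bytes into the previous entry's header
	return (prevoffset == 0) ? metaoffset_field_in_header : prevoffset + 8;
}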
-/*-------------------------------------------------
- zlib_codec_free - free data for the ZLIB
- codec
--------------------------------------------------*/
+//-------------------------------------------------
+// metadata_update_hash - compute the SHA1
+// hash of all metadata that requests it
+//-------------------------------------------------
-static void zlib_codec_free(chd_file *chd)
+void chd_file::metadata_update_hash()
{
- zlib_codec_data *data = (zlib_codec_data *)chd->codecdata;
-
- /* deinit the streams */
- if (data != NULL)
- {
- int i;
-
- inflateEnd(&data->inflater);
- deflateEnd(&data->deflater);
+ // only works for V4 and above, and only for compressed CHDs
+ if (m_version < 4 || !compressed())
+ return;
- /* free our fast memory */
- for (i = 0; i < MAX_ZLIB_ALLOCS; i++)
- if (data->allocptr[i])
- free(data->allocptr[i]);
- free(data);
- }
+ // compute the new overall hash
+ sha1_t fullsha1 = compute_overall_sha1(raw_sha1());
+
+ // create a big-endian version
+ UINT8 rawbuf[sizeof(sha1_t)];
+ be_write_sha1(&rawbuf[0], fullsha1);
+
+ // write to the header
+ file_write(m_sha1_offset, rawbuf, sizeof(rawbuf));
}
-/*-------------------------------------------------
- zlib_codec_compress - compress data using the
- ZLIB codec
--------------------------------------------------*/
+//-------------------------------------------------
+// metadata_hash_compare - compare two hash
+// entries
+//-------------------------------------------------
-static chd_error zlib_codec_compress(chd_file *chd, const void *src, UINT32 *length)
+int CLIB_DECL chd_file::metadata_hash_compare(const void *elem1, const void *elem2)
{
- zlib_codec_data *data = (zlib_codec_data *)chd->codecdata;
- int zerr;
-
- /* reset the decompressor */
- data->deflater.next_in = (Bytef *)src;
- data->deflater.avail_in = chd->header.hunkbytes;
- data->deflater.total_in = 0;
- data->deflater.next_out = chd->compressed;
- data->deflater.avail_out = chd->header.hunkbytes;
- data->deflater.total_out = 0;
- zerr = deflateReset(&data->deflater);
- if (zerr != Z_OK)
- return CHDERR_COMPRESSION_ERROR;
-
- /* do it */
- zerr = deflate(&data->deflater, Z_FINISH);
-
- /* if we ended up with more data than we started with, return an error */
- if (zerr != Z_STREAM_END || data->deflater.total_out >= chd->header.hunkbytes)
- return CHDERR_COMPRESSION_ERROR;
-
- /* otherwise, fill in the length and return success */
- *length = data->deflater.total_out;
- return CHDERR_NONE;
+ return memcmp(elem1, elem2, sizeof(metadata_hash));
}
-/*-------------------------------------------------
- zlib_codec_decompress - decompress data using
- the ZLIB codec
--------------------------------------------------*/
-
-static chd_error zlib_codec_decompress(chd_file *chd, UINT32 srclength, void *dest)
-{
- zlib_codec_data *data = (zlib_codec_data *)chd->codecdata;
- int zerr;
-
- /* reset the decompressor */
- data->inflater.next_in = chd->compressed;
- data->inflater.avail_in = srclength;
- data->inflater.total_in = 0;
- data->inflater.next_out = (Bytef *)dest;
- data->inflater.avail_out = chd->header.hunkbytes;
- data->inflater.total_out = 0;
- zerr = inflateReset(&data->inflater);
- if (zerr != Z_OK)
- return CHDERR_DECOMPRESSION_ERROR;
-
- /* do it */
- zerr = inflate(&data->inflater, Z_FINISH);
- if (data->inflater.total_out != chd->header.hunkbytes)
- return CHDERR_DECOMPRESSION_ERROR;
-
- return CHDERR_NONE;
-}
+//**************************************************************************
+// CHD COMPRESSOR
+//**************************************************************************
-/*-------------------------------------------------
- zlib_fast_alloc - fast malloc for ZLIB, which
- allocates and frees memory frequently
--------------------------------------------------*/
+//-------------------------------------------------
+// chd_file_compressor - constructor
+//-------------------------------------------------
-static voidpf zlib_fast_alloc(voidpf opaque, uInt items, uInt size)
+chd_file_compressor::chd_file_compressor()
+ : m_walking_parent(false),
+ m_total_in(0),
+ m_total_out(0),
+ m_read_queue(NULL),
+ m_read_queue_offset(0),
+ m_read_done_offset(0),
+ m_read_error(false),
+ m_work_queue(NULL),
+ m_write_hunk(0)
{
- zlib_codec_data *data = (zlib_codec_data *)opaque;
- UINT32 *ptr;
- int i;
+ // zap arrays
+ memset(m_work_item, 0, sizeof(m_work_item));
+ memset(m_codecs, 0, sizeof(m_codecs));
- /* compute the size, rounding to the nearest 1k */
- size = (size * items + 0x3ff) & ~0x3ff;
-
- /* reuse a hunk if we can */
- for (i = 0; i < MAX_ZLIB_ALLOCS; i++)
- {
- ptr = data->allocptr[i];
- if (ptr && size == *ptr)
- {
- /* set the low bit of the size so we don't match next time */
- *ptr |= 1;
- return ptr + 1;
- }
- }
-
- /* alloc a new one */
- ptr = (UINT32 *)malloc(size + sizeof(UINT32));
- if (!ptr)
- return NULL;
-
- /* put it into the list */
- for (i = 0; i < MAX_ZLIB_ALLOCS; i++)
- if (!data->allocptr[i])
- {
- data->allocptr[i] = ptr;
- break;
- }
-
- /* set the low bit of the size so we don't match next time */
- *ptr = size | 1;
- return ptr + 1;
+ // allocate work queues
+ m_read_queue = osd_work_queue_alloc(WORK_QUEUE_FLAG_IO);
+ m_work_queue = osd_work_queue_alloc(WORK_QUEUE_FLAG_MULTI);
}
-/*-------------------------------------------------
- zlib_fast_free - fast free for ZLIB, which
- allocates and frees memory frequently
--------------------------------------------------*/
+//-------------------------------------------------
+// ~chd_file_compressor - destructor
+//-------------------------------------------------
-static void zlib_fast_free(voidpf opaque, voidpf address)
+chd_file_compressor::~chd_file_compressor()
{
- zlib_codec_data *data = (zlib_codec_data *)opaque;
- UINT32 *ptr = (UINT32 *)address - 1;
- int i;
-
- /* find the hunk */
- for (i = 0; i < MAX_ZLIB_ALLOCS; i++)
- if (ptr == data->allocptr[i])
- {
- /* clear the low bit of the size to allow matches */
- *ptr &= ~1;
- return;
- }
+ // free the work queues
+ osd_work_queue_free(m_read_queue);
+ osd_work_queue_free(m_work_queue);
+
+ // delete allocated arrays
+ for (int codecnum = 0; codecnum < ARRAY_LENGTH(m_codecs); codecnum++)
+ delete m_codecs[codecnum];
}
-/*-------------------------------------------------
- flac_codec_compress - compress data using the
- FLAC codec
--------------------------------------------------*/
-
-
-const int INITIAL_BUFFER_SIZE = 0x20000;
-const int INITIAL_GROW_SIZE = 0x20000;
+//-------------------------------------------------
+// compress_begin - initiate compression
+//-------------------------------------------------
-struct flac_encoder_data
+void chd_file_compressor::compress_begin()
{
- FLAC__int32* pcm;
- FLAC__byte* tempbuffer;
- UINT8* flac_outputbuffer;
- size_t flac_outputbuffer_size;
- FLAC__uint64 flac_output_buffer_curpos;
- FLAC__uint64 flac_output_buffer_total;
-};
-
-static FLAC__StreamEncoderWriteStatus flac_encoder_write_callback(const FLAC__StreamEncoder *encoder, const FLAC__byte buffer[], size_t bytes, unsigned samples, unsigned current_frame, void *client_data)
-{
- if (((flac_encoder_data*)client_data)->flac_output_buffer_curpos + bytes >= ((flac_encoder_data*)client_data)->flac_outputbuffer_size)
+ // reset state
+ m_walking_parent = (m_parent != NULL);
+ m_total_in = 0;
+ m_total_out = 0;
+ m_compsha1.reset();
+
+ // reset our maps
+ m_parent_map.reset();
+ m_current_map.reset();
+
+ // reset read state
+ m_read_queue_offset = 0;
+ m_read_done_offset = 0;
+ m_read_error = false;
+
+ // reset work item state
+ m_work_buffer.resize(hunk_bytes() * (WORK_BUFFER_HUNKS + 1));
+ m_compressed_buffer.resize(hunk_bytes() * WORK_BUFFER_HUNKS);
+ for (int itemnum = 0; itemnum < WORK_BUFFER_HUNKS; itemnum++)
{
- ((flac_encoder_data*)client_data)->flac_outputbuffer = (UINT8*)realloc(((flac_encoder_data*)client_data)->flac_outputbuffer, ((flac_encoder_data*)client_data)->flac_outputbuffer_size+INITIAL_GROW_SIZE);
- ((flac_encoder_data*)client_data)->flac_outputbuffer_size = ((flac_encoder_data*)client_data)->flac_outputbuffer_size+INITIAL_GROW_SIZE;
+ work_item &item = m_work_item[itemnum];
+ item.m_compressor = this;
+ item.m_data = m_work_buffer + hunk_bytes() * itemnum;
+ item.m_compressed = m_compressed_buffer + hunk_bytes() * itemnum;
+ item.m_hash.resize(hunk_bytes() / unit_bytes());
}
-
- memcpy(((flac_encoder_data*)client_data)->flac_outputbuffer+((flac_encoder_data*)client_data)->flac_output_buffer_curpos, buffer, bytes);
- if (((flac_encoder_data*)client_data)->flac_output_buffer_curpos+bytes > ((flac_encoder_data*)client_data)->flac_output_buffer_total)
- ((flac_encoder_data*)client_data)->flac_output_buffer_total=((flac_encoder_data*)client_data)->flac_output_buffer_curpos+bytes;
-
- ((flac_encoder_data*)client_data)->flac_output_buffer_curpos+=bytes;
-
- return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
-}
-
-#define FLAC_HEADER_SIZE (86)
-
-static chd_error flac_codec_compress(chd_file *chd, const void *src, UINT32 *length, int swap)
-{
- int FLAC_ENCODER_READSIZE = (CD_MAX_SECTOR_DATA/4);
- int FLAC_ENCODER_FULLSIZE = ((CD_MAX_SECTOR_DATA+CD_MAX_SUBCODE_DATA)/4);
-
- flac_encoder_data flac_encoder_client;
- flac_encoder_data* flac_encoder_client_ptr = &flac_encoder_client;
- flac_encoder_client_ptr->flac_outputbuffer = (UINT8*)malloc(INITIAL_BUFFER_SIZE);
- flac_encoder_client_ptr->flac_outputbuffer_size = INITIAL_BUFFER_SIZE;
- flac_encoder_client_ptr->flac_output_buffer_curpos = 0;
- flac_encoder_client_ptr->flac_output_buffer_total = 0;
- flac_encoder_client_ptr->pcm=(FLAC__int32*)malloc(FLAC_ENCODER_READSIZE * 2 * 4);
- flac_encoder_client_ptr->tempbuffer=(FLAC__byte*)malloc(FLAC_ENCODER_READSIZE * 2 * 2);
-
-
-
- FLAC__StreamEncoder *encoder = 0;
- FLAC__bool ok = true;
-
- if((encoder = FLAC__stream_encoder_new()) == NULL)
+
+ // initialize codec instances
+ for (int instance = 0; instance < ARRAY_LENGTH(m_codecs); instance++)
{
- printf("ERROR: allocating encoder\n");
- return CHDERR_COMPRESSION_ERROR;
+ delete m_codecs[instance];
+ m_codecs[instance] = new chd_compressor_group(*this, m_compression);
}
+
+ // reset write state
+ m_write_hunk = 0;
+}
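
// The compressor pipelines work through a fixed ring of items that all share
// one buffer; a standalone sketch of the indexing (RING_HUNKS is only a
// stand-in for WORK_BUFFER_HUNKS, whose real value is defined elsewhere):
#include <cstdint>

static const uint32_t RING_HUNKS = 8;   // stand-in for WORK_BUFFER_HUNKS

static uint32_t ring_slot(uint32_t hunknum)
{
	return hunknum % RING_HUNKS;        // which work item serves this hunk
}

static uint64_t ring_buffer_offset(uint64_t file_offset, uint32_t hunkbytes)
{
	return file_offset % (uint64_t(RING_HUNKS) * hunkbytes);   // where its data lands in the shared buffer
}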
- ok &= FLAC__stream_encoder_set_verify(encoder, false); // we trust libFLAC ;-)
- ok &= FLAC__stream_encoder_set_compression_level(encoder, 8);
- ok &= FLAC__stream_encoder_set_channels(encoder, 2);
- ok &= FLAC__stream_encoder_set_bits_per_sample(encoder, 16);
- ok &= FLAC__stream_encoder_set_sample_rate(encoder, 44100);
- ok &= FLAC__stream_encoder_set_total_samples_estimate(encoder, 0);
- ok &= FLAC__stream_encoder_set_streamable_subset(encoder, false);
- ok &= FLAC__stream_encoder_set_blocksize(encoder, ((CD_MAX_SECTOR_DATA)*CD_FRAMES_PER_HUNK)/4 ); // /4 because this is in SAMPLES, not bytes
- if (!ok)
- {
- printf("error setting up stream encoder\n");
- return CHDERR_COMPRESSION_ERROR;
- }
+//-------------------------------------------------
+// compress_continue - continue compression
+//-------------------------------------------------
- if (FLAC__stream_encoder_init_stream(encoder, flac_encoder_write_callback, NULL, NULL, NULL, flac_encoder_client_ptr) != FLAC__STREAM_ENCODER_INIT_STATUS_OK)
+chd_error chd_file_compressor::compress_continue(double &progress, double &ratio)
+{
+ // if done reading, queue some more
+ while (m_read_queue_offset < m_logicalbytes && osd_work_queue_items(m_read_queue) < 2)
{
- printf("error initializing encoder\n");
- return CHDERR_COMPRESSION_ERROR;
- }
+ // if we got an error, return an error
+ if (m_read_error)
+ return CHDERR_READ_ERROR;
- size_t left = (size_t)chd->header.hunkbytes;
- UINT8* srcdata = (UINT8*)src;
+ // see if we have enough free work items to read the next half of a buffer
+ UINT32 startitem = m_read_queue_offset / hunk_bytes();
+ UINT32 enditem = startitem + WORK_BUFFER_HUNKS / 2;
+ UINT32 curitem;
+ for (curitem = startitem; curitem < enditem; curitem++)
+ if (m_work_item[curitem % WORK_BUFFER_HUNKS].m_status != WS_READY)
+ break;
- while(ok && left)
- {
- memcpy(flac_encoder_client_ptr->tempbuffer, srcdata, FLAC_ENCODER_READSIZE*4);
- srcdata += FLAC_ENCODER_FULLSIZE*4;
+ // if it's not all clear, defer
+ if (curitem != enditem)
+ break;
+
+ // if we're walking the parent, we want one more item to have cleared so we
+ // can read an extra hunk there
+ if (m_walking_parent && m_work_item[curitem % WORK_BUFFER_HUNKS].m_status != WS_READY)
+ break;
+
+ // queue the next read
+ for (curitem = startitem; curitem < enditem; curitem++)
+ m_work_item[curitem % WORK_BUFFER_HUNKS].m_status = WS_READING;
+ osd_work_item_queue(m_read_queue, async_read_static, this, WORK_ITEM_FLAG_AUTO_RELEASE);
+ m_read_queue_offset += WORK_BUFFER_HUNKS * hunk_bytes() / 2;
+ }
+
+ // flush out any finished items
+ while (m_work_item[m_write_hunk % WORK_BUFFER_HUNKS].m_status == WS_COMPLETE)
+ {
+ work_item &item = m_work_item[m_write_hunk % WORK_BUFFER_HUNKS];
+
+ // free any OSD work item
+ if (item.m_osd != NULL)
+ osd_work_item_release(item.m_osd);
+ item.m_osd = NULL;
+
+ // for parent walking, just add to the hashmap
+ if (m_walking_parent)
+ {
+ UINT32 uph = hunk_bytes() / unit_bytes();
+ UINT32 units = uph;
+ if (item.m_hunknum == hunk_count() - 1 || !compressed())
+ units = 1;
+ for (UINT32 unit = 0; unit < units; unit++)
+ if (m_parent_map.find(item.m_hash[unit].m_crc16, item.m_hash[unit].m_sha1) == hashmap::NOT_FOUND)
+ m_parent_map.add(item.m_hunknum * uph + unit, item.m_hash[unit].m_crc16, item.m_hash[unit].m_sha1);
+ }
+
+ // if we're uncompressed, use regular writes
+ else if (!compressed())
{
- size_t i;
- for(i = 0; i < FLAC_ENCODER_READSIZE*2; i++)
+ bool skip = true;
+
+ // see if it's all 0
+ for (UINT32 offs = 0; offs < m_hunkbytes && skip; offs++)
+ if (item.m_data[offs] != 0)
+ skip = false;
+
+ // see if it's in the parent map
+ if (!skip && m_parent != NULL && m_parent_map.find(item.m_hash[0].m_crc16, item.m_hash[0].m_sha1) != hashmap::NOT_FOUND)
+ skip = true;
+
+ // write the block
+ if (!skip)
{
- if (!swap) flac_encoder_client_ptr->pcm[i] = (FLAC__int32)(((FLAC__int16)(FLAC__int8)flac_encoder_client_ptr->tempbuffer[2*i] << 8) | (FLAC__int16)flac_encoder_client_ptr->tempbuffer[2*i+1]);
- else flac_encoder_client_ptr->pcm[i] = (FLAC__int32)(((FLAC__int16)(FLAC__int8)flac_encoder_client_ptr->tempbuffer[2*i+1] << 8) | (FLAC__int16)flac_encoder_client_ptr->tempbuffer[2*i]);
+ chd_error err = write_hunk(item.m_hunknum, item.m_data);
+ if (err != CHDERR_NONE)
+ return err;
+ m_total_out += m_hunkbytes;
+ }
+ }
+
+ // for compressing, process the result
+ else do
+ {
+ // first see if the hunk is in the parent or self maps
+ UINT64 selfhunk = m_current_map.find(item.m_hash[0].m_crc16, item.m_hash[0].m_sha1);
+ if (selfhunk != hashmap::NOT_FOUND)
+ {
+ hunk_copy_from_self(item.m_hunknum, selfhunk);
+ break;
+ }
+
+ // if not, see if it's in the parent map
+ if (m_parent != NULL)
+ {
+ UINT64 parentunit = m_parent_map.find(item.m_hash[0].m_crc16, item.m_hash[0].m_sha1);
+ if (parentunit != hashmap::NOT_FOUND)
+ {
+ hunk_copy_from_parent(item.m_hunknum, parentunit);
+ break;
+ }
+ }
+
+ // otherwise, append it compressed and add to the self map
+ hunk_write_compressed(item.m_hunknum, item.m_compression, item.m_compressed, item.m_complen, item.m_hash[0].m_crc16);
+ m_total_out += item.m_complen;
+ m_current_map.add(item.m_hunknum, item.m_hash[0].m_crc16, item.m_hash[0].m_sha1);
+ } while (0);
+
+ // reset the item and advance
+ item.m_status = WS_READY;
+ m_write_hunk++;
+
+ // if we hit the end, finalize
+ if (m_write_hunk == m_hunkcount)
+ {
+ // if this is just walking the parent, reset and get ready for compression
+ if (m_walking_parent)
+ {
+ m_walking_parent = false;
+ m_read_queue_offset = m_read_done_offset = 0;
+ m_write_hunk = 0;
+ for (int itemnum = 0; itemnum < WORK_BUFFER_HUNKS; itemnum++)
+ m_work_item[itemnum].m_status = WS_READY;
+ }
+
+ // wait for all reads to finish and if we're compressed, write the final SHA1 and map
+ else
+ {
+ osd_work_queue_wait(m_read_queue, 30 * osd_ticks_per_second());
+ if (!compressed())
+ return CHDERR_NONE;
+ set_raw_sha1(m_compsha1.finish());
+ return compress_v5_map();
}
-
- ok = FLAC__stream_encoder_process_interleaved(encoder, flac_encoder_client_ptr->pcm, FLAC_ENCODER_READSIZE);
}
-
-
- left -= (FLAC_ENCODER_FULLSIZE*4);
- }
-
- if (!ok)
- {
- printf("error encoding!\n");
- return CHDERR_COMPRESSION_ERROR;
- }
-
- ok &= FLAC__stream_encoder_finish(encoder);
-
- if (!ok)
- {
- printf("error finishing!\n");
- return CHDERR_COMPRESSION_ERROR;
- }
-
- int totalout = flac_encoder_client_ptr->flac_output_buffer_total-FLAC_HEADER_SIZE;
-
-
-
- FLAC__stream_encoder_delete(encoder);
-
- if (totalout >= chd->header.hunkbytes)
- {
- free(flac_encoder_client_ptr->flac_outputbuffer);
- return CHDERR_COMPRESSION_ERROR;
}
+
+ // update progress and ratio
+ if (m_walking_parent)
+ progress = double(m_read_done_offset) / double(logical_bytes());
+ else
+ progress = double(m_write_hunk) / double(m_hunkcount);
+ ratio = (m_total_in == 0) ? 1.0 : double(m_total_out) / double(m_total_in);
- *length = totalout;
- memcpy(chd->compressed, flac_encoder_client_ptr->flac_outputbuffer+FLAC_HEADER_SIZE, flac_encoder_client_ptr->flac_output_buffer_total-FLAC_HEADER_SIZE);
-
- free(flac_encoder_client_ptr->flac_outputbuffer);
- free(flac_encoder_client_ptr->pcm);
- free(flac_encoder_client_ptr->tempbuffer);
- return CHDERR_NONE;
-}
+ // if we're waiting for work, wait
+ while (m_work_item[m_write_hunk % WORK_BUFFER_HUNKS].m_status != WS_COMPLETE && m_work_item[m_write_hunk % WORK_BUFFER_HUNKS].m_osd != NULL)
+ osd_work_item_wait(m_work_item[m_write_hunk % WORK_BUFFER_HUNKS].m_osd, osd_ticks_per_second());
-static chd_error flac_codec_compress_normal(chd_file *chd, const void *src, UINT32 *length)
-{
- return flac_codec_compress(chd, src, length, 0);
+ return m_walking_parent ? CHDERR_WALKING_PARENT : CHDERR_COMPRESSING;
}
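
// Each work item cycles through a small state machine: READY (slot free) ->
// READING (claimed by the async reader) -> QUEUED (handed to the work queue
// for hashing/compression) -> COMPLETE (result waiting) -> READY again once
// compress_continue() has consumed it in hunk order. Sketched with stand-in
// names mirroring the WS_* constants used above:
enum work_status_sketch
{
	SKETCH_WS_READY,     // free; the reader may claim this slot
	SKETCH_WS_READING,   // source data being read into the shared buffer
	SKETCH_WS_QUEUED,    // queued for hashing/compression
	SKETCH_WS_COMPLETE   // ready for the in-order writer
};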
-// this data is always the same for our blocks, so don't store it in the file.
-static UINT8 flacHeader[FLAC_HEADER_SIZE] = {
- 0x66, 0x4C, 0x61, 0x43, 0x00, 0x00, 0x00, 0x22, 0x12, 0x60, 0x12, 0x60,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0A, 0xC4, 0x42, 0xF0, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x84, 0x00, 0x00, 0x28, 0x20, 0x00,
- 0x00, 0x00, 0x72, 0x65, 0x66, 0x65, 0x72, 0x65, 0x6E, 0x63, 0x65, 0x20,
- 0x6C, 0x69, 0x62, 0x46, 0x4C, 0x41, 0x43, 0x20, 0x31, 0x2E, 0x32, 0x2E,
- 0x31, 0x20, 0x32, 0x30, 0x30, 0x37, 0x30, 0x39, 0x31, 0x37, 0x00, 0x00,
- 0x00, 0x00,
-};
-/*-------------------------------------------------
- flac_codec_decompress - decompress data using
- the FLAC codec
--------------------------------------------------*/
-
-
-struct flac_decoder_data
-{
- int readoffset;
- size_t readbuffersize;
- UINT8* readbuffer;
- int writeoffset;
- INT16 tempbuffer[(CD_MAX_SECTOR_DATA * CD_FRAMES_PER_HUNK)/2];
- UINT64 amount_to_decode;
-};
+//-------------------------------------------------
+// async_walk_parent - handle asynchronous parent
+// walking operations
+//-------------------------------------------------
-FLAC__StreamDecoderWriteStatus flac_decoder_write_callback(const FLAC__StreamDecoder *decoder, const FLAC__Frame *frame, const FLAC__int32 *const buffer[], void *client_data)
+void *chd_file_compressor::async_walk_parent_static(void *param, int threadid)
{
-
- int blocksize = frame->header.blocksize;
- int i = 0;
- while (blocksize && ((flac_decoder_data*)client_data)->amount_to_decode)
- {
- ((flac_decoder_data*)client_data)->tempbuffer[(((flac_decoder_data*)client_data)->writeoffset*2)+0] = buffer[0][i];
- ((flac_decoder_data*)client_data)->tempbuffer[(((flac_decoder_data*)client_data)->writeoffset*2)+1] = buffer[1][i];
-
- blocksize--;
- i++;
- ((flac_decoder_data*)client_data)->amount_to_decode-=4;
- ((flac_decoder_data*)client_data)->writeoffset++;
- }
-
- return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
+ work_item *item = reinterpret_cast<work_item *>(param);
+ item->m_compressor->async_walk_parent(*item);
+ return NULL;
}
-FLAC__StreamDecoderReadStatus flac_decoder_read_callback(const FLAC__StreamDecoder *decoder, FLAC__byte buffer[], size_t *bytes, void *client_data)
+void chd_file_compressor::async_walk_parent(work_item &item)
{
- size_t readsize = *bytes;
- size_t readbuffersize = ((flac_decoder_data*)client_data)->readbuffersize;
-
- if ((((flac_decoder_data*)client_data)->readoffset + readsize) > readbuffersize)
+ // compute CRC-16 and SHA-1 hashes for each unit, unless we're the last one or we're uncompressed
+ UINT32 units = hunk_bytes() / unit_bytes();
+ if (item.m_hunknum == m_hunkcount - 1 || !compressed())
+ units = 1;
+ for (UINT32 unit = 0; unit < units; unit++)
{
- readsize = ((flac_decoder_data*)client_data)->readbuffersize-((flac_decoder_data*)client_data)->readoffset;
+ item.m_hash[unit].m_crc16 = crc16_creator::simple(item.m_data + unit * unit_bytes(), hunk_bytes());
+ item.m_hash[unit].m_sha1 = sha1_creator::simple(item.m_data + unit * unit_bytes(), hunk_bytes());
}
-
- if (readsize==0) return FLAC__STREAM_DECODER_READ_STATUS_END_OF_STREAM;
-
- memcpy(buffer, ((flac_decoder_data*)client_data)->readbuffer+((flac_decoder_data*)client_data)->readoffset, readsize);
-
- ((flac_decoder_data*)client_data)->readoffset += readsize;
-
- return FLAC__STREAM_DECODER_READ_STATUS_CONTINUE;
+ item.m_status = WS_COMPLETE;
}
+//-------------------------------------------------
+// async_compress_hunk - handle asynchronous
+// hunk compression
+//-------------------------------------------------
-void flac_decoder_metadata_callback(const FLAC__StreamDecoder *decoder, const FLAC__StreamMetadata *metadata, void *client_data)
+void *chd_file_compressor::async_compress_hunk_static(void *param, int threadid)
{
-
+ work_item *item = reinterpret_cast<work_item *>(param);
+ item->m_compressor->async_compress_hunk(*item, threadid);
+ return NULL;
}
-void flac_decoder_error_callback(const FLAC__StreamDecoder *decoder, FLAC__StreamDecoderErrorStatus status, void *client_data)
+void chd_file_compressor::async_compress_hunk(work_item &item, int threadid)
{
+ // use our thread's codec
+ assert(threadid < ARRAY_LENGTH(m_codecs));
+ item.m_codecs = m_codecs[threadid];
-}
+ // compute CRC-16 and SHA-1 hashes
+ item.m_hash[0].m_crc16 = crc16_creator::simple(item.m_data, hunk_bytes());
+ item.m_hash[0].m_sha1 = sha1_creator::simple(item.m_data, hunk_bytes());
-static chd_error flac_codec_decompress(chd_file *chd, UINT32 srclength, void *dest)
-{
- FLAC__StreamDecoder *decoder = FLAC__stream_decoder_new();
- flac_decoder_data flac_decoder_client;
- flac_decoder_data* flac_decoder_client_ptr = &flac_decoder_client;
+ // find the best compression scheme, unless we already have a self or parent match
+ // (note we may miss a self match from blocks not yet added, but this just results in extra work)
+ if (m_current_map.find(item.m_hash[0].m_crc16, item.m_hash[0].m_sha1) == hashmap::NOT_FOUND &&
+ m_parent_map.find(item.m_hash[0].m_crc16, item.m_hash[0].m_sha1) == hashmap::NOT_FOUND)
+ item.m_compression = item.m_codecs->find_best_compressor(item.m_data, item.m_compressed, item.m_complen);
- flac_decoder_client_ptr->readoffset = 0;
-
- flac_decoder_client_ptr->readbuffersize = srclength+FLAC_HEADER_SIZE;
- flac_decoder_client_ptr->readbuffer = (UINT8*)malloc(flac_decoder_client_ptr->readbuffersize);
- flac_decoder_client_ptr->amount_to_decode = (CD_MAX_SECTOR_DATA*CD_FRAMES_PER_HUNK);
- int frames_to_decode = flac_decoder_client_ptr->amount_to_decode / CD_MAX_SECTOR_DATA;
-
- flac_decoder_client_ptr->writeoffset = 0;
-
- memcpy(flac_decoder_client_ptr->readbuffer, flacHeader, FLAC_HEADER_SIZE);
- memcpy(flac_decoder_client_ptr->readbuffer+FLAC_HEADER_SIZE, chd->compressed, srclength);
+ // mark us complete
+ item.m_status = WS_COMPLETE;
+}
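
// The decision made with those hashes (here and in compress_continue) is:
// CRC-16 is a cheap prefilter into the hash maps, SHA-1 confirms a true match,
// and only unmatched hunks are handed to the codecs. A standalone sketch with
// a hypothetical map type whose find() mirrors hashmap::find above:
#include <cstdint>

enum dedupe_action { DEDUPE_COPY_SELF, DEDUPE_COPY_PARENT, DEDUPE_COMPRESS };

template <typename MapType>
static dedupe_action choose_dedupe_action(const MapType &self_map, const MapType &parent_map,
		uint16_t crc16, const uint8_t sha1[20], bool has_parent)
{
	if (self_map.find(crc16, sha1) != MapType::NOT_FOUND)
		return DEDUPE_COPY_SELF;                 // reuse an earlier hunk in this CHD
	if (has_parent && parent_map.find(crc16, sha1) != MapType::NOT_FOUND)
		return DEDUPE_COPY_PARENT;               // reuse a unit from the parent CHD
	return DEDUPE_COMPRESS;                      // genuinely new data; compress it
}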
- if (!decoder)
- {
- printf("unable to create FLAC decoder\n");
- return CHDERR_READ_ERROR;
- }
- if(FLAC__stream_decoder_init_stream(
- decoder,
- flac_decoder_read_callback,
- NULL,
- NULL,
- NULL,
- NULL,
- flac_decoder_write_callback,
- flac_decoder_metadata_callback,
- flac_decoder_error_callback,
- flac_decoder_client_ptr ) != FLAC__STREAM_DECODER_INIT_STATUS_OK)
- {
- printf("unable to init FLAC decoder\n");
- return CHDERR_READ_ERROR;
- }
+//-------------------------------------------------
+// async_read - handle asynchronous source file
+// reading
+//-------------------------------------------------
- if (FLAC__stream_decoder_process_until_end_of_metadata(decoder) != FLAC__STREAM_DECODER_READ_STATUS_END_OF_STREAM)
- {
- printf("Fail FLAC__stream_decoder_process_until_end_of_metadata\n");
- return CHDERR_READ_ERROR;
- }
+void *chd_file_compressor::async_read_static(void *param, int threadid)
+{
+ reinterpret_cast<chd_file_compressor *>(param)->async_read();
+ return NULL;
+}
- /* only ever a single frame */
- if (FLAC__stream_decoder_process_single(decoder) != FLAC__STREAM_DECODER_READ_STATUS_END_OF_STREAM)
- {
- printf("Fail FLAC__stream_decoder_process_until_end_of_metadata\n");
- return CHDERR_READ_ERROR;
- }
+void chd_file_compressor::async_read()
+{
+ // if in the error or complete state, stop
+ if (m_read_error)
+ return;
+ // determine parameters for the read
+ UINT32 work_buffer_bytes = WORK_BUFFER_HUNKS * hunk_bytes();
+ UINT32 numbytes = work_buffer_bytes / 2;
+ if (m_read_done_offset + numbytes > logical_bytes())
+ numbytes = logical_bytes() - m_read_done_offset;
- int srcoffset = 0;
- UINT8* dest2 = (UINT8*)dest;
- for (int frame = 0; frame<frames_to_decode;frame++)
+ // catch any exceptions coming out of here
+ try
{
- int destoffset = frame * (CD_FRAME_SIZE);
+ // do the read
+ UINT8 *dest = m_work_buffer + (m_read_done_offset % work_buffer_bytes);
+ assert(dest == m_work_buffer || dest == m_work_buffer + work_buffer_bytes/2);
+ UINT64 end_offset = m_read_done_offset + numbytes;
-
- int index;
-
- for (index = 0; index < 2352; index += 2 )
+ // if walking the parent, read in hunks from the parent CHD
+ if (m_walking_parent)
{
- dest2[ destoffset + index +1 ] = flac_decoder_client_ptr->tempbuffer[srcoffset] & 0xff;
- dest2[ destoffset + index ] = flac_decoder_client_ptr->tempbuffer[srcoffset] >> 8;
-
- srcoffset++;
+ UINT8 *curdest = dest;
+ for (UINT64 curoffs = m_read_done_offset; curoffs < end_offset + 1; curoffs += hunk_bytes())
+ {
+ m_parent->read_hunk(curoffs / hunk_bytes(), curdest);
+ curdest += hunk_bytes();
+ }
+ }
+
+ // otherwise, call the virtual function
+ else
+ read_data(dest, m_read_done_offset, numbytes);
+
+ // spawn off work for each hunk
+ for (UINT64 curoffs = m_read_done_offset; curoffs < end_offset; curoffs += hunk_bytes())
+ {
+ UINT32 hunknum = curoffs / hunk_bytes();
+ work_item &item = m_work_item[hunknum % WORK_BUFFER_HUNKS];
+ assert(item.m_status == WS_READING);
+ item.m_status = WS_QUEUED;
+ item.m_hunknum = hunknum;
+ item.m_osd = osd_work_item_queue(m_work_queue, m_walking_parent ? async_walk_parent_static : async_compress_hunk_static, &item, 0);
}
- }
-
- if (FLAC__stream_decoder_finish (decoder) != true)
- {
- printf("unable to finish FLAC decoder\n");
- return CHDERR_READ_ERROR;
+ // continue the running SHA-1
+ if (!m_walking_parent)
+ {
+ if (compressed())
+ m_compsha1.append(dest, numbytes);
+ m_total_in += numbytes;
+ }
+
+ // advance the read pointer
+ m_read_done_offset += numbytes;
}
-
- FLAC__stream_decoder_delete(decoder);
- free(flac_decoder_client_ptr->readbuffer);
-
- return CHDERR_NONE;
-}
-
-
-
-
-/***************************************************************************
- AV COMPRESSION CODEC
-***************************************************************************/
-
-/*-------------------------------------------------
- av_raw_data_size - compute the raw data size
--------------------------------------------------*/
-
-INLINE UINT32 av_raw_data_size(const UINT8 *data)
-{
- int size = 0;
-
- /* make sure we have a correct header */
- if (data[0] == 'c' && data[1] == 'h' && data[2] == 'a' && data[3] == 'v')
+ catch (...)
{
- /* add in header size plus metadata length */
- size = 12 + data[4];
-
- /* add in channels * samples */
- size += 2 * data[5] * ((data[6] << 8) + data[7]);
-
- /* add in 2 * width * height */
- size += 2 * ((data[8] << 8) + data[9]) * (((data[10] << 8) + data[11]) & 0x7fff);
+ m_read_error = true;
}
- return size;
}
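For readers following the new reader thread above: the work buffer holds WORK_BUFFER_HUNKS hunks and is filled half a buffer at a time, so the destination pointer always lands at the start of one of the two halves, and each hunk maps to a fixed work_item slot. A minimal standalone sketch of that indexing (the helper names are illustrative, not members of chd_file_compressor):

// Sketch only: the half-buffer / slot indexing used by async_read().
#include <cassert>
#include <cstdint>

static uint8_t *read_destination(uint8_t *work_buffer, uint64_t work_buffer_bytes,
                                 uint64_t read_done_offset)
{
    // offsets advance in half-buffer steps, so dest is always half 0 or half 1
    uint8_t *dest = work_buffer + (read_done_offset % work_buffer_bytes);
    assert(dest == work_buffer || dest == work_buffer + work_buffer_bytes / 2);
    return dest;
}

static uint32_t work_item_slot(uint64_t offset, uint32_t hunk_bytes, uint32_t work_buffer_hunks)
{
    // each hunk owns a fixed slot in the work item array, recycled as hunks are written out
    return uint32_t((offset / hunk_bytes) % work_buffer_hunks);
}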
-/*-------------------------------------------------
- av_codec_init - initialize the A/V codec
--------------------------------------------------*/
-
-static chd_error av_codec_init(chd_file *chd)
-{
- av_codec_data *data;
-
- /* allocate memory for the 2 stream buffers */
- data = new(std::nothrow) av_codec_data;
- if (data == NULL)
- return CHDERR_OUT_OF_MEMORY;
-
- /* clear the buffers */
- chd->codecdata = data;
- /* attempt to do a post-init now; if we're creating a new CHD, this won't work */
- /* but that's ok */
- av_codec_postinit(chd);
- return CHDERR_NONE;
-}
+//**************************************************************************
+// CHD COMPRESSOR HASHMAP
+//**************************************************************************
+//-------------------------------------------------
+// hashmap - constructor
+//-------------------------------------------------
-/*-------------------------------------------------
- av_codec_free - free data for the A/V
- codec
--------------------------------------------------*/
-
-static void av_codec_free(chd_file *chd)
+chd_file_compressor::hashmap::hashmap()
+ : m_block_list(new entry_block(NULL))
{
- av_codec_data *data = (av_codec_data *)chd->codecdata;
-
- /* deinit avcomp */
- if (data != NULL)
- {
- if (data->compstate != NULL)
- avcomp_free(data->compstate);
- delete data;
- }
+ // initialize the map to empty
+ memset(m_map, 0, sizeof(m_map));
}
-/*-------------------------------------------------
- av_codec_compress - compress data using the
- A/V codec
--------------------------------------------------*/
+//-------------------------------------------------
+// ~hashmap - destructor
+//-------------------------------------------------
-static chd_error av_codec_compress(chd_file *chd, const void *src, UINT32 *length)
+chd_file_compressor::hashmap::~hashmap()
{
- av_codec_data *data = (av_codec_data *)chd->codecdata;
- int averr;
- int size;
-
- /* if we haven't yet set up the avcomp code, do it now */
- if (data->compstate == NULL)
- {
- chd_error chderr = av_codec_postinit(chd);
- if (chderr != CHDERR_NONE)
- return chderr;
- }
-
- /* make sure short frames are padded with 0 */
- if (src != NULL)
- {
- size = av_raw_data_size((const UINT8 *)src);
- while (size < chd->header.hunkbytes)
- if (((const UINT8 *)src)[size++] != 0)
- return CHDERR_INVALID_DATA;
- }
-
- /* encode the audio and video */
- averr = avcomp_encode_data(data->compstate, (const UINT8 *)src, chd->compressed, length);
- if (averr != AVCERR_NONE || *length > chd->header.hunkbytes)
- return CHDERR_COMPRESSION_ERROR;
-
- return CHDERR_NONE;
+ reset();
}
-/*-------------------------------------------------
- av_codec_decompress - decompress data using
- the A/V codec
--------------------------------------------------*/
+//-------------------------------------------------
+// reset - reset the state of the map
+//-------------------------------------------------
-static chd_error av_codec_decompress(chd_file *chd, UINT32 srclength, void *dest)
+void chd_file_compressor::hashmap::reset()
{
- av_codec_data *data = (av_codec_data *)chd->codecdata;
- const UINT8 *source;
- avcomp_error averr;
- int size;
-
- /* if we haven't yet set up the avcomp code, do it now */
- if (data->compstate == NULL)
+ // delete all the blocks
+ while (m_block_list->m_next != NULL)
{
- chd_error chderr = av_codec_postinit(chd);
- if (chderr != CHDERR_NONE)
- return chderr;
+ entry_block *block = m_block_list;
+ m_block_list = block->m_next;
+ delete block;
}
-
- /* decode the audio and video */
- source = chd->compressed;
- averr = avcomp_decode_data(data->compstate, source, srclength, (UINT8 *)dest);
- if (averr != AVCERR_NONE)
- return CHDERR_DECOMPRESSION_ERROR;
-
- /* pad short frames with 0 */
- if (dest != NULL)
- {
- size = av_raw_data_size((const UINT8 *)dest);
- while (size < chd->header.hunkbytes)
- ((UINT8 *)dest)[size++] = 0;
- }
-
- return CHDERR_NONE;
+ m_block_list->m_nextalloc = 0;
+
+ // reset the hash
+ memset(m_map, 0, sizeof(m_map));
}
-/*-------------------------------------------------
- av_codec_config - codec-specific configuration
- for the A/V codec
--------------------------------------------------*/
+//-------------------------------------------------
+// find - find an item in the CRC map
+//-------------------------------------------------
-static chd_error av_codec_config(chd_file *chd, int param, void *config)
+UINT64 chd_file_compressor::hashmap::find(crc16_t crc16, sha1_t sha1)
{
- av_codec_data *data = (av_codec_data *)chd->codecdata;
-
- /* if we're getting the compression configuration, apply it now */
- if (param == AV_CODEC_COMPRESS_CONFIG)
- {
- av_codec_compress_config *configsrc = reinterpret_cast<av_codec_compress_config *>(config);
- data->compress.video.wrap(configsrc->video, configsrc->video.cliprect());
- data->compress.channels = configsrc->channels;
- data->compress.samples = configsrc->samples;
- memcpy(data->compress.audio, configsrc->audio, sizeof(data->compress.audio));
- data->compress.metalength = configsrc->metalength;
- data->compress.metadata = configsrc->metadata;
- if (data->compstate != NULL)
- avcomp_config_compress(data->compstate, &data->compress);
- return CHDERR_NONE;
- }
-
- /* if we're getting the decompression configuration, apply it now */
- else if (param == AV_CODEC_DECOMPRESS_CONFIG)
- {
- av_codec_decompress_config *configsrc = reinterpret_cast<av_codec_decompress_config *>(config);
- data->decompress.video.wrap(configsrc->video, configsrc->video.cliprect());
- data->decompress.maxsamples = configsrc->maxsamples;
- data->decompress.actsamples = configsrc->actsamples;
- memcpy(data->decompress.audio, configsrc->audio, sizeof(data->decompress.audio));
- data->decompress.maxmetalength = configsrc->maxmetalength;
- data->decompress.actmetalength = configsrc->actmetalength;
- data->decompress.metadata = configsrc->metadata;
- if (data->compstate != NULL)
- avcomp_config_decompress(data->compstate, &data->decompress);
- return CHDERR_NONE;
- }
-
- /* anything else is invalid */
- return CHDERR_INVALID_PARAMETER;
+ // look up the entry in the map
+ for (entry_t *entry = m_map[crc16]; entry != NULL; entry = entry->m_next)
+ if (entry->m_sha1 == sha1)
+ return entry->m_itemnum;
+ return NOT_FOUND;
}
-/*-------------------------------------------------
- av_codec_postinit - actual initialization of
- avcomp happens here, on the first attempt
- to compress or decompress data
--------------------------------------------------*/
+//-------------------------------------------------
+// add - add an item to the CRC map
+//-------------------------------------------------
-static chd_error av_codec_postinit(chd_file *chd)
+void chd_file_compressor::hashmap::add(UINT64 itemnum, crc16_t crc16, sha1_t sha1)
{
- int fps, fpsfrac, width, height, interlaced, channels, rate;
- UINT32 fps_times_1million, max_samples_per_frame, bytes_per_frame;
- av_codec_data *data = (av_codec_data *)chd->codecdata;
- char metadata[256];
- chd_error err;
-
- /* the code below won't work asynchronously */
- if (chd->workitem != NULL)
- return CHDERR_OPERATION_PENDING;
-
- /* get the metadata */
- err = chd_get_metadata(chd, AV_METADATA_TAG, 0, metadata, sizeof(metadata), NULL, NULL, NULL);
- if (err != CHDERR_NONE)
- return err;
-
- /* extract the info */
- if (sscanf(metadata, AV_METADATA_FORMAT, &fps, &fpsfrac, &width, &height, &interlaced, &channels, &rate) != 7)
- return CHDERR_INVALID_METADATA;
-
- /* compute the bytes per frame */
- fps_times_1million = fps * 1000000 + fpsfrac;
- max_samples_per_frame = ((UINT64)rate * 1000000 + fps_times_1million - 1) / fps_times_1million;
- bytes_per_frame = 12 + channels * max_samples_per_frame * 2 + width * height * 2;
- if (bytes_per_frame > chd->header.hunkbytes)
- return CHDERR_INVALID_METADATA;
-
- /* create the avcomp state */
- data->compstate = avcomp_init(width, height, channels);
-
- /* configure the codec */
- avcomp_config_compress(data->compstate, &data->compress);
- avcomp_config_decompress(data->compstate, &data->decompress);
- return CHDERR_NONE;
+ // add to the appropriate map
+ if (m_block_list->m_nextalloc == ARRAY_LENGTH(m_block_list->m_array))
+ m_block_list = new entry_block(m_block_list);
+ entry_t *entry = &m_block_list->m_array[m_block_list->m_nextalloc++];
+ entry->m_itemnum = itemnum;
+ entry->m_sha1 = sha1;
+ entry->m_next = m_map[crc16];
+ m_map[crc16] = entry;
}
diff --git a/src/lib/util/chd.h b/src/lib/util/chd.h
index 6cdb7463681..743d3774b93 100644
--- a/src/lib/util/chd.h
+++ b/src/lib/util/chd.h
@@ -43,16 +43,20 @@
#define __CHD_H__
#include "osdcore.h"
+#include "coretmpl.h"
+#include "astring.h"
#include "bitmap.h"
#include "corefile.h"
-#include "avcomp.h"
+#include "hashing.h"
+#include "chdcodec.h"
/***************************************************************************
Compressed Hunks of Data header format. All numbers are stored in
- Motorola (big-endian) byte ordering. The header is 76 (V1) or 80 (V2)
- bytes long.
+ Motorola (big-endian) byte ordering.
+
+ =========================================================================
V1 header:
@@ -70,6 +74,21 @@
[ 60] UINT8 parentmd5[16]; // MD5 checksum of parent file
[ 76] (V1 header length)
+ Flags:
+ 0x00000001 - set if this drive has a parent
+ 0x00000002 - set if this drive allows writes
+
+ Compression types:
+ CHDCOMPRESSION_NONE = 0
+ CHDCOMPRESSION_ZLIB = 1
+
+ V1 map format:
+
+ [ 0] UINT64 offset : 44; // starting offset within the file
+ [ 0] UINT64 length : 20; // length of data; if == hunksize, data is uncompressed
+
+ =========================================================================
+
V2 header:
[ 0] char tag[8]; // 'MComprHD'
@@ -86,6 +105,10 @@
[ 60] UINT8 parentmd5[16]; // MD5 checksum of parent file
[ 76] UINT32 seclen; // number of bytes per sector
[ 80] (V2 header length)
+
+ Flags and map format are same as V1
+
+ =========================================================================
V3 header:
@@ -103,6 +126,23 @@
[ 80] UINT8 sha1[20]; // SHA1 checksum of raw data
[100] UINT8 parentsha1[20];// SHA1 checksum of parent file
[120] (V3 header length)
+
+ Flags are the same as V1
+
+ Compression types:
+ CHDCOMPRESSION_NONE = 0
+ CHDCOMPRESSION_ZLIB = 1
+ CHDCOMPRESSION_ZLIB_PLUS = 2
+
+ V3 map format:
+
+ [ 0] UINT64 offset; // starting offset within the file
+ [ 8] UINT32 crc32; // 32-bit CRC of the uncompressed data
+ [ 12] UINT16 length_lo; // lower 16 bits of length
+ [ 14] UINT8 length_hi; // upper 8 bits of length
+ [ 15] UINT8 flags; // flags, indicating compression info
+
+ =========================================================================
V4 header:
@@ -119,90 +159,117 @@
[ 68] UINT8 parentsha1[20];// combined raw+meta SHA1 of parent
[ 88] UINT8 rawsha1[20]; // raw data SHA1
[108] (V4 header length)
-
- Flags:
- 0x00000001 - set if this drive has a parent
- 0x00000002 - set if this drive allows writes
+
+ Flags are the same as V1
+
+ Compression types:
+ CHDCOMPRESSION_NONE = 0
+ CHDCOMPRESSION_ZLIB = 1
+ CHDCOMPRESSION_ZLIB_PLUS = 2
+ CHDCOMPRESSION_AV = 3
+
+ Map format is the same as V3
+
+ =========================================================================
+
+ V5 header:
+
+ [ 0] char tag[8]; // 'MComprHD'
+ [ 8] UINT32 length; // length of header (including tag and length fields)
+ [ 12] UINT32 version; // drive format version
+ [ 16] UINT32 compressors[4];// which custom compressors are used?
+ [ 32] UINT64 logicalbytes; // logical size of the data (in bytes)
+ [ 40] UINT64 mapoffset; // offset to the map
+ [ 48] UINT64 metaoffset; // offset to the first blob of metadata
+ [ 56] UINT32 hunkbytes; // number of bytes per hunk (512k maximum)
+ [ 60] UINT32 unitbytes; // number of bytes per unit within each hunk
+ [ 64] UINT8 rawsha1[20]; // raw data SHA1
+ [ 84] UINT8 sha1[20]; // combined raw+meta SHA1
+ [104] UINT8 parentsha1[20];// combined raw+meta SHA1 of parent
+ [124] (V5 header length)
+
+ If parentsha1 != 0, we have a parent (no need for flags)
+ If compressors[0] == 0, we are uncompressed (including maps)
+
+ V5 uncompressed map format:
+
+ [ 0] UINT32 offset; // starting offset / hunk size
+
+ V5 compressed map format header:
+
+ [ 0] UINT32 length; // length of compressed map
+ [ 4] UINT48 datastart; // offset of first block
+ [ 10] UINT16 crc; // crc-16 of the map
+ [ 12] UINT8 lengthbits; // bits used to encode complength
+ [ 13] UINT8 hunkbits; // bits used to encode self-refs
+ [ 14] UINT8 parentunitbits; // bits used to encode parent unit refs
+ [ 15] UINT8 reserved; // future use
+ [ 16] (compressed header length)
+
+ Each compressed map entry, once expanded, looks like:
+
+ [ 0] UINT8 compression; // compression type
+ [ 1] UINT24 complength; // compressed length
+ [ 4] UINT48 offset; // offset
+ [ 10] UINT16 crc; // crc-16 of the data
***************************************************************************/
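To make the V5 layout above concrete, here is a hedged sketch of decoding the fixed-position header fields from a raw buffer; read_be() and v5_info are illustrative helpers, not names from chd.c (which uses its own private be_read()).

// Sketch: decoding the V5 header per the layout documented above.
// All multi-byte fields are big-endian; expects at least 124 bytes.
#include <cstdint>

static uint64_t read_be(const uint8_t *base, int numbytes)
{
    uint64_t value = 0;
    for (int i = 0; i < numbytes; i++)
        value = (value << 8) | base[i];
    return value;
}

struct v5_info
{
    uint64_t logicalbytes, mapoffset, metaoffset;
    uint32_t hunkbytes, unitbytes;
    bool     compressed;
};

static v5_info parse_v5(const uint8_t *rawheader)
{
    v5_info info;
    info.compressed   = (read_be(&rawheader[16], 4) != 0);   // compressors[0] == 0 means uncompressed
    info.logicalbytes = read_be(&rawheader[32], 8);
    info.mapoffset    = read_be(&rawheader[40], 8);
    info.metaoffset   = read_be(&rawheader[48], 8);
    info.hunkbytes    = uint32_t(read_be(&rawheader[56], 4));
    info.unitbytes    = uint32_t(read_be(&rawheader[60], 4));
    return info;
}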
-/***************************************************************************
- CONSTANTS
-***************************************************************************/
+//**************************************************************************
+// CONSTANTS
+//**************************************************************************
+
+// pseudo-codecs returned by hunk_info
+const chd_codec_type CHD_CODEC_SELF = 1; // copy of another hunk
+const chd_codec_type CHD_CODEC_PARENT = 2; // copy of a parent's hunk
+const chd_codec_type CHD_CODEC_MINI = 3; // legacy "mini" 8-byte repeat
+
+// core types
+typedef UINT32 chd_metadata_tag;
+
+// metadata parameters
+const chd_metadata_tag CHDMETATAG_WILDCARD = 0;
+const UINT32 CHDMETAINDEX_APPEND = ~0;
+
+// metadata flags
+const UINT8 CHD_MDFLAGS_CHECKSUM = 0x01; // indicates data is checksummed
-/* header information */
-#define CHD_HEADER_VERSION 4
-#define CHD_V1_HEADER_SIZE 76
-#define CHD_V2_HEADER_SIZE 80
-#define CHD_V3_HEADER_SIZE 120
-#define CHD_V4_HEADER_SIZE 108
-#define CHD_MAX_HEADER_SIZE CHD_V4_HEADER_SIZE
-
-/* checksumming information */
-#define CHD_MD5_BYTES 16
-#define CHD_SHA1_BYTES 20
-
-/* CHD global flags */
-#define CHDFLAGS_HAS_PARENT 0x00000001
-#define CHDFLAGS_IS_WRITEABLE 0x00000002
-#define CHDFLAGS_UNDEFINED 0xfffffffc
-
-/* compression types */
-#define CHDCOMPRESSION_NONE 0
-#define CHDCOMPRESSION_ZLIB 1
-#define CHDCOMPRESSION_ZLIB_PLUS 2
-#define CHDCOMPRESSION_AV 3
-#define CHDCOMPRESSION_ZLIB_PLUS_WITH_FLAC 4
-
-/* A/V codec configuration parameters */
-#define AV_CODEC_COMPRESS_CONFIG 1
-#define AV_CODEC_DECOMPRESS_CONFIG 2
-
-/* metadata parameters */
-#define CHDMETATAG_WILDCARD 0
-#define CHD_METAINDEX_APPEND ((UINT32)-1)
-
-/* metadata flags */
-#define CHD_MDFLAGS_CHECKSUM 0x01 /* indicates data is checksummed */
-
-/* standard hard disk metadata */
-#define HARD_DISK_METADATA_TAG 0x47444444 /* 'GDDD' */
-#define HARD_DISK_METADATA_FORMAT "CYLS:%d,HEADS:%d,SECS:%d,BPS:%d"
-
-/* hard disk identify information */
-#define HARD_DISK_IDENT_METADATA_TAG 0x49444e54 /* 'IDNT' */
-
-/* hard disk key information */
-#define HARD_DISK_KEY_METADATA_TAG 0x4b455920 /* 'KEY ' */
-
-/* pcmcia CIS information */
-#define PCMCIA_CIS_METADATA_TAG 0x43495320 /* 'CIS ' */
-
-/* standard CD-ROM metadata */
-#define CDROM_OLD_METADATA_TAG 0x43484344 /* 'CHCD' */
-#define CDROM_TRACK_METADATA_TAG 0x43485452 /* 'CHTR' */
-#define CDROM_TRACK_METADATA_FORMAT "TRACK:%d TYPE:%s SUBTYPE:%s FRAMES:%d"
-#define CDROM_TRACK_METADATA2_TAG 0x43485432 /* 'CHT2' */
-#define CDROM_TRACK_METADATA2_FORMAT "TRACK:%d TYPE:%s SUBTYPE:%s FRAMES:%d PREGAP:%d PGTYPE:%s PGSUB:%s POSTGAP:%d"
-
-/* standard A/V metadata */
-#define AV_METADATA_TAG 0x41564156 /* 'AVAV' */
-#define AV_METADATA_FORMAT "FPS:%d.%06d WIDTH:%d HEIGHT:%d INTERLACED:%d CHANNELS:%d SAMPLERATE:%d"
-
-/* A/V laserdisc frame metadata */
-#define AV_LD_METADATA_TAG 0x41564C44 /* 'AVLD' */
-
-/* CHD open values */
-#define CHD_OPEN_READ 1
-#define CHD_OPEN_READWRITE 2
-
-/* error types */
-enum _chd_error
+// standard hard disk metadata
+const chd_metadata_tag HARD_DISK_METADATA_TAG = CHD_MAKE_TAG('G','D','D','D');
+extern const char *HARD_DISK_METADATA_FORMAT;
+
+// hard disk identify information
+const chd_metadata_tag HARD_DISK_IDENT_METADATA_TAG = CHD_MAKE_TAG('I','D','N','T');
+
+// hard disk key information
+const chd_metadata_tag HARD_DISK_KEY_METADATA_TAG = CHD_MAKE_TAG('K','E','Y',' ');
+
+// pcmcia CIS information
+const chd_metadata_tag PCMCIA_CIS_METADATA_TAG = CHD_MAKE_TAG('C','I','S',' ');
+
+// standard CD-ROM metadata
+const chd_metadata_tag CDROM_OLD_METADATA_TAG = CHD_MAKE_TAG('C','H','C','D');
+const chd_metadata_tag CDROM_TRACK_METADATA_TAG = CHD_MAKE_TAG('C','H','T','R');
+extern const char *CDROM_TRACK_METADATA_FORMAT;
+const chd_metadata_tag CDROM_TRACK_METADATA2_TAG = CHD_MAKE_TAG('C','H','T','2');
+extern const char *CDROM_TRACK_METADATA2_FORMAT;
+
+// standard A/V metadata
+const chd_metadata_tag AV_METADATA_TAG = CHD_MAKE_TAG('A','V','A','V');
+extern const char *AV_METADATA_FORMAT;
+
+// A/V laserdisc frame metadata
+const chd_metadata_tag AV_LD_METADATA_TAG = CHD_MAKE_TAG('A','V','L','D');
+
+// error types
+enum chd_error
{
CHDERR_NONE,
CHDERR_NO_INTERFACE,
CHDERR_OUT_OF_MEMORY,
+ CHDERR_NOT_OPEN,
+ CHDERR_ALREADY_OPEN,
CHDERR_INVALID_FILE,
CHDERR_INVALID_PARAMETER,
CHDERR_INVALID_DATA,
@@ -226,166 +293,303 @@ enum _chd_error
CHDERR_INVALID_METADATA,
CHDERR_INVALID_STATE,
CHDERR_OPERATION_PENDING,
- CHDERR_NO_ASYNC_OPERATION,
- CHDERR_UNSUPPORTED_FORMAT
+ CHDERR_UNSUPPORTED_FORMAT,
+ CHDERR_UNKNOWN_COMPRESSION,
+ CHDERR_WALKING_PARENT,
+ CHDERR_COMPRESSING
};
-typedef enum _chd_error chd_error;
-/***************************************************************************
- TYPE DEFINITIONS
-***************************************************************************/
+//**************************************************************************
+// TYPE DEFINITIONS
+//**************************************************************************
-/* opaque types */
-typedef struct _chd_file chd_file;
+class chd_codec;
-/* extract header structure (NOT the on-disk header structure) */
-typedef struct _chd_header chd_header;
-struct _chd_header
-{
- UINT32 length; /* length of header data */
- UINT32 version; /* drive format version */
- UINT32 flags; /* flags field */
- UINT32 compression; /* compression type */
- UINT32 hunkbytes; /* number of bytes per hunk */
- UINT32 totalhunks; /* total # of hunks represented */
- UINT64 logicalbytes; /* logical size of the data */
- UINT64 metaoffset; /* offset in file of first metadata */
- UINT8 md5[CHD_MD5_BYTES]; /* overall MD5 checksum */
- UINT8 parentmd5[CHD_MD5_BYTES]; /* overall MD5 checksum of parent */
- UINT8 sha1[CHD_SHA1_BYTES]; /* overall SHA1 checksum */
- UINT8 rawsha1[CHD_SHA1_BYTES]; /* SHA1 checksum of raw data */
- UINT8 parentsha1[CHD_SHA1_BYTES]; /* overall SHA1 checksum of parent */
-
- UINT32 obsolete_cylinders; /* obsolete field -- do not use! */
- UINT32 obsolete_sectors; /* obsolete field -- do not use! */
- UINT32 obsolete_heads; /* obsolete field -- do not use! */
- UINT32 obsolete_hunksize; /* obsolete field -- do not use! */
-};
+// ======================> chd_file
-
-/* structure for returning information about a verification pass */
-typedef struct _chd_verify_result chd_verify_result;
-struct _chd_verify_result
+// core file class
+class chd_file
{
- UINT8 md5[CHD_MD5_BYTES]; /* overall MD5 checksum */
- UINT8 sha1[CHD_SHA1_BYTES]; /* overall SHA1 checksum */
- UINT8 rawsha1[CHD_SHA1_BYTES]; /* SHA1 checksum of raw data */
- UINT8 metasha1[CHD_SHA1_BYTES]; /* SHA1 checksum of metadata */
+ friend class chd_file_compressor;
+ friend class chd_verifier;
+
+ // constants
+ static const UINT32 HEADER_VERSION = 5;
+ static const UINT32 V3_HEADER_SIZE = 120;
+ static const UINT32 V4_HEADER_SIZE = 108;
+ static const UINT32 V5_HEADER_SIZE = 124;
+ static const UINT32 MAX_HEADER_SIZE = V5_HEADER_SIZE;
+
+public:
+ // construction/destruction
+ chd_file();
+ virtual ~chd_file();
+
+ // operators
+ operator core_file *() { return m_file; }
+
+ // getters
+ bool opened() const { return (m_file != NULL); }
+ UINT32 version() const { return m_version; }
+ UINT64 logical_bytes() const { return m_logicalbytes; }
+ UINT32 hunk_bytes() const { return m_hunkbytes; }
+ UINT32 hunk_count() const { return m_hunkcount; }
+ UINT32 unit_bytes() const { return m_unitbytes; }
+ UINT64 unit_count() const { return m_unitcount; }
+ bool compressed() const { return (m_compression[0] != CHD_CODEC_NONE); }
+ chd_codec_type compression(int index) const { return m_compression[index]; }
+ chd_file *parent() const { return m_parent; }
+ sha1_t sha1();
+ sha1_t raw_sha1();
+ sha1_t parent_sha1();
+ chd_error hunk_info(UINT32 hunknum, chd_codec_type &compressor, UINT32 &compbytes);
+
+ // setters
+ void set_raw_sha1(sha1_t rawdata);
+ void set_parent_sha1(sha1_t parent);
+
+ // file create
+ chd_error create(const char *filename, UINT64 logicalbytes, UINT32 hunkbytes, UINT32 unitbytes, chd_codec_type compression[4]);
+ chd_error create(core_file &file, UINT64 logicalbytes, UINT32 hunkbytes, UINT32 unitbytes, chd_codec_type compression[4]);
+ chd_error create(const char *filename, UINT64 logicalbytes, UINT32 hunkbytes, chd_codec_type compression[4], chd_file &parent);
+ chd_error create(core_file &file, UINT64 logicalbytes, UINT32 hunkbytes, chd_codec_type compression[4], chd_file &parent);
+
+ // file open
+ chd_error open(const char *filename, bool writeable = false, chd_file *parent = NULL);
+ chd_error open(core_file &file, bool writeable = false, chd_file *parent = NULL);
+
+ // file close
+ void close();
+
+ // read/write
+ chd_error read_hunk(UINT32 hunknum, void *buffer);
+ chd_error write_hunk(UINT32 hunknum, const void *buffer);
+ chd_error read_units(UINT64 unitnum, void *buffer, UINT32 count = 1);
+ chd_error write_units(UINT64 unitnum, const void *buffer, UINT32 count = 1);
+ chd_error read_bytes(UINT64 offset, void *buffer, UINT32 bytes);
+ chd_error write_bytes(UINT64 offset, const void *buffer, UINT32 bytes);
+
+ // metadata management
+ chd_error read_metadata(chd_metadata_tag searchtag, UINT32 searchindex, astring &output);
+ chd_error read_metadata(chd_metadata_tag searchtag, UINT32 searchindex, dynamic_buffer &output);
+ chd_error read_metadata(chd_metadata_tag searchtag, UINT32 searchindex, void *output, UINT32 outputlen, UINT32 &resultlen);
+ chd_error read_metadata(chd_metadata_tag searchtag, UINT32 searchindex, dynamic_buffer &output, chd_metadata_tag &resulttag, UINT8 &resultflags);
+ chd_error write_metadata(chd_metadata_tag metatag, UINT32 metaindex, const void *inputbuf, UINT32 inputlen, UINT8 flags = CHD_MDFLAGS_CHECKSUM);
+ chd_error write_metadata(chd_metadata_tag metatag, UINT32 metaindex, const astring &input, UINT8 flags = CHD_MDFLAGS_CHECKSUM) { return write_metadata(metatag, metaindex, input.cstr(), input.len() + 1, flags = CHD_MDFLAGS_CHECKSUM); }
+ chd_error write_metadata(chd_metadata_tag metatag, UINT32 metaindex, const dynamic_buffer &input, UINT8 flags = CHD_MDFLAGS_CHECKSUM) { return write_metadata(metatag, metaindex, input, input.count(), flags = CHD_MDFLAGS_CHECKSUM); }
+ chd_error delete_metadata(chd_metadata_tag metatag, UINT32 metaindex);
+ chd_error clone_all_metadata(chd_file &source);
+
+ // hashing helper
+ sha1_t compute_overall_sha1(sha1_t rawsha1);
+
+ // codec interfaces
+ chd_error codec_configure(chd_codec_type codec, int param, void *config);
+
+ // static helpers
+ static const char *error_string(chd_error err);
+
+private:
+ struct metadata_entry;
+ struct metadata_hash;
+
+ // inline helpers
+ UINT64 be_read(const UINT8 *base, int numbytes);
+ void be_write(UINT8 *base, UINT64 value, int numbytes);
+ sha1_t be_read_sha1(const UINT8 *base);
+ void be_write_sha1(UINT8 *base, sha1_t value);
+ void file_read(UINT64 offset, void *dest, UINT32 length);
+ void file_write(UINT64 offset, const void *source, UINT32 length);
+ UINT64 file_append(const void *source, UINT32 length, UINT32 alignment = 0);
+ UINT8 bits_for_value(UINT64 value);
+
+ // internal helpers
+ UINT32 guess_unitbytes();
+ void parse_v3_header(UINT8 *rawheader, sha1_t &parentsha1);
+ void parse_v4_header(UINT8 *rawheader, sha1_t &parentsha1);
+ void parse_v5_header(UINT8 *rawheader, sha1_t &parentsha1);
+ chd_error compress_v5_map();
+ void decompress_v5_map();
+ chd_error create_common();
+ chd_error open_common(bool writeable);
+ void create_open_common();
+ void verify_proper_compression_append(UINT32 hunknum);
+ void hunk_write_compressed(UINT32 hunknum, INT8 compression, const UINT8 *compressed, UINT32 complength, crc16_t crc16);
+ void hunk_copy_from_self(UINT32 hunknum, UINT32 otherhunk);
+ void hunk_copy_from_parent(UINT32 hunknum, UINT64 parentunit);
+ bool metadata_find(chd_metadata_tag metatag, INT32 metaindex, metadata_entry &metaentry, bool resume = false);
+ void metadata_set_previous_next(UINT64 prevoffset, UINT64 nextoffset);
+ void metadata_update_hash();
+ static int CLIB_DECL metadata_hash_compare(const void *elem1, const void *elem2);
+
+ // file characteristics
+ core_file * m_file; // handle to the open core file
+ bool m_owns_file; // flag indicating if this file should be closed on chd_close()
+ bool m_allow_reads; // permit reads from this CHD?
+ bool m_allow_writes; // permit writes to this CHD?
+
+ // core parameters from the header
+ UINT32 m_version; // version of the header
+ UINT64 m_logicalbytes; // logical size of the raw CHD data in bytes
+ UINT64 m_mapoffset; // offset of map
+ UINT64 m_metaoffset; // offset to first metadata bit
+ UINT32 m_hunkbytes; // size of each raw hunk in bytes
+ UINT32 m_hunkcount; // number of hunks represented
+ UINT32 m_unitbytes; // size of each unit in bytes
+ UINT64 m_unitcount; // number of units represented
+ chd_codec_type m_compression[4]; // array of compression types used
+ chd_file * m_parent; // pointer to parent file, or NULL if none
+ bool m_parent_missing; // are we missing our parent?
+
+ // key offsets within the header
+ UINT64 m_mapoffset_offset; // offset of map offset field
+ UINT64 m_metaoffset_offset;// offset of metaoffset field
+ UINT64 m_sha1_offset; // offset of SHA1 field
+ UINT64 m_rawsha1_offset; // offset of raw SHA1 field
+ UINT64 m_parentsha1_offset;// offset of parent SHA1 field
+
+ // map information
+ UINT32 m_mapentrybytes; // length of each entry in a map
+ dynamic_buffer m_rawmap; // raw map data
+
+ // compression management
+ chd_decompressor * m_decompressor[4]; // array of decompression codecs
+ dynamic_buffer m_compressed; // temporary buffer for compressed data
+
+ // caching
+ dynamic_buffer m_cache; // single-hunk cache for partial reads/writes
+ UINT32 m_cachehunk; // which hunk is in the cache?
};
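To make the shape of the new interface concrete, a hedged usage sketch (not code from this commit; "game.chd" is a placeholder path and error handling is abbreviated):

// Sketch: exercising the new chd_file read/metadata interface.
#include "chd.h"
#include <cstdio>
#include <vector>

static void dump_chd_info()
{
    chd_file chd;
    if (chd.open("game.chd") != CHDERR_NONE)      // read-only, no parent
        return;

    std::vector<UINT8> hunk(chd.hunk_bytes());
    chd.read_hunk(0, &hunk[0]);                   // whole-hunk read
    chd.read_bytes(512, &hunk[0], 256);           // arbitrary byte-level read

    astring metadata;
    if (chd.read_metadata(HARD_DISK_METADATA_TAG, 0, metadata) == CHDERR_NONE)
        printf("GDDD metadata: %s\n", metadata.cstr());

    chd.close();
}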
+// ======================> chd_file_compressor
-/***************************************************************************
- FUNCTION PROTOTYPES
-***************************************************************************/
-
-
-/* ----- CHD file management ----- */
-
-/* create a new CHD file fitting the given description */
-chd_error chd_create(const char *filename, UINT64 logicalbytes, UINT32 hunkbytes, UINT32 compression, chd_file *parent);
-
-/* same as chd_create(), but accepts an already-opened core_file object */
-chd_error chd_create_file(core_file *file, UINT64 logicalbytes, UINT32 hunkbytes, UINT32 compression, chd_file *parent);
-
-/* open an existing CHD file */
-chd_error chd_open(const char *filename, int mode, chd_file *parent, chd_file **chd);
-
-/* same as chd_open(), but accepts an already-opened core_file object */
-chd_error chd_open_file(core_file *file, int mode, chd_file *parent, chd_file **chd);
-
-/* close a CHD file */
-void chd_close(chd_file *chd);
-
-/* return the associated core_file */
-core_file *chd_core_file(chd_file *chd);
-
-/* return an error string for the given CHD error */
-const char *chd_error_string(chd_error err);
-
-
-
-/* ----- CHD header management ----- */
-
-/* return a pointer to the extracted CHD header data */
-const chd_header *chd_get_header(chd_file *chd);
-
-/* set a modified header */
-chd_error chd_set_header(const char *filename, const chd_header *header);
-
-/* same as chd_set_header(), but accepts an already-opened core_file object */
-chd_error chd_set_header_file(core_file *file, const chd_header *header);
-
-
-
-/* ----- core data read/write ----- */
-
-/* read one hunk from the CHD file */
-chd_error chd_read(chd_file *chd, UINT32 hunknum, void *buffer);
-
-/* read one hunk from the CHD file asynchronously */
-chd_error chd_read_async(chd_file *chd, UINT32 hunknum, void *buffer);
-
-/* write one hunk to a CHD file */
-chd_error chd_write(chd_file *chd, UINT32 hunknum, const void *buffer);
-
-/* write one hunk to a CHD file asynchronously */
-chd_error chd_write_async(chd_file *chd, UINT32 hunknum, const void *buffer);
-
-/* wait for a previously issued async read/write to complete and return the error */
-chd_error chd_async_complete(chd_file *chd);
-
-
-
-/* ----- metadata management ----- */
-
-/* get indexed metadata of a particular sort */
-chd_error chd_get_metadata(chd_file *chd, UINT32 searchtag, UINT32 searchindex, void *output, UINT32 outputlen, UINT32 *resultlen, UINT32 *resulttag, UINT8 *resultflags);
-
-/* set indexed metadata of a particular sort */
-chd_error chd_set_metadata(chd_file *chd, UINT32 metatag, UINT32 metaindex, const void *inputbuf, UINT32 inputlen, UINT8 flags);
-
-/* clone all of the metadata from one CHD to another */
-chd_error chd_clone_metadata(chd_file *source, chd_file *dest);
-
-
-
-/* ----- compression management ----- */
-
-/* begin compressing data to a CHD */
-chd_error chd_compress_begin(chd_file *chd);
-
-/* compress the next hunk of data */
-chd_error chd_compress_hunk(chd_file *chd, const void *data, double *curratio, int is_half_hunk = 0);
-
-/* finish compressing data to a CHD */
-chd_error chd_compress_finish(chd_file *chd, int write_protect);
-
-
-
-/* ----- verification management ----- */
-
-/* begin verifying a CHD */
-chd_error chd_verify_begin(chd_file *chd);
-
-/* verify a single hunk of data */
-chd_error chd_verify_hunk(chd_file *chd);
-
-/* finish verifying a CHD, returning the computed MD5 and SHA1 */
-chd_error chd_verify_finish(chd_file *chd, chd_verify_result *result);
-
-
-
-/* ----- codec interfaces ----- */
-
-/* set internal codec parameters */
-chd_error chd_codec_config(chd_file *chd, int param, void *config);
-
-/* return a string description of a codec */
-const char *chd_get_codec_name(UINT32 codec);
+// class for creating a new compressed CHD
+class chd_file_compressor : public chd_file
+{
+public:
+ // construction/destruction
+ chd_file_compressor();
+ virtual ~chd_file_compressor();
+
+ // compression management
+ void compress_begin();
+ chd_error compress_continue(double &progress, double &ratio);
+
+protected:
+ // required override: read more data
+ virtual UINT32 read_data(void *dest, UINT64 offset, UINT32 length) = 0;
+
+private:
+ // hash map for looking up values
+ class hashmap
+ {
+ public:
+ // construction/destruction
+ hashmap();
+ ~hashmap();
+
+ // operations
+ void reset();
+ UINT64 find(crc16_t crc16, sha1_t sha1);
+ void add(UINT64 itemnum, crc16_t crc16, sha1_t sha1);
+
+ // constants
+ static const UINT64 NOT_FOUND = ~UINT64(0);
+ private:
+ // internal entry
+ struct entry_t
+ {
+ entry_t * m_next; // next entry in list
+ UINT64 m_itemnum; // item number
+ sha1_t m_sha1; // SHA-1 of the block
+ };
+
+ // block of entries
+ struct entry_block
+ {
+ entry_block(entry_block *prev)
+ : m_next(prev), m_nextalloc(0) { }
+
+ entry_block * m_next; // next block in list
+ UINT32 m_nextalloc; // next to be allocated
+ entry_t m_array[16384]; // array of entries
+ };
+
+ // internal state
+ entry_t * m_map[65536]; // map, hashed by CRC-16
+ entry_block * m_block_list; // list of allocated blocks
+ };
+
+ // status of a given work item
+ enum work_status
+ {
+ WS_READY = 0,
+ WS_READING,
+ WS_QUEUED,
+ WS_COMPLETE
+ };
+
+ // a CRC-16/SHA-1 pair
+ struct hash_pair
+ {
+ crc16_t m_crc16; // calculated CRC-16
+ sha1_t m_sha1; // calculated SHA-1
+ };
+
+ // a single work item
+ struct work_item
+ {
+ osd_work_item * m_osd; // OSD work item running on this block
+ chd_file_compressor *m_compressor; // pointer back to the compressor
+ volatile work_status m_status; // current status of this item
+ UINT32 m_hunknum; // number of the hunk we're working on
+ UINT8 * m_data; // pointer to the data we are working on
+ UINT8 * m_compressed; // pointer to the compressed data
+ UINT32 m_complen; // compressed data length
+ INT8 m_compression; // type of compression used
+ chd_compressor_group *m_codecs; // codec instance
+ dynamic_array<hash_pair> m_hash; // array of hashes
+ };
+
+ // internal helpers
+ static void *async_walk_parent_static(void *param, int threadid);
+ void async_walk_parent(work_item &item);
+ static void *async_compress_hunk_static(void *param, int threadid);
+ void async_compress_hunk(work_item &item, int threadid);
+ static void *async_read_static(void *param, int threadid);
+ void async_read();
+
+ // current compression status
+ bool m_walking_parent; // are we building the parent map?
+ UINT64 m_total_in; // total bytes in
+ UINT64 m_total_out; // total bytes out
+ sha1_creator m_compsha1; // running SHA-1 on raw data
+
+ // hash lookup maps
+ hashmap m_parent_map; // hash map for parent
+ hashmap m_current_map; // hash map for current
+
+ // read I/O thread
+ osd_work_queue * m_read_queue; // work queue for reading
+ UINT64 m_read_queue_offset;// next offset to enqueue
+ UINT64 m_read_done_offset; // next offset that will complete
+ bool m_read_error; // error during reading?
+
+ // work item thread
+ static const int WORK_BUFFER_HUNKS = 256;
+ osd_work_queue * m_work_queue; // queue for doing work on other threads
+ dynamic_buffer m_work_buffer; // buffer containing hunk data to work on
+ dynamic_buffer m_compressed_buffer;// buffer containing compressed data
+ work_item m_work_item[WORK_BUFFER_HUNKS]; // status of each hunk
+ chd_compressor_group * m_codecs[WORK_MAX_THREADS]; // codecs to use
+
+ // output state
+ UINT32 m_write_hunk; // next hunk to write
+};
-#endif /* __CHD_H__ */
+#endif // __CHD_H__
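Because read_data() is the only required override, hooking a new data source into the compressor is small. A hypothetical sketch (raw_file_compressor is illustrative, not part of MAME; chdman's real implementations differ), followed by the assumed driving loop:

// Sketch: a chd_file_compressor subclass that reads from a plain file.
#include "chd.h"
#include <cstdio>

class raw_file_compressor : public chd_file_compressor
{
public:
    raw_file_compressor(FILE *file) : m_file(file) { }

protected:
    // the one required override: hand back the next chunk of source data
    virtual UINT32 read_data(void *dest, UINT64 offset, UINT32 length)
    {
        fseek(m_file, long(offset), SEEK_SET);    // sketch: assumes offsets fit in a long
        return UINT32(fread(dest, 1, length, m_file));
    }

private:
    FILE *m_file;       // already-opened source file
};

// Assumed driving loop, once create() has set up the output CHD:
//   compressor.compress_begin();
//   double progress, ratio;
//   chd_error err;
//   do { err = compressor.compress_continue(progress, ratio); }
//   while (err == CHDERR_WALKING_PARENT || err == CHDERR_COMPRESSING);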
diff --git a/src/lib/util/chdcd.c b/src/lib/util/chdcd.c
index 0d8c02077d7..afd577129c4 100644
--- a/src/lib/util/chdcd.c
+++ b/src/lib/util/chdcd.c
@@ -304,7 +304,7 @@ UINT64 read_uint64(FILE *infile)
 chdcd_parse_nero - parse a Nero NRG file
-------------------------------------------------*/
-chd_error chdcd_parse_nero(const char *tocfname, cdrom_toc *outtoc, chdcd_track_input_info *outinfo)
+chd_error chdcd_parse_nero(const char *tocfname, cdrom_toc &outtoc, chdcd_track_input_info &outinfo)
{
FILE *infile;
unsigned char buffer[12];
@@ -322,8 +322,8 @@ chd_error chdcd_parse_nero(const char *tocfname, cdrom_toc *outtoc, chdcd_track_
}
/* clear structures */
- memset(outtoc, 0, sizeof(cdrom_toc));
- memset(outinfo, 0, sizeof(chdcd_track_input_info));
+ memset(&outtoc, 0, sizeof(outtoc));
+ outinfo.reset();
// seek to 12 bytes before the end
fseek(infile, -12, SEEK_END);
@@ -372,7 +372,7 @@ chd_error chdcd_parse_nero(const char *tocfname, cdrom_toc *outtoc, chdcd_track_
// printf("TOC type: %08x. Start track %d End track: %d\n", toc_type, start, end);
- outtoc->numtrks = (end-start) + 1;
+ outtoc.numtrks = (end-start) + 1;
offset = 0;
for (track = start; track <= end; track++)
@@ -388,27 +388,26 @@ chd_error chdcd_parse_nero(const char *tocfname, cdrom_toc *outtoc, chdcd_track_
index2 = read_uint64(infile);
// printf("Track %d: sector size %d mode %x index0 %llx index1 %llx index2 %llx (pregap %d sectors, length %d sectors)\n", track, size, mode, index0, index1, index2, (UINT32)(index1-index0)/size, (UINT32)(index2-index1)/size);
- strncpy(outinfo->fname[track-1], path.cstr(), 256);
- strncat(outinfo->fname[track-1], tocfname, 256);
- outinfo->offset[track-1] = offset + (UINT32)(index1-index0);
- outinfo->idx0offs[track-1] = 0;
- outinfo->idx1offs[track-1] = 0;
+ outinfo.track[track-1].fname.cpy(path.cstr()).cat(tocfname);
+ outinfo.track[track-1].offset = offset + (UINT32)(index1-index0);
+ outinfo.track[track-1].idx0offs = 0;
+ outinfo.track[track-1].idx1offs = 0;
switch (mode>>24)
{
case 0x00: // 2048 byte data
- outtoc->tracks[track-1].trktype = CD_TRACK_MODE1;
- outinfo->swap[track-1] = 0;
+ outtoc.tracks[track-1].trktype = CD_TRACK_MODE1;
+ outinfo.track[track-1].swap = false;
break;
case 0x06: // 2352 byte mode 2 raw
- outtoc->tracks[track-1].trktype = CD_TRACK_MODE2_RAW;
- outinfo->swap[track-1] = 0;
+ outtoc.tracks[track-1].trktype = CD_TRACK_MODE2_RAW;
+ outinfo.track[track-1].swap = false;
break;
case 0x07: // 2352 byte audio
- outtoc->tracks[track-1].trktype = CD_TRACK_AUDIO;
- outinfo->swap[track-1] = 1;
+ outtoc.tracks[track-1].trktype = CD_TRACK_AUDIO;
+ outinfo.track[track-1].swap = true;
break;
default:
@@ -416,18 +415,18 @@ chd_error chdcd_parse_nero(const char *tocfname, cdrom_toc *outtoc, chdcd_track_
break;
}
- outtoc->tracks[track-1].datasize = size;
+ outtoc.tracks[track-1].datasize = size;
- outtoc->tracks[track-1].subtype = CD_SUB_NONE;
- outtoc->tracks[track-1].subsize = 0;
+ outtoc.tracks[track-1].subtype = CD_SUB_NONE;
+ outtoc.tracks[track-1].subsize = 0;
- outtoc->tracks[track-1].pregap = (UINT32)(index1-index0)/size;
- outtoc->tracks[track-1].frames = (UINT32)(index2-index1)/size;
- outtoc->tracks[track-1].postgap = 0;
- outtoc->tracks[track-1].pgtype = 0;
- outtoc->tracks[track-1].pgsub = CD_SUB_NONE;
- outtoc->tracks[track-1].pgdatasize = 0;
- outtoc->tracks[track-1].pgsubsize = 0;
+ outtoc.tracks[track-1].pregap = (UINT32)(index1-index0)/size;
+ outtoc.tracks[track-1].frames = (UINT32)(index2-index1)/size;
+ outtoc.tracks[track-1].postgap = 0;
+ outtoc.tracks[track-1].pgtype = 0;
+ outtoc.tracks[track-1].pgsub = CD_SUB_NONE;
+ outtoc.tracks[track-1].pgdatasize = 0;
+ outtoc.tracks[track-1].pgsubsize = 0;
offset += (UINT32)index2-index1;
}
@@ -452,7 +451,7 @@ chd_error chdcd_parse_nero(const char *tocfname, cdrom_toc *outtoc, chdcd_track_
chdcd_parse_gdi - parse a Sega GD-ROM rip
-------------------------------------------------*/
-static chd_error chdcd_parse_gdi(const char *tocfname, cdrom_toc *outtoc, chdcd_track_input_info *outinfo)
+static chd_error chdcd_parse_gdi(const char *tocfname, cdrom_toc &outtoc, chdcd_track_input_info &outinfo)
{
FILE *infile;
int i, numtracks;
@@ -469,8 +468,8 @@ static chd_error chdcd_parse_gdi(const char *tocfname, cdrom_toc *outtoc, chdcd_
}
/* clear structures */
- memset(outtoc, 0, sizeof(cdrom_toc));
- memset(outinfo, 0, sizeof(chdcd_track_input_info));
+ memset(&outtoc, 0, sizeof(outtoc));
+ outinfo.reset();
fgets(linebuffer,511,infile);
@@ -491,16 +490,16 @@ static chd_error chdcd_parse_gdi(const char *tocfname, cdrom_toc *outtoc, chdcd_
trknum=atoi(tok)-1;
- outinfo->swap[trknum]=0;
- outinfo->offset[trknum]=0;
+ outinfo.track[trknum].swap=false;
+ outinfo.track[trknum].offset=0;
- //outtoc->tracks[trknum].trktype = CD_TRACK_MODE1;
- outtoc->tracks[trknum].datasize = 0;
- outtoc->tracks[trknum].subtype = CD_SUB_NONE;
- outtoc->tracks[trknum].subsize = 0;
+ //outtoc.tracks[trknum].trktype = CD_TRACK_MODE1;
+ outtoc.tracks[trknum].datasize = 0;
+ outtoc.tracks[trknum].subtype = CD_SUB_NONE;
+ outtoc.tracks[trknum].subsize = 0;
tok=strtok(NULL," ");
- outtoc->tracks[trknum].physframeofs=atoi(tok);
+ outtoc.tracks[trknum].physframeofs=atoi(tok);
tok=strtok(NULL," ");
trktype=atoi(tok);
@@ -510,19 +509,19 @@ static chd_error chdcd_parse_gdi(const char *tocfname, cdrom_toc *outtoc, chdcd_
if(trktype==4 && trksize==2352)
{
- outtoc->tracks[trknum].trktype=CD_TRACK_MODE1_RAW;
- outtoc->tracks[trknum].datasize=2352;
+ outtoc.tracks[trknum].trktype=CD_TRACK_MODE1_RAW;
+ outtoc.tracks[trknum].datasize=2352;
}
if(trktype==4 && trksize==2048)
{
- outtoc->tracks[trknum].trktype=CD_TRACK_MODE1;
- outtoc->tracks[trknum].datasize=2048;
+ outtoc.tracks[trknum].trktype=CD_TRACK_MODE1;
+ outtoc.tracks[trknum].datasize=2048;
}
if(trktype==0)
{
//assert(trksize==2352);
- outtoc->tracks[trknum].trktype=CD_TRACK_AUDIO;
- outtoc->tracks[trknum].datasize=2352;
+ outtoc.tracks[trknum].trktype=CD_TRACK_AUDIO;
+ outtoc.tracks[trknum].datasize=2352;
}
astring name;
@@ -539,43 +538,42 @@ static chd_error chdcd_parse_gdi(const char *tocfname, cdrom_toc *outtoc, chdcd_
} while(tok!=NULL && (strrchr(tok,'"')-tok !=(strlen(tok)-1)));
name = name.delchr('"');
}
- strncpy(outinfo->fname[trknum], path.cstr(), 256);
- strncat(outinfo->fname[trknum], name, 256);
+ outinfo.track[trknum].fname.cpy(path.cstr()).cat(name);
- sz=get_file_size(outinfo->fname[trknum]);
+ sz=get_file_size(outinfo.track[trknum].fname);
- outtoc->tracks[trknum].frames=sz/trksize;
- outtoc->tracks[trknum].extraframes=0;
+ outtoc.tracks[trknum].frames=sz/trksize;
+ outtoc.tracks[trknum].extraframes=0;
if(trknum!=0)
{
- int dif=outtoc->tracks[trknum].physframeofs-(outtoc->tracks[trknum-1].frames+outtoc->tracks[trknum-1].physframeofs);
- outtoc->tracks[trknum-1].frames+=dif;
+ int dif=outtoc.tracks[trknum].physframeofs-(outtoc.tracks[trknum-1].frames+outtoc.tracks[trknum-1].physframeofs);
+ outtoc.tracks[trknum-1].frames+=dif;
}
/*
if(trknum!=0)
{
- outtoc->tracks[trknum-1].extraframes=outtoc->tracks[trknum].physframeofs-(outtoc->tracks[trknum-1].frames+outtoc->tracks[trknum-1].physframeofs);
+ outtoc.tracks[trknum-1].extraframes=outtoc.tracks[trknum].physframeofs-(outtoc.tracks[trknum-1].frames+outtoc.tracks[trknum-1].physframeofs);
}
*/
- hunks = (outtoc->tracks[trknum].frames+CD_FRAMES_PER_HUNK - 1) / CD_FRAMES_PER_HUNK;
- outtoc->tracks[trknum].extraframes = hunks * CD_FRAMES_PER_HUNK - outtoc->tracks[trknum].frames;
+ hunks = (outtoc.tracks[trknum].frames+CD_FRAMES_PER_HUNK - 1) / CD_FRAMES_PER_HUNK;
+ outtoc.tracks[trknum].extraframes = hunks * CD_FRAMES_PER_HUNK - outtoc.tracks[trknum].frames;
- //chdpos+=outtoc->tracks[trknum].frames+outtoc->tracks[trknum].extraframes;
+ //chdpos+=outtoc.tracks[trknum].frames+outtoc.tracks[trknum].extraframes;
}
/*
for(i=0;i<numtracks;++i)
{
- printf("%s %d %d %d\n",outinfo->fname[i],outtoc->tracks[i].frames,outtoc->tracks[i].extraframes,outtoc->tracks[i].physframeofs);
+ printf("%s %d %d %d\n",outinfo.track[i].fname,outtoc.tracks[i].frames,outtoc.tracks[i].extraframes,outtoc.tracks[i].physframeofs);
}
*/
/* close the input TOC */
fclose(infile);
/* store the number of tracks found */
- outtoc->numtrks = numtracks;
+ outtoc.numtrks = numtracks;
return CHDERR_NONE;
}
@@ -584,7 +582,7 @@ static chd_error chdcd_parse_gdi(const char *tocfname, cdrom_toc *outtoc, chdcd_
 chdcd_parse_cue - parse a CDRWin format CUE file
-------------------------------------------------*/
-chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_input_info *outinfo)
+chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc &outtoc, chdcd_track_input_info &outinfo)
{
FILE *infile;
int i, trknum;
@@ -601,8 +599,8 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
}
/* clear structures */
- memset(outtoc, 0, sizeof(cdrom_toc));
- memset(outinfo, 0, sizeof(chdcd_track_input_info));
+ memset(&outtoc, 0, sizeof(outtoc));
+ outinfo.reset();
trknum = -1;
wavoffs = wavlen = 0;
@@ -633,11 +631,11 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
if (!strcmp(token, "BINARY"))
{
- outinfo->swap[trknum] = 0;
+ outinfo.track[trknum].swap = false;
}
else if (!strcmp(token, "MOTOROLA"))
{
- outinfo->swap[trknum] = 1;
+ outinfo.track[trknum].swap = true;
}
else if (!strcmp(token, "WAVE"))
{
@@ -672,28 +670,28 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
if (wavlen != 0)
{
- outtoc->tracks[trknum].trktype = CD_TRACK_AUDIO;
- outtoc->tracks[trknum].frames = wavlen/2352;
- outinfo->offset[trknum] = wavoffs;
+ outtoc.tracks[trknum].trktype = CD_TRACK_AUDIO;
+ outtoc.tracks[trknum].frames = wavlen/2352;
+ outinfo.track[trknum].offset = wavoffs;
wavoffs = wavlen = 0;
}
else
{
- outtoc->tracks[trknum].trktype = CD_TRACK_MODE1;
- outtoc->tracks[trknum].datasize = 0;
- outinfo->offset[trknum] = 0;
+ outtoc.tracks[trknum].trktype = CD_TRACK_MODE1;
+ outtoc.tracks[trknum].datasize = 0;
+ outinfo.track[trknum].offset = 0;
}
- outtoc->tracks[trknum].subtype = CD_SUB_NONE;
- outtoc->tracks[trknum].subsize = 0;
- outtoc->tracks[trknum].pregap = 0;
- outinfo->idx0offs[trknum] = -1;
- outinfo->idx1offs[trknum] = 0;
+ outtoc.tracks[trknum].subtype = CD_SUB_NONE;
+ outtoc.tracks[trknum].subsize = 0;
+ outtoc.tracks[trknum].pregap = 0;
+ outinfo.track[trknum].idx0offs = -1;
+ outinfo.track[trknum].idx1offs = 0;
- strncpy(outinfo->fname[trknum], lastfname, 256); // default filename to the last one
-// printf("trk %d: fname %s offset %d\n", trknum, &outinfo->fname[trknum][0], outinfo->offset[trknum]);
+ outinfo.track[trknum].fname.cpy(lastfname); // default filename to the last one
+// printf("trk %d: fname %s offset %d\n", trknum, outinfo.track[trknum].fname.cstr(), outinfo.track[trknum].offset);
- cdrom_convert_type_string_to_track_info(token, &outtoc->tracks[trknum]);
- if (outtoc->tracks[trknum].datasize == 0)
+ cdrom_convert_type_string_to_track_info(token, &outtoc.tracks[trknum]);
+ if (outtoc.tracks[trknum].datasize == 0)
{
printf("ERROR: Unknown track type [%s]. Contact MAMEDEV.\n", token);
return CHDERR_FILE_NOT_FOUND;
@@ -702,7 +700,7 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
/* next (optional) token on the line is the subcode type */
TOKENIZE
- cdrom_convert_subtype_string_to_track_info(token, &outtoc->tracks[trknum]);
+ cdrom_convert_subtype_string_to_track_info(token, &outtoc.tracks[trknum]);
}
else if (!strcmp(token, "INDEX")) /* only in bin/cue files */
{
@@ -718,14 +716,14 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
if (idx == 0)
{
- outinfo->idx0offs[trknum] = frames;
+ outinfo.track[trknum].idx0offs = frames;
}
else if (idx == 1)
{
- outinfo->idx1offs[trknum] = frames;
- if ((outtoc->tracks[trknum].pregap == 0) && (outinfo->idx0offs[trknum] != -1))
+ outinfo.track[trknum].idx1offs = frames;
+ if ((outtoc.tracks[trknum].pregap == 0) && (outinfo.track[trknum].idx0offs != -1))
{
- outtoc->tracks[trknum].pregap = frames - outinfo->idx0offs[trknum];
+ outtoc.tracks[trknum].pregap = frames - outinfo.track[trknum].idx0offs;
}
}
}
@@ -737,7 +735,7 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
TOKENIZE
frames = msf_to_frames( token );
- outtoc->tracks[trknum].pregap = frames;
+ outtoc.tracks[trknum].pregap = frames;
}
else if (!strcmp(token, "POSTGAP"))
{
@@ -747,7 +745,7 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
TOKENIZE
frames = msf_to_frames( token );
- outtoc->tracks[trknum].postgap = frames;
+ outtoc.tracks[trknum].postgap = frames;
}
}
}
@@ -756,67 +754,67 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
fclose(infile);
/* store the number of tracks found */
- outtoc->numtrks = trknum + 1;
+ outtoc.numtrks = trknum + 1;
/* now go over the files again and set the lengths */
- for (trknum = 0; trknum < outtoc->numtrks; trknum++)
+ for (trknum = 0; trknum < outtoc.numtrks; trknum++)
{
UINT64 tlen = 0;
// this is true for cue/bin and cue/iso, and we need it for cue/wav since .WAV is little-endian
- if (outtoc->tracks[trknum].trktype == CD_TRACK_AUDIO)
+ if (outtoc.tracks[trknum].trktype == CD_TRACK_AUDIO)
{
- outinfo->swap[trknum] = 1;
+ outinfo.track[trknum].swap = true;
}
// don't do this for .WAV tracks, we already have their length and offset filled out
- if (outinfo->offset[trknum] == 0)
+ if (outinfo.track[trknum].offset == 0)
{
// is this the last track?
- if (trknum == (outtoc->numtrks-1))
+ if (trknum == (outtoc.numtrks-1))
{
/* if we have the same filename as the last track, do it that way */
- if (!strcmp(&outinfo->fname[trknum][0], &outinfo->fname[trknum-1][0]))
+ if (outinfo.track[trknum].fname == outinfo.track[trknum-1].fname)
{
- tlen = get_file_size(outinfo->fname[trknum]);
+ tlen = get_file_size(outinfo.track[trknum].fname);
if (tlen == 0)
{
- printf("ERROR: couldn't find bin file [%s]\n", outinfo->fname[trknum-1]);
+ printf("ERROR: couldn't find bin file [%s]\n", outinfo.track[trknum-1].fname.cstr());
return CHDERR_FILE_NOT_FOUND;
}
- outinfo->offset[trknum] = outinfo->offset[trknum-1] + outtoc->tracks[trknum-1].frames * (outtoc->tracks[trknum-1].datasize + outtoc->tracks[trknum-1].subsize);
- outtoc->tracks[trknum].frames = (tlen - outinfo->offset[trknum]) / (outtoc->tracks[trknum].datasize + outtoc->tracks[trknum].subsize);
+ outinfo.track[trknum].offset = outinfo.track[trknum-1].offset + outtoc.tracks[trknum-1].frames * (outtoc.tracks[trknum-1].datasize + outtoc.tracks[trknum-1].subsize);
+ outtoc.tracks[trknum].frames = (tlen - outinfo.track[trknum].offset) / (outtoc.tracks[trknum].datasize + outtoc.tracks[trknum].subsize);
}
else /* data files are different */
{
- tlen = get_file_size(outinfo->fname[trknum]);
+ tlen = get_file_size(outinfo.track[trknum].fname);
if (tlen == 0)
{
- printf("ERROR: couldn't find bin file [%s]\n", outinfo->fname[trknum-1]);
+ printf("ERROR: couldn't find bin file [%s]\n", outinfo.track[trknum-1].fname.cstr());
return CHDERR_FILE_NOT_FOUND;
}
- tlen /= (outtoc->tracks[trknum].datasize + outtoc->tracks[trknum].subsize);
- outtoc->tracks[trknum].frames = tlen;
- outinfo->offset[trknum] = 0;
+ tlen /= (outtoc.tracks[trknum].datasize + outtoc.tracks[trknum].subsize);
+ outtoc.tracks[trknum].frames = tlen;
+ outinfo.track[trknum].offset = 0;
}
}
else
{
/* if we have the same filename as the next track, do it that way */
- if (!strcmp(&outinfo->fname[trknum][0], &outinfo->fname[trknum+1][0]))
+ if (outinfo.track[trknum].fname == outinfo.track[trknum+1].fname)
{
- outtoc->tracks[trknum].frames = outinfo->idx1offs[trknum+1] - outinfo->idx1offs[trknum];
+ outtoc.tracks[trknum].frames = outinfo.track[trknum+1].idx1offs - outinfo.track[trknum].idx1offs;
if (trknum == 0) // track 0 offset is 0
{
- outinfo->offset[trknum] = 0;
+ outinfo.track[trknum].offset = 0;
}
else
{
- outinfo->offset[trknum] = outinfo->offset[trknum-1] + outtoc->tracks[trknum-1].frames * (outtoc->tracks[trknum-1].datasize + outtoc->tracks[trknum-1].subsize);
+ outinfo.track[trknum].offset = outinfo.track[trknum-1].offset + outtoc.tracks[trknum-1].frames * (outtoc.tracks[trknum-1].datasize + outtoc.tracks[trknum-1].subsize);
}
- if (!outtoc->tracks[trknum].frames)
+ if (!outtoc.tracks[trknum].frames)
{
printf("ERROR: unable to determine size of track %d, missing INDEX 01 markers?\n", trknum+1);
return CHDERR_FILE_NOT_FOUND;
@@ -824,19 +822,19 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
}
else /* data files are different */
{
- tlen = get_file_size(outinfo->fname[trknum]);
+ tlen = get_file_size(outinfo.track[trknum].fname);
if (tlen == 0)
{
- printf("ERROR: couldn't find bin file [%s]\n", outinfo->fname[trknum]);
+ printf("ERROR: couldn't find bin file [%s]\n", outinfo.track[trknum].fname.cstr());
return CHDERR_FILE_NOT_FOUND;
}
- tlen /= (outtoc->tracks[trknum].datasize + outtoc->tracks[trknum].subsize);
- outtoc->tracks[trknum].frames = tlen;
- outinfo->offset[trknum] = 0;
+ tlen /= (outtoc.tracks[trknum].datasize + outtoc.tracks[trknum].subsize);
+ outtoc.tracks[trknum].frames = tlen;
+ outinfo.track[trknum].offset = 0;
}
}
}
- printf("trk %d: %d frames @ offset %d\n", trknum+1, outtoc->tracks[trknum].frames, outinfo->offset[trknum]);
+ printf("trk %d: %d frames @ offset %d\n", trknum+1, outtoc.tracks[trknum].frames, outinfo.track[trknum].offset);
}
return CHDERR_NONE;
@@ -846,7 +844,7 @@ chd_error chdcd_parse_cue(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
chdcd_parse_toc - parse a CDRDAO format TOC file
-------------------------------------------------*/
-chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_input_info *outinfo)
+chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc &outtoc, chdcd_track_input_info &outinfo)
{
FILE *infile;
int i, trknum;
@@ -885,8 +883,8 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
}
/* clear structures */
- memset(outtoc, 0, sizeof(cdrom_toc));
- memset(outinfo, 0, sizeof(chdcd_track_input_info));
+ memset(&outtoc, 0, sizeof(outtoc));
+ outinfo.reset();
trknum = -1;
@@ -910,8 +908,7 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
TOKENIZE
/* keep the filename */
- strncpy(outinfo->fname[trknum], path.cstr(), 256);
- strncat(outinfo->fname[trknum], token, 256);
+ outinfo.track[trknum].fname.cpy(path.cstr()).cat(token);
/* get either the offset or the length */
TOKENIZE
@@ -920,11 +917,11 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
{
TOKENIZE
- outinfo->swap[trknum] = 1;
+ outinfo.track[trknum].swap = true;
}
else
{
- outinfo->swap[trknum] = 0;
+ outinfo.track[trknum].swap = false;
}
if (token[0] == '#')
@@ -937,14 +934,14 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
/* convert the time to an offset */
f = msf_to_frames( token );
- f *= (outtoc->tracks[trknum].datasize + outtoc->tracks[trknum].subsize);
+ f *= (outtoc.tracks[trknum].datasize + outtoc.tracks[trknum].subsize);
}
else
{
f = 0;
}
- outinfo->offset[trknum] = f;
+ outinfo.track[trknum].offset = f;
TOKENIZE
@@ -958,19 +955,19 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
if (isdigit((UINT8)token[0]))
{
// it was an offset.
- f *= (outtoc->tracks[trknum].datasize + outtoc->tracks[trknum].subsize);
+ f *= (outtoc.tracks[trknum].datasize + outtoc.tracks[trknum].subsize);
- outinfo->offset[trknum] += f;
+ outinfo.track[trknum].offset += f;
// this is the length.
f = msf_to_frames( token );
}
}
- else if( trknum == 0 && outinfo->offset[trknum] != 0 )
+ else if( trknum == 0 && outinfo.track[trknum].offset != 0 )
{
/* the 1st track might have a length with no offset */
- f = outinfo->offset[trknum] / (outtoc->tracks[trknum].datasize + outtoc->tracks[trknum].subsize);
- outinfo->offset[trknum] = 0;
+ f = outinfo.track[trknum].offset / (outtoc.tracks[trknum].datasize + outtoc.tracks[trknum].subsize);
+ outinfo.track[trknum].offset = 0;
}
else
{
@@ -978,7 +975,7 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
f = 0;
}
- outtoc->tracks[trknum].frames = f;
+ outtoc.tracks[trknum].frames = f;
}
else if (!strcmp(token, "TRACK"))
{
@@ -987,13 +984,13 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
/* next token on the line is the track type */
TOKENIZE
- outtoc->tracks[trknum].trktype = CD_TRACK_MODE1;
- outtoc->tracks[trknum].datasize = 0;
- outtoc->tracks[trknum].subtype = CD_SUB_NONE;
- outtoc->tracks[trknum].subsize = 0;
+ outtoc.tracks[trknum].trktype = CD_TRACK_MODE1;
+ outtoc.tracks[trknum].datasize = 0;
+ outtoc.tracks[trknum].subtype = CD_SUB_NONE;
+ outtoc.tracks[trknum].subsize = 0;
- cdrom_convert_type_string_to_track_info(token, &outtoc->tracks[trknum]);
- if (outtoc->tracks[trknum].datasize == 0)
+ cdrom_convert_type_string_to_track_info(token, &outtoc.tracks[trknum]);
+ if (outtoc.tracks[trknum].datasize == 0)
{
printf("ERROR: Unknown track type [%s]. Contact MAMEDEV.\n", token);
return CHDERR_FILE_NOT_FOUND;
@@ -1002,7 +999,7 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
/* next (optional) token on the line is the subcode type */
TOKENIZE
- cdrom_convert_subtype_string_to_track_info(token, &outtoc->tracks[trknum]);
+ cdrom_convert_subtype_string_to_track_info(token, &outtoc.tracks[trknum]);
}
else if (!strcmp(token, "START"))
{
@@ -1012,7 +1009,7 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
TOKENIZE
frames = msf_to_frames( token );
- outtoc->tracks[trknum].pregap = frames;
+ outtoc.tracks[trknum].pregap = frames;
}
}
}
@@ -1021,7 +1018,7 @@ chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_i
fclose(infile);
/* store the number of tracks found */
- outtoc->numtrks = trknum + 1;
+ outtoc.numtrks = trknum + 1;
return CHDERR_NONE;
}
diff --git a/src/lib/util/chdcd.h b/src/lib/util/chdcd.h
index 3228828a160..df799371603 100644
--- a/src/lib/util/chdcd.h
+++ b/src/lib/util/chdcd.h
@@ -14,17 +14,26 @@
#include "cdrom.h"
-typedef struct _chdcd_track_input_info chdcd_track_input_info;
-struct _chdcd_track_input_info /* used only at compression time */
+struct chdcd_track_input_entry
{
- char fname[CD_MAX_TRACKS][256]; /* filename for each track */
- UINT32 offset[CD_MAX_TRACKS]; /* offset in the data file for each track */
- int swap[CD_MAX_TRACKS]; /* data needs to be byte swapped */
- UINT32 idx0offs[CD_MAX_TRACKS];
- UINT32 idx1offs[CD_MAX_TRACKS];
+ chdcd_track_input_entry() { reset(); }
+ void reset() { fname.reset(); offset = idx0offs = idx1offs = 0; swap = false; }
+
+ astring fname; // filename for each track
+ UINT32 offset; // offset in the data file for each track
+ bool swap; // data needs to be byte swapped
+ UINT32 idx0offs;
+ UINT32 idx1offs;
+};
+
+struct chdcd_track_input_info
+{
+ void reset() { for (int i = 0; i < CD_MAX_TRACKS; i++) track[i].reset(); }
+
+ chdcd_track_input_entry track[CD_MAX_TRACKS];
};
-chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc *outtoc, chdcd_track_input_info *outinfo);
+chd_error chdcd_parse_toc(const char *tocfname, cdrom_toc &outtoc, chdcd_track_input_info &outinfo);
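+
+// Illustrative usage sketch: with both output parameters now passed by
+// reference, a caller might look like this ("game.toc" is only a placeholder
+// filename):
+//
+//   cdrom_toc toc;
+//   chdcd_track_input_info info;
+//   memset(&toc, 0, sizeof(toc));       // start from a clean TOC
+//   info.reset();                       // and clean per-track input info
+//   chd_error err = chdcd_parse_toc("game.toc", toc, info);
+//   if (err == CHDERR_NONE)
+//       printf("parsed %d tracks\n", toc.numtrks);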
#endif /* __CHDCD_H__ */
diff --git a/src/lib/util/chdcodec.c b/src/lib/util/chdcodec.c
new file mode 100644
index 00000000000..b0649cbd29b
--- /dev/null
+++ b/src/lib/util/chdcodec.c
@@ -0,0 +1,1325 @@
+/***************************************************************************
+
+ chdcodec.c
+
+ Codecs used by the CHD format
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "chd.h"
+#include "hashing.h"
+#include "avhuff.h"
+#include "flac.h"
+#include <zlib.h>
+#include "lib7z/lzmaenc.h"
+#include "lib7z/lzmadec.h"
+#include <new>
+
+// function that should exist but doesn't in the official release
+extern "C" SRes LzmaDec_Allocate_MAME(CLzmaDec *p, const CLzmaProps *propNew, ISzAlloc *alloc);
+
+
+//**************************************************************************
+// TYPE DEFINITIONS
+//**************************************************************************
+
+// ======================> chd_zlib_allocator
+
+// allocation helper class for zlib
+class chd_zlib_allocator
+{
+public:
+ // construction/destruction
+ chd_zlib_allocator();
+ ~chd_zlib_allocator();
+
+ // installation
+ void install(z_stream &stream);
+
+private:
+ // internal helpers
+ static voidpf fast_alloc(voidpf opaque, uInt items, uInt size);
+ static void fast_free(voidpf opaque, voidpf address);
+
+ static const int MAX_ZLIB_ALLOCS = 64;
+ UINT32 * m_allocptr[MAX_ZLIB_ALLOCS];
+};
+
+
+// ======================> chd_zlib_compressor
+
+// ZLIB compressor
+class chd_zlib_compressor : public chd_compressor
+{
+public:
+ // construction/destruction
+ chd_zlib_compressor(chd_file &chd, bool lossy);
+ ~chd_zlib_compressor();
+
+ // core functionality
+ virtual UINT32 compress(const UINT8 *src, UINT32 srclen, UINT8 *dest);
+
+private:
+ // internal state
+ z_stream m_deflater;
+ chd_zlib_allocator m_allocator;
+};
+
+
+// ======================> chd_zlib_decompressor
+
+// ZLIB decompressor
+class chd_zlib_decompressor : public chd_decompressor
+{
+public:
+ // construction/destruction
+ chd_zlib_decompressor(chd_file &chd, bool lossy);
+ ~chd_zlib_decompressor();
+
+ // core functionality
+ virtual void decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen);
+
+private:
+ // internal state
+ z_stream m_inflater;
+ chd_zlib_allocator m_allocator;
+};
+
+
+// ======================> chd_lzma_allocator
+
+// allocation helper class for LZMA
+class chd_lzma_allocator : public ISzAlloc
+{
+public:
+ // construction/destruction
+ chd_lzma_allocator();
+ ~chd_lzma_allocator();
+
+private:
+ // internal helpers
+ static void *fast_alloc(void *p, size_t size);
+ static void fast_free(void *p, void *address);
+
+ static const int MAX_LZMA_ALLOCS = 64;
+ UINT32 * m_allocptr[MAX_LZMA_ALLOCS];
+};
+
+
+// ======================> chd_lzma_compressor
+
+// LZMA compressor
+class chd_lzma_compressor : public chd_compressor
+{
+public:
+ // construction/destruction
+ chd_lzma_compressor(chd_file &chd, bool lossy);
+ ~chd_lzma_compressor();
+
+ // core functionality
+ virtual UINT32 compress(const UINT8 *src, UINT32 srclen, UINT8 *dest);
+
+ // helpers
+ static void configure_properties(CLzmaEncProps &props, chd_file &chd);
+
+private:
+ // internal state
+ CLzmaEncProps m_props;
+ chd_lzma_allocator m_allocator;
+};
+
+
+// ======================> chd_lzma_decompressor
+
+// LZMA decompressor
+class chd_lzma_decompressor : public chd_decompressor
+{
+public:
+ // construction/destruction
+ chd_lzma_decompressor(chd_file &chd, bool lossy);
+ ~chd_lzma_decompressor();
+
+ // core functionality
+ virtual void decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen);
+
+private:
+ // internal state
+ CLzmaProps m_props;
+ CLzmaDec m_decoder;
+ chd_lzma_allocator m_allocator;
+};
+
+
+// ======================> chd_huffman_compressor
+
+// Huffman compressor
+class chd_huffman_compressor : public chd_compressor
+{
+public:
+ // construction/destruction
+ chd_huffman_compressor(chd_file &chd, bool lossy);
+
+ // core functionality
+ virtual UINT32 compress(const UINT8 *src, UINT32 srclen, UINT8 *dest);
+
+private:
+ // internal state
+ huffman_8bit_encoder m_encoder;
+};
+
+
+// ======================> chd_huffman_decompressor
+
+// Huffman decompressor
+class chd_huffman_decompressor : public chd_decompressor
+{
+public:
+ // construction/destruction
+ chd_huffman_decompressor(chd_file &chd, bool lossy);
+
+ // core functionality
+ virtual void decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen);
+
+private:
+ // internal state
+ huffman_8bit_decoder m_decoder;
+};
+
+
+// ======================> chd_flac_compressor
+
+// FLAC compressor
+class chd_flac_compressor : public chd_compressor
+{
+public:
+ // construction/destruction
+ chd_flac_compressor(chd_file &chd, bool lossy, bool bigendian);
+
+ // core functionality
+ virtual UINT32 compress(const UINT8 *src, UINT32 srclen, UINT8 *dest);
+
+private:
+ // internal state
+ bool m_swap_endian;
+ flac_encoder m_encoder;
+};
+
+// big-endian variant
+class chd_flac_compressor_be : public chd_flac_compressor
+{
+public:
+ // construction/destruction
+ chd_flac_compressor_be(chd_file &chd, bool lossy)
+ : chd_flac_compressor(chd, lossy, true) { }
+};
+
+// little-endian variant
+class chd_flac_compressor_le : public chd_flac_compressor
+{
+public:
+ // construction/destruction
+ chd_flac_compressor_le(chd_file &chd, bool lossy)
+ : chd_flac_compressor(chd, lossy, false) { }
+};
+
+
+// ======================> chd_flac_decompressor
+
+// FLAC decompressor
+class chd_flac_decompressor : public chd_decompressor
+{
+public:
+ // construction/destruction
+ chd_flac_decompressor(chd_file &chd, bool lossy, bool bigendian);
+
+ // core functionality
+ virtual void decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen);
+
+private:
+ // internal state
+ bool m_swap_endian;
+ flac_decoder m_decoder;
+};
+
+// big-endian variant
+class chd_flac_decompressor_be : public chd_flac_decompressor
+{
+public:
+ // construction/destruction
+ chd_flac_decompressor_be(chd_file &chd, bool lossy)
+ : chd_flac_decompressor(chd, lossy, true) { }
+};
+
+// little-endian variant
+class chd_flac_decompressor_le : public chd_flac_decompressor
+{
+public:
+ // construction/destruction
+ chd_flac_decompressor_le(chd_file &chd, bool lossy)
+ : chd_flac_decompressor(chd, lossy, false) { }
+};
+
+
+// ======================> chd_avhuff_compressor
+
+// A/V compressor
+class chd_avhuff_compressor : public chd_compressor
+{
+public:
+ // construction/destruction
+ chd_avhuff_compressor(chd_file &chd, bool lossy);
+
+ // core functionality
+ virtual UINT32 compress(const UINT8 *src, UINT32 srclen, UINT8 *dest);
+
+private:
+ // internal helpers
+ void postinit();
+
+ // internal state
+ avhuff_encoder m_encoder;
+ bool m_postinit;
+};
+
+
+// ======================> chd_avhuff_decompressor
+
+// A/V decompressor
+class chd_avhuff_decompressor : public chd_decompressor
+{
+public:
+ // construction/destruction
+ chd_avhuff_decompressor(chd_file &chd, bool lossy);
+
+ // core functionality
+ virtual void decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen);
+ virtual void configure(int param, void *config);
+
+private:
+ // internal state
+ avhuff_decoder m_decoder;
+};
+
+
+
+//**************************************************************************
+// CODEC LIST
+//**************************************************************************
+
+// static list of available known codecs
+const chd_codec_list::codec_entry chd_codec_list::s_codec_list[] =
+{
+ { CHD_CODEC_ZLIB, false, "Deflate", &chd_codec_list::construct_compressor<chd_zlib_compressor>, &chd_codec_list::construct_decompressor<chd_zlib_decompressor> },
+ { CHD_CODEC_LZMA, false, "LZMA", &chd_codec_list::construct_compressor<chd_lzma_compressor>, &chd_codec_list::construct_decompressor<chd_lzma_decompressor> },
+ { CHD_CODEC_HUFFMAN, false, "Huffman", &chd_codec_list::construct_compressor<chd_huffman_compressor>, &chd_codec_list::construct_decompressor<chd_huffman_decompressor> },
+ { CHD_CODEC_FLAC_BE, false, "FLAC, big-endian", &chd_codec_list::construct_compressor<chd_flac_compressor_be>, &chd_codec_list::construct_decompressor<chd_flac_decompressor_be> },
+ { CHD_CODEC_FLAC_LE, false, "FLAC, little-endian", &chd_codec_list::construct_compressor<chd_flac_compressor_le>, &chd_codec_list::construct_decompressor<chd_flac_decompressor_le> },
+ { CHD_CODEC_AVHUFF, false, "A/V Huffman", &chd_codec_list::construct_compressor<chd_avhuff_compressor>, &chd_codec_list::construct_decompressor<chd_avhuff_decompressor> },
+};
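+
+// Illustrative note: each codec type is a FOURCC-style tag built with
+// CHD_MAKE_TAG, so CHD_CODEC_ZLIB is ('z'<<24)|('l'<<16)|('i'<<8)|'b'.
+// Supporting a hypothetical new codec would mean implementing a
+// chd_compressor/chd_decompressor pair and appending an entry here, e.g.:
+//
+//   { CHD_CODEC_MYCODEC, false, "My codec", &chd_codec_list::construct_compressor<chd_my_compressor>, &chd_codec_list::construct_decompressor<chd_my_decompressor> },
+//
+// (CHD_CODEC_MYCODEC, chd_my_compressor and chd_my_decompressor are made-up
+// names used only for illustration.)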
+
+
+
+//**************************************************************************
+// CHD CODEC
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_codec - constructor
+//-------------------------------------------------
+
+chd_codec::chd_codec(chd_file &file, bool lossy)
+ : m_chd(file),
+ m_lossy(lossy)
+{
+}
+
+
+//-------------------------------------------------
+// ~chd_codec - destructor
+//-------------------------------------------------
+
+chd_codec::~chd_codec()
+{
+}
+
+
+//-------------------------------------------------
+// configure - configuration
+//-------------------------------------------------
+
+void chd_codec::configure(int param, void *config)
+{
+ // if not overridden, it is always a failure
+ throw CHDERR_INVALID_PARAMETER;
+}
+
+
+
+//**************************************************************************
+// CHD COMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_compressor - constructor
+//-------------------------------------------------
+
+chd_compressor::chd_compressor(chd_file &file, bool lossy)
+ : chd_codec(file, lossy)
+{
+}
+
+
+
+//**************************************************************************
+// CHD DECOMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_decompressor - constructor
+//-------------------------------------------------
+
+chd_decompressor::chd_decompressor(chd_file &file, bool lossy)
+ : chd_codec(file, lossy)
+{
+}
+
+
+
+//**************************************************************************
+// CHD CODEC LIST
+//**************************************************************************
+
+//-------------------------------------------------
+// new_compressor - create a new compressor
+// instance of the given type
+//-------------------------------------------------
+
+chd_compressor *chd_codec_list::new_compressor(chd_codec_type type, chd_file &file)
+{
+ // find in the list and construct the class
+ const codec_entry *entry = find_in_list(type);
+ return (entry == NULL) ? NULL : (*entry->m_construct_compressor)(file, entry->m_lossy);
+}
+
+
+//-------------------------------------------------
+//  new_decompressor - create a new decompressor
+// instance of the given type
+//-------------------------------------------------
+
+chd_decompressor *chd_codec_list::new_decompressor(chd_codec_type type, chd_file &file)
+{
+ // find in the list and construct the class
+ const codec_entry *entry = find_in_list(type);
+ return (entry == NULL) ? NULL : (*entry->m_construct_decompressor)(file, entry->m_lossy);
+}
+
+
+//-------------------------------------------------
+// codec_name - return the name of the given
+// codec
+//-------------------------------------------------
+
+const char *chd_codec_list::codec_name(chd_codec_type type)
+{
+	// find in the list and return its name
+ const codec_entry *entry = find_in_list(type);
+ return (entry == NULL) ? NULL : entry->m_name;
+}
+
+
+//-------------------------------------------------
+//  find_in_list - find the codec entry
+//  matching the given type
+//-------------------------------------------------
+
+const chd_codec_list::codec_entry *chd_codec_list::find_in_list(chd_codec_type type)
+{
+	// scan the static list for a matching codec type
+ for (int listnum = 0; listnum < ARRAY_LENGTH(s_codec_list); listnum++)
+ if (s_codec_list[listnum].m_type == type)
+ return &s_codec_list[listnum];
+ return NULL;
+}
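+
+// Illustrative usage sketch for the factory methods above (the buffer names
+// are placeholders; the calls are those declared in chdcodec.h):
+//
+//   chd_decompressor *decomp = chd_codec_list::new_decompressor(CHD_CODEC_ZLIB, file);
+//   if (decomp != NULL)
+//   {
+//       decomp->decompress(compressed_hunk, complen, output_hunk, file.hunk_bytes());
+//       delete decomp;
+//   }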
+
+
+
+//**************************************************************************
+// CODEC INSTANCE
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_compressor_group - constructor
+//-------------------------------------------------
+
+chd_compressor_group::chd_compressor_group(chd_file &file, UINT32 compressor_list[4])
+ : m_hunkbytes(file.hunk_bytes()),
+ m_compress_test(m_hunkbytes)
+#if CHDCODEC_VERIFY_COMPRESSION
+ ,m_decompressed(m_hunkbytes)
+#endif
+{
+ // verify the compression types and initialize the codecs
+ for (int codecnum = 0; codecnum < ARRAY_LENGTH(m_compressor); codecnum++)
+ {
+ m_compressor[codecnum] = NULL;
+ if (compressor_list[codecnum] != CHD_CODEC_NONE)
+ {
+ m_compressor[codecnum] = chd_codec_list::new_compressor(compressor_list[codecnum], file);
+ if (m_compressor[codecnum] == NULL)
+ throw CHDERR_UNKNOWN_COMPRESSION;
+#if CHDCODEC_VERIFY_COMPRESSION
+ m_decompressor[codecnum] = chd_codec_list::new_decompressor(compressor_list[codecnum], file);
+ if (m_decompressor[codecnum] == NULL)
+ throw CHDERR_UNKNOWN_COMPRESSION;
+#endif
+ }
+ }
+}
+
+
+//-------------------------------------------------
+// ~chd_compressor_group - destructor
+//-------------------------------------------------
+
+chd_compressor_group::~chd_compressor_group()
+{
+ // delete the codecs and the test buffer
+ for (int codecnum = 0; codecnum < ARRAY_LENGTH(m_compressor); codecnum++)
+ delete m_compressor[codecnum];
+}
+
+
+//-------------------------------------------------
+// find_best_compressor - iterate over all codecs
+// to determine which one produces the best
+// compression for this hunk
+//-------------------------------------------------
+
+INT8 chd_compressor_group::find_best_compressor(const UINT8 *src, UINT8 *compressed, UINT32 &complen)
+{
+ // determine best compression technique
+ complen = m_hunkbytes;
+ INT8 compression = -1;
+ for (int codecnum = 0; codecnum < ARRAY_LENGTH(m_compressor); codecnum++)
+ if (m_compressor[codecnum] != NULL)
+ {
+ // attempt to compress, swallowing errors
+ try
+ {
+ // if this is the best one, copy the data into the permanent buffer
+ UINT32 compbytes = m_compressor[codecnum]->compress(src, m_hunkbytes, m_compress_test);
+#if CHDCODEC_VERIFY_COMPRESSION
+ try
+ {
+ memset(m_decompressed, 0, m_hunkbytes);
+ m_decompressor[codecnum]->decompress(m_compress_test, compbytes, m_decompressed, m_hunkbytes);
+ }
+ catch (...)
+ {
+ }
+
+ if (memcmp(src, m_decompressed, m_hunkbytes) != 0)
+ {
+ compbytes = m_compressor[codecnum]->compress(src, m_hunkbytes, m_compress_test);
+ try
+ {
+ m_decompressor[codecnum]->decompress(m_compress_test, compbytes, m_decompressed, m_hunkbytes);
+ }
+ catch (...)
+ {
+ memset(m_decompressed, 0, m_hunkbytes);
+ }
+ }
+printf(" codec%d=%d bytes \n", codecnum, compbytes);
+#endif
+ if (compbytes < complen)
+ {
+ compression = codecnum;
+ complen = compbytes;
+ memcpy(compressed, m_compress_test, compbytes);
+ }
+ }
+ catch (...) { }
+ }
+
+ // if the best is none, copy it over
+ if (compression == -1)
+ memcpy(compressed, src, m_hunkbytes);
+ return compression;
+}
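+
+// Note on the return value above: find_best_compressor() tries each of the up
+// to 4 configured codecs and returns the index (0-3) of the one that produced
+// the smallest output, with complen set to that size; if no codec produced
+// fewer bytes than the raw hunk, it returns -1 and copies the hunk verbatim.
+// A caller would typically record the winning index alongside the data
+// (names below are placeholders):
+//
+//   UINT32 complen;
+//   INT8 which = group.find_best_compressor(hunk_data, compressed, complen);
+//   // which == -1  ->  store hunk_data as-is (m_hunkbytes bytes)
+//   // which >= 0   ->  store 'complen' bytes of 'compressed', tagged with codec index 'which'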
+
+
+
+//**************************************************************************
+// ZLIB ALLOCATOR HELPER
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_zlib_allocator - constructor
+//-------------------------------------------------
+
+chd_zlib_allocator::chd_zlib_allocator()
+{
+ // reset pointer list
+ memset(m_allocptr, 0, sizeof(m_allocptr));
+}
+
+
+//-------------------------------------------------
+//  ~chd_zlib_allocator - destructor
+//-------------------------------------------------
+
+chd_zlib_allocator::~chd_zlib_allocator()
+{
+ // free our memory
+ for (int memindex = 0; memindex < ARRAY_LENGTH(m_allocptr); memindex++)
+ delete[] m_allocptr[memindex];
+}
+
+
+//-------------------------------------------------
+// install - configure the allocators for a
+// stream
+//-------------------------------------------------
+
+void chd_zlib_allocator::install(z_stream &stream)
+{
+ stream.zalloc = &chd_zlib_allocator::fast_alloc;
+ stream.zfree = &chd_zlib_allocator::fast_free;
+ stream.opaque = this;
+}
+
+
+//-------------------------------------------------
+//  fast_alloc - fast malloc for ZLIB, which
+// allocates and frees memory frequently
+//-------------------------------------------------
+
+voidpf chd_zlib_allocator::fast_alloc(voidpf opaque, uInt items, uInt size)
+{
+ chd_zlib_allocator *codec = reinterpret_cast<chd_zlib_allocator *>(opaque);
+
+ // compute the size, rounding to the nearest 1k
+ size = (size * items + 0x3ff) & ~0x3ff;
+
+ // reuse a hunk if we can
+ for (int scan = 0; scan < MAX_ZLIB_ALLOCS; scan++)
+ {
+ UINT32 *ptr = codec->m_allocptr[scan];
+ if (ptr != NULL && size == *ptr)
+ {
+ // set the low bit of the size so we don't match next time
+ *ptr |= 1;
+ return ptr + 1;
+ }
+ }
+
+ // alloc a new one and put it into the list
+ UINT32 *ptr = reinterpret_cast<UINT32 *>(new UINT8[size + sizeof(UINT32)]);
+ for (int scan = 0; scan < MAX_ZLIB_ALLOCS; scan++)
+ if (codec->m_allocptr[scan] == NULL)
+ {
+ codec->m_allocptr[scan] = ptr;
+ break;
+ }
+
+ // set the low bit of the size so we don't match next time
+ *ptr = size | 1;
+ return ptr + 1;
+}
+
+
+//-------------------------------------------------
+//  fast_free - fast free for ZLIB, which
+// allocates and frees memory frequently
+//-------------------------------------------------
+
+void chd_zlib_allocator::fast_free(voidpf opaque, voidpf address)
+{
+ chd_zlib_allocator *codec = reinterpret_cast<chd_zlib_allocator *>(opaque);
+
+ // find the hunk
+ UINT32 *ptr = reinterpret_cast<UINT32 *>(address) - 1;
+ for (int scan = 0; scan < MAX_ZLIB_ALLOCS; scan++)
+ if (ptr == codec->m_allocptr[scan])
+ {
+ // clear the low bit of the size to allow matches
+ *ptr &= ~1;
+ return;
+ }
+}
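+
+// Worked example of the tagging scheme used by fast_alloc/fast_free above:
+// a request for 1000 bytes is rounded up to 1024; the UINT32 stored just
+// before the returned pointer holds 1024|1 (= 1025) while the block is in
+// use, and is reset to 1024 on free so the block can be matched and reused
+// by a later allocation of the same rounded size.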
+
+
+
+//**************************************************************************
+// ZLIB COMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_zlib_compressor - constructor
+//-------------------------------------------------
+
+chd_zlib_compressor::chd_zlib_compressor(chd_file &chd, bool lossy)
+ : chd_compressor(chd, lossy)
+{
+ // initialize the deflater
+ m_deflater.next_in = (Bytef *)this; // bogus, but that's ok
+ m_deflater.avail_in = 0;
+ m_allocator.install(m_deflater);
+ int zerr = deflateInit2(&m_deflater, Z_BEST_COMPRESSION, Z_DEFLATED, -MAX_WBITS, 8, Z_DEFAULT_STRATEGY);
+
+ // convert errors
+ if (zerr == Z_MEM_ERROR)
+ throw std::bad_alloc();
+ else if (zerr != Z_OK)
+ throw CHDERR_CODEC_ERROR;
+}
+
+
+//-------------------------------------------------
+// ~chd_zlib_compressor - destructor
+//-------------------------------------------------
+
+chd_zlib_compressor::~chd_zlib_compressor()
+{
+ deflateEnd(&m_deflater);
+}
+
+
+//-------------------------------------------------
+// compress - compress data using the ZLIB codec
+//-------------------------------------------------
+
+UINT32 chd_zlib_compressor::compress(const UINT8 *src, UINT32 srclen, UINT8 *dest)
+{
+	// reset the compressor
+ m_deflater.next_in = const_cast<Bytef *>(src);
+ m_deflater.avail_in = srclen;
+ m_deflater.total_in = 0;
+ m_deflater.next_out = dest;
+ m_deflater.avail_out = srclen;
+ m_deflater.total_out = 0;
+ int zerr = deflateReset(&m_deflater);
+ if (zerr != Z_OK)
+ throw CHDERR_COMPRESSION_ERROR;
+
+ // do it
+ zerr = deflate(&m_deflater, Z_FINISH);
+
+ // if we ended up with more data than we started with, return an error
+ if (zerr != Z_STREAM_END || m_deflater.total_out >= srclen)
+ throw CHDERR_COMPRESSION_ERROR;
+
+ // otherwise, return the length
+ return m_deflater.total_out;
+}
+
+
+
+//**************************************************************************
+// ZLIB DECOMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_zlib_decompressor - constructor
+//-------------------------------------------------
+
+chd_zlib_decompressor::chd_zlib_decompressor(chd_file &chd, bool lossy)
+ : chd_decompressor(chd, lossy)
+{
+ // init the inflater first
+ m_inflater.next_in = (Bytef *)this; // bogus, but that's ok
+ m_inflater.avail_in = 0;
+ m_allocator.install(m_inflater);
+ int zerr = inflateInit2(&m_inflater, -MAX_WBITS);
+
+ // convert errors
+ if (zerr == Z_MEM_ERROR)
+ throw std::bad_alloc();
+ else if (zerr != Z_OK)
+ throw CHDERR_CODEC_ERROR;
+}
+
+
+//-------------------------------------------------
+// ~chd_zlib_decompressor - destructor
+//-------------------------------------------------
+
+chd_zlib_decompressor::~chd_zlib_decompressor()
+{
+ inflateEnd(&m_inflater);
+}
+
+
+//-------------------------------------------------
+// decompress - decompress data using the ZLIB
+// codec
+//-------------------------------------------------
+
+void chd_zlib_decompressor::decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen)
+{
+ // reset the decompressor
+ m_inflater.next_in = const_cast<Bytef *>(src);
+ m_inflater.avail_in = complen;
+ m_inflater.total_in = 0;
+ m_inflater.next_out = dest;
+ m_inflater.avail_out = destlen;
+ m_inflater.total_out = 0;
+ int zerr = inflateReset(&m_inflater);
+ if (zerr != Z_OK)
+ throw CHDERR_DECOMPRESSION_ERROR;
+
+ // do it
+ zerr = inflate(&m_inflater, Z_FINISH);
+ if (m_inflater.total_out != destlen)
+ throw CHDERR_DECOMPRESSION_ERROR;
+}
+
+
+
+//**************************************************************************
+// LZMA ALLOCATOR HELPER
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_lzma_allocator - constructor
+//-------------------------------------------------
+
+chd_lzma_allocator::chd_lzma_allocator()
+{
+ // reset pointer list
+ memset(m_allocptr, 0, sizeof(m_allocptr));
+
+ // set our pointers
+ Alloc = &chd_lzma_allocator::fast_alloc;
+ Free = &chd_lzma_allocator::fast_free;
+}
+
+
+//-------------------------------------------------
+//  ~chd_lzma_allocator - destructor
+//-------------------------------------------------
+
+chd_lzma_allocator::~chd_lzma_allocator()
+{
+ // free our memory
+ for (int memindex = 0; memindex < ARRAY_LENGTH(m_allocptr); memindex++)
+ delete[] m_allocptr[memindex];
+}
+
+
+//-------------------------------------------------
+//  fast_alloc - fast malloc for LZMA, which
+// allocates and frees memory frequently
+//-------------------------------------------------
+
+void *chd_lzma_allocator::fast_alloc(void *p, size_t size)
+{
+ chd_lzma_allocator *codec = reinterpret_cast<chd_lzma_allocator *>(p);
+
+ // compute the size, rounding to the nearest 1k
+ size = (size + 0x3ff) & ~0x3ff;
+
+ // reuse a hunk if we can
+ for (int scan = 0; scan < MAX_LZMA_ALLOCS; scan++)
+ {
+ UINT32 *ptr = codec->m_allocptr[scan];
+ if (ptr != NULL && size == *ptr)
+ {
+ // set the low bit of the size so we don't match next time
+ *ptr |= 1;
+ return ptr + 1;
+ }
+ }
+
+ // alloc a new one and put it into the list
+ UINT32 *ptr = reinterpret_cast<UINT32 *>(new UINT8[size + sizeof(UINT32)]);
+ for (int scan = 0; scan < MAX_LZMA_ALLOCS; scan++)
+ if (codec->m_allocptr[scan] == NULL)
+ {
+ codec->m_allocptr[scan] = ptr;
+ break;
+ }
+
+ // set the low bit of the size so we don't match next time
+ *ptr = size | 1;
+ return ptr + 1;
+}
+
+
+//-------------------------------------------------
+//  fast_free - fast free for LZMA, which
+// allocates and frees memory frequently
+//-------------------------------------------------
+
+void chd_lzma_allocator::fast_free(void *p, void *address)
+{
+ if (address == NULL)
+ return;
+
+ chd_lzma_allocator *codec = reinterpret_cast<chd_lzma_allocator *>(p);
+
+ // find the hunk
+ UINT32 *ptr = reinterpret_cast<UINT32 *>(address) - 1;
+ for (int scan = 0; scan < MAX_LZMA_ALLOCS; scan++)
+ if (ptr == codec->m_allocptr[scan])
+ {
+ // clear the low bit of the size to allow matches
+ *ptr &= ~1;
+ return;
+ }
+}
+
+
+
+//**************************************************************************
+// LZMA COMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_lzma_compressor - constructor
+//-------------------------------------------------
+
+chd_lzma_compressor::chd_lzma_compressor(chd_file &chd, bool lossy)
+ : chd_compressor(chd, lossy)
+{
+ // initialize the properties
+ configure_properties(m_props, chd);
+}
+
+
+//-------------------------------------------------
+// ~chd_lzma_compressor - destructor
+//-------------------------------------------------
+
+chd_lzma_compressor::~chd_lzma_compressor()
+{
+}
+
+
+//-------------------------------------------------
+// compress - compress data using the LZMA codec
+//-------------------------------------------------
+
+UINT32 chd_lzma_compressor::compress(const UINT8 *src, UINT32 srclen, UINT8 *dest)
+{
+ // allocate the encoder
+ CLzmaEncHandle encoder = LzmaEnc_Create(&m_allocator);
+ if (encoder == NULL)
+ throw CHDERR_COMPRESSION_ERROR;
+
+ try
+ {
+ // configure the encoder
+ SRes res = LzmaEnc_SetProps(encoder, &m_props);
+ if (res != SZ_OK)
+ throw CHDERR_COMPRESSION_ERROR;
+
+ // run it
+ UINT32 complen = srclen;
+ res = LzmaEnc_MemEncode(encoder, dest, &complen, src, srclen, 0, NULL, &m_allocator, &m_allocator);
+ if (res != SZ_OK)
+ throw CHDERR_COMPRESSION_ERROR;
+
+ // clean up
+ LzmaEnc_Destroy(encoder, &m_allocator, &m_allocator);
+ return complen;
+ }
+ catch (...)
+ {
+ // destroy before re-throwing
+ LzmaEnc_Destroy(encoder, &m_allocator, &m_allocator);
+ throw;
+ }
+}
+
+
+//-------------------------------------------------
+// configure_properties - configure the LZMA
+// codec
+//-------------------------------------------------
+
+void chd_lzma_compressor::configure_properties(CLzmaEncProps &props, chd_file &chd)
+{
+ LzmaEncProps_Init(&props);
+ props.level = 9;
+ props.reduceSize = chd.hunk_bytes();
+ LzmaEncProps_Normalize(&props);
+}
+
+
+
+//**************************************************************************
+// LZMA DECOMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_lzma_decompressor - constructor
+//-------------------------------------------------
+
+chd_lzma_decompressor::chd_lzma_decompressor(chd_file &chd, bool lossy)
+ : chd_decompressor(chd, lossy)
+{
+ // construct the decoder
+ LzmaDec_Construct(&m_decoder);
+
+ // configure the properties like the compressor did
+ CLzmaEncProps encoder_props;
+ chd_lzma_compressor::configure_properties(encoder_props, chd);
+
+ // convert to decoder properties
+ CLzmaProps decoder_props;
+ decoder_props.lc = encoder_props.lc;
+ decoder_props.lp = encoder_props.lp;
+ decoder_props.pb = encoder_props.pb;
+ decoder_props.dicSize = encoder_props.dictSize;
+
+ // do memory allocations
+ SRes res = LzmaDec_Allocate_MAME(&m_decoder, &decoder_props, &m_allocator);
+ if (res != SZ_OK)
+ throw CHDERR_DECOMPRESSION_ERROR;
+}
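+
+// Note: rather than reading an LZMA property header out of the compressed
+// data, the decompressor reconstructs the exact properties the compressor
+// used by running the same configure_properties() and mirroring the
+// normalized lc/lp/pb/dictSize values into the decoder.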
+
+
+//-------------------------------------------------
+// ~chd_lzma_decompressor - destructor
+//-------------------------------------------------
+
+chd_lzma_decompressor::~chd_lzma_decompressor()
+{
+ // free memory
+ LzmaDec_Free(&m_decoder, &m_allocator);
+}
+
+
+//-------------------------------------------------
+// decompress - decompress data using the LZMA
+// codec
+//-------------------------------------------------
+
+void chd_lzma_decompressor::decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen)
+{
+ // initialize
+ LzmaDec_Init(&m_decoder);
+
+ // decode
+ UINT32 consumedlen = complen;
+ UINT32 decodedlen = destlen;
+ ELzmaStatus status;
+ SRes res = LzmaDec_DecodeToBuf(&m_decoder, dest, &decodedlen, src, &consumedlen, LZMA_FINISH_END, &status);
+ if ((res != SZ_OK && res != LZMA_STATUS_MAYBE_FINISHED_WITHOUT_MARK) || consumedlen != complen || decodedlen != destlen)
+ throw CHDERR_DECOMPRESSION_ERROR;
+}
+
+
+
+//**************************************************************************
+// HUFFMAN COMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_huffman_compressor - constructor
+//-------------------------------------------------
+
+chd_huffman_compressor::chd_huffman_compressor(chd_file &chd, bool lossy)
+ : chd_compressor(chd, lossy)
+{
+}
+
+
+//-------------------------------------------------
+// compress - compress data using the Huffman
+// codec
+//-------------------------------------------------
+
+UINT32 chd_huffman_compressor::compress(const UINT8 *src, UINT32 srclen, UINT8 *dest)
+{
+ UINT32 complen;
+ if (m_encoder.encode(src, srclen, dest, srclen, complen) != HUFFERR_NONE)
+ throw CHDERR_COMPRESSION_ERROR;
+ return complen;
+}
+
+
+
+//**************************************************************************
+// HUFFMAN DECOMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_huffman_decompressor - constructor
+//-------------------------------------------------
+
+chd_huffman_decompressor::chd_huffman_decompressor(chd_file &chd, bool lossy)
+ : chd_decompressor(chd, lossy)
+{
+}
+
+
+//-------------------------------------------------
+// decompress - decompress data using the Huffman
+// codec
+//-------------------------------------------------
+
+void chd_huffman_decompressor::decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen)
+{
+ if (m_decoder.decode(src, complen, dest, destlen) != HUFFERR_NONE)
+ throw CHDERR_COMPRESSION_ERROR;
+}
+
+
+
+//**************************************************************************
+// FLAC COMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_flac_compressor - constructor
+//-------------------------------------------------
+
+chd_flac_compressor::chd_flac_compressor(chd_file &chd, bool lossy, bool bigendian)
+ : chd_compressor(chd, lossy)
+{
+ // determine whether we want native or swapped samples
+ UINT16 native_endian = 0;
+ *reinterpret_cast<UINT8 *>(&native_endian) = 1;
+ if (native_endian == 1)
+ m_swap_endian = bigendian;
+ else
+ m_swap_endian = !bigendian;
+
+ // configure the encoder
+ m_encoder.set_sample_rate(44100);
+ m_encoder.set_num_channels(2);
+ m_encoder.set_block_size(chd.hunk_bytes());
+ m_encoder.set_strip_metadata(true);
+}
+
+
+//-------------------------------------------------
+// compress - compress data using the FLAC codec
+//-------------------------------------------------
+
+UINT32 chd_flac_compressor::compress(const UINT8 *src, UINT32 srclen, UINT8 *dest)
+{
+ // reset and encode
+ m_encoder.reset(dest, chd().hunk_bytes());
+ if (!m_encoder.encode_interleaved(reinterpret_cast<const INT16 *>(src), srclen / 4, m_swap_endian))
+ throw CHDERR_COMPRESSION_ERROR;
+
+ // finish up
+ UINT32 complen = m_encoder.finish();
+ if (complen >= chd().hunk_bytes())
+ throw CHDERR_COMPRESSION_ERROR;
+ return complen;
+}
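+
+// Note on the srclen / 4 above: the encoder is configured for 2 channels of
+// 16-bit samples, so each interleaved sample frame occupies 4 bytes
+// (2 channels * 2 bytes); dividing the hunk length by 4 therefore yields the
+// samples-per-channel count passed to encode_interleaved().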
+
+
+
+//**************************************************************************
+//  FLAC DECOMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_flac_decompressor - constructor
+//-------------------------------------------------
+
+chd_flac_decompressor::chd_flac_decompressor(chd_file &chd, bool lossy, bool bigendian)
+ : chd_decompressor(chd, lossy)
+{
+ // determine whether we want native or swapped samples
+ UINT16 native_endian = 0;
+ *reinterpret_cast<UINT8 *>(&native_endian) = 1;
+ if (native_endian == 1)
+ m_swap_endian = bigendian;
+ else
+ m_swap_endian = !bigendian;
+}
+
+
+//-------------------------------------------------
+// decompress - decompress data using the FLAC
+// codec
+//-------------------------------------------------
+
+void chd_flac_decompressor::decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen)
+{
+ // reset and decode
+ if (!m_decoder.reset(44100, 2, chd().hunk_bytes() / 4, src, complen))
+ throw CHDERR_DECOMPRESSION_ERROR;
+ if (!m_decoder.decode_interleaved(reinterpret_cast<INT16 *>(dest), destlen / 4, m_swap_endian))
+ throw CHDERR_DECOMPRESSION_ERROR;
+
+ // finish up
+ m_decoder.finish();
+}
+
+
+
+//**************************************************************************
+// AVHUFF COMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_avhuff_compressor - constructor
+//-------------------------------------------------
+
+chd_avhuff_compressor::chd_avhuff_compressor(chd_file &chd, bool lossy)
+ : chd_compressor(chd, lossy),
+ m_postinit(false)
+{
+ try
+ {
+ // attempt to do a post-init now
+ postinit();
+ }
+ catch (chd_error &)
+ {
+ // if we're creating a new CHD, it won't work but that's ok
+ }
+}
+
+
+//-------------------------------------------------
+// compress - compress data using the A/V codec
+//-------------------------------------------------
+
+UINT32 chd_avhuff_compressor::compress(const UINT8 *src, UINT32 srclen, UINT8 *dest)
+{
+ // if we haven't yet set up the avhuff code, do it now
+ if (!m_postinit)
+ postinit();
+
+ // make sure short frames are padded with 0
+ if (src != NULL)
+ {
+ int size = avhuff_encoder::raw_data_size(src);
+ while (size < srclen)
+ if (src[size++] != 0)
+ throw CHDERR_INVALID_DATA;
+ }
+
+ // encode the audio and video
+ UINT32 complen;
+ avhuff_error averr = m_encoder.encode_data(src, dest, complen);
+ if (averr != AVHERR_NONE || complen > srclen)
+ throw CHDERR_COMPRESSION_ERROR;
+ return complen;
+}
+
+
+//-------------------------------------------------
+// postinit - actual initialization of avhuff
+// happens here, on the first attempt to compress
+// or decompress data
+//-------------------------------------------------
+
+void chd_avhuff_compressor::postinit()
+{
+ // get the metadata
+ astring metadata;
+ chd_error err = chd().read_metadata(AV_METADATA_TAG, 0, metadata);
+ if (err != CHDERR_NONE)
+ throw err;
+
+ // extract the info
+ int fps, fpsfrac, width, height, interlaced, channels, rate;
+ if (sscanf(metadata, AV_METADATA_FORMAT, &fps, &fpsfrac, &width, &height, &interlaced, &channels, &rate) != 7)
+ throw CHDERR_INVALID_METADATA;
+
+ // compute the bytes per frame
+ UINT32 fps_times_1million = fps * 1000000 + fpsfrac;
+ UINT32 max_samples_per_frame = (UINT64(rate) * 1000000 + fps_times_1million - 1) / fps_times_1million;
+ UINT32 bytes_per_frame = 12 + channels * max_samples_per_frame * 2 + width * height * 2;
+ if (bytes_per_frame > chd().hunk_bytes())
+ throw CHDERR_INVALID_METADATA;
+
+ // done with post-init
+ m_postinit = true;
+}
+
+
+
+//**************************************************************************
+// AVHUFF DECOMPRESSOR
+//**************************************************************************
+
+//-------------------------------------------------
+// chd_avhuff_decompressor - constructor
+//-------------------------------------------------
+
+chd_avhuff_decompressor::chd_avhuff_decompressor(chd_file &chd, bool lossy)
+ : chd_decompressor(chd, lossy)
+{
+}
+
+
+//-------------------------------------------------
+// decompress - decompress data using the A/V
+// codec
+//-------------------------------------------------
+
+void chd_avhuff_decompressor::decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen)
+{
+ // decode the audio and video
+ avhuff_error averr = m_decoder.decode_data(src, complen, dest);
+ if (averr != AVHERR_NONE)
+ throw CHDERR_DECOMPRESSION_ERROR;
+
+ // pad short frames with 0
+ if (dest != NULL)
+ {
+ int size = avhuff_encoder::raw_data_size(dest);
+ if (size < destlen)
+ memset(dest + size, 0, destlen - size);
+ }
+}
+
+
+//-------------------------------------------------
+//  configure - codec-specific configuration for
+// A/V codec
+//-------------------------------------------------
+
+void chd_avhuff_decompressor::configure(int param, void *config)
+{
+ // if we're getting the decompression configuration, apply it now
+ if (param == AVHUFF_CODEC_DECOMPRESS_CONFIG)
+ m_decoder.configure(*reinterpret_cast<avhuff_decompress_config *>(config));
+
+ // anything else is invalid
+ else
+ throw CHDERR_INVALID_PARAMETER;
+}
diff --git a/src/lib/util/chdcodec.h b/src/lib/util/chdcodec.h
new file mode 100644
index 00000000000..fd47e811ccb
--- /dev/null
+++ b/src/lib/util/chdcodec.h
@@ -0,0 +1,212 @@
+/***************************************************************************
+
+ chdcodec.h
+
+ Codecs used by the CHD format
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#pragma once
+
+#ifndef __CHDCODEC_H__
+#define __CHDCODEC_H__
+
+#include "osdcore.h"
+
+
+#define CHDCODEC_VERIFY_COMPRESSION 0
+
+
+//**************************************************************************
+// MACROS
+//**************************************************************************
+
+#define CHD_MAKE_TAG(a,b,c,d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))
+
+
+
+//**************************************************************************
+// TYPE DEFINITIONS
+//**************************************************************************
+
+// forward references
+class chd_file;
+
+// base types
+typedef UINT32 chd_codec_type;
+
+
+// ======================> chd_codec
+
+// common base class for all compressors and decompressors
+class chd_codec
+{
+protected:
+ // can't create these directly
+ chd_codec(chd_file &file, bool lossy);
+
+public:
+ // allow public deletion
+ virtual ~chd_codec();
+
+ // accessors
+ chd_file &chd() const { return m_chd; }
+ bool lossy() const { return m_lossy; }
+
+ // implementation
+ virtual void configure(int param, void *config);
+
+private:
+ // internal state
+ chd_file & m_chd;
+ bool m_lossy;
+};
+
+
+// ======================> chd_compressor
+
+// base class for all compressors
+class chd_compressor : public chd_codec
+{
+protected:
+ // can't create these directly
+ chd_compressor(chd_file &file, bool lossy);
+
+public:
+ // implementation
+ virtual UINT32 compress(const UINT8 *src, UINT32 srclen, UINT8 *dest) = 0;
+};
+
+
+// ======================> chd_decompressor
+
+// base class for all decompressors
+class chd_decompressor : public chd_codec
+{
+protected:
+ // can't create these directly
+ chd_decompressor(chd_file &file, bool lossy);
+
+public:
+ // implementation
+ virtual void decompress(const UINT8 *src, UINT32 complen, UINT8 *dest, UINT32 destlen) = 0;
+};
+
+
+// ======================> chd_codec_list
+
+// wrapper to get at the list of codecs
+class chd_codec_list
+{
+public:
+ // create compressors or decompressors
+ static chd_compressor *new_compressor(chd_codec_type type, chd_file &file);
+ static chd_decompressor *new_decompressor(chd_codec_type type, chd_file &file);
+
+ // utilities
+ static bool codec_exists(chd_codec_type type) { return (find_in_list(type) != NULL); }
+ static const char *codec_name(chd_codec_type type);
+
+private:
+ // an entry in the list
+ struct codec_entry
+ {
+ chd_codec_type m_type;
+ bool m_lossy;
+ const char * m_name;
+ chd_compressor * (*m_construct_compressor)(chd_file &, bool);
+ chd_decompressor * (*m_construct_decompressor)(chd_file &, bool);
+ };
+
+ // internal helper functions
+ static const codec_entry *find_in_list(chd_codec_type type);
+
+ template<class _CompressorClass>
+ static chd_compressor *construct_compressor(chd_file &chd, bool lossy) { return new _CompressorClass(chd, lossy); }
+
+ template<class _DecompressorClass>
+ static chd_decompressor *construct_decompressor(chd_file &chd, bool lossy) { return new _DecompressorClass(chd, lossy); }
+
+ // the static list
+ static const codec_entry s_codec_list[];
+};
+
+
+// ======================> chd_compressor_group
+
+// helper class that wraps several compressors
+class chd_compressor_group
+{
+public:
+ // construction/destruction
+ chd_compressor_group(chd_file &file, chd_codec_type compressor_list[4]);
+ ~chd_compressor_group();
+
+ // find the best compressor
+ INT8 find_best_compressor(const UINT8 *src, UINT8 *compressed, UINT32 &complen);
+
+private:
+ // internal state
+ UINT32 m_hunkbytes; // number of bytes in a hunk
+ chd_compressor * m_compressor[4]; // array of active codecs
+ dynamic_buffer m_compress_test; // test buffer for compression
+#if CHDCODEC_VERIFY_COMPRESSION
+ chd_decompressor * m_decompressor[4]; // array of active codecs
+ dynamic_buffer m_decompressed; // verification buffer
+#endif
+};
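+
+// Illustrative usage sketch (buffer names are placeholders): a compressor
+// group is built from an open chd_file and a 4-entry codec list, then
+// consulted once per hunk:
+//
+//   chd_codec_type codecs[4] = { CHD_CODEC_LZMA, CHD_CODEC_ZLIB, CHD_CODEC_HUFFMAN, CHD_CODEC_NONE };
+//   chd_compressor_group group(file, codecs);
+//   UINT32 complen;
+//   INT8 best = group.find_best_compressor(hunk_data, compressed, complen);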
+
+
+
+//**************************************************************************
+// CONSTANTS
+//**************************************************************************
+
+// currently-defined codecs
+const chd_codec_type CHD_CODEC_NONE = 0;
+const chd_codec_type CHD_CODEC_ZLIB = CHD_MAKE_TAG('z','l','i','b');
+const chd_codec_type CHD_CODEC_LZMA = CHD_MAKE_TAG('l','z','m','a');
+const chd_codec_type CHD_CODEC_HUFFMAN = CHD_MAKE_TAG('h','u','f','f');
+const chd_codec_type CHD_CODEC_FLAC_BE = CHD_MAKE_TAG('f','l','c','b');
+const chd_codec_type CHD_CODEC_FLAC_LE = CHD_MAKE_TAG('f','l','c','l');
+const chd_codec_type CHD_CODEC_AVHUFF = CHD_MAKE_TAG('a','v','h','u');
+
+// A/V codec configuration parameters
+enum
+{
+ AVHUFF_CODEC_DECOMPRESS_CONFIG = 1
+};
+
+
+#endif // __CHDCODEC_H__
diff --git a/src/lib/util/corefile.c b/src/lib/util/corefile.c
index 4d760f95d24..0a907aeabc4 100644
--- a/src/lib/util/corefile.c
+++ b/src/lib/util/corefile.c
@@ -757,6 +757,41 @@ file_error core_fload(const char *filename, void **data, UINT32 *length)
return FILERR_NONE;
}
+file_error core_fload(const char *filename, dynamic_buffer &data)
+{
+ core_file *file = NULL;
+ file_error err;
+ UINT64 size;
+
+ /* attempt to open the file */
+ err = core_fopen(filename, OPEN_FLAG_READ, &file);
+ if (err != FILERR_NONE)
+ return err;
+
+ /* get the size */
+ size = core_fsize(file);
+ if ((UINT32)size != size)
+ {
+ core_fclose(file);
+ return FILERR_OUT_OF_MEMORY;
+ }
+
+ /* allocate memory */
+ data.resize(size);
+
+ /* read the data */
+ if (core_fread(file, data, size) != size)
+ {
+ core_fclose(file);
+ data.reset();
+ return FILERR_FAILURE;
+ }
+
+ /* close the file and return data */
+ core_fclose(file);
+ return FILERR_NONE;
+}
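+
+/* illustrative usage sketch of the new overload ("data.bin" and process() are
+   placeholders used only for illustration):
+
+   dynamic_buffer buffer;
+   file_error err = core_fload("data.bin", buffer);
+   if (err == FILERR_NONE)
+       process(buffer, buffer.count());    // buffer acts as a UINT8 * of count() bytes
+*/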
+
/***************************************************************************
diff --git a/src/lib/util/corefile.h b/src/lib/util/corefile.h
index ff88efa004a..182d8c76ad9 100644
--- a/src/lib/util/corefile.h
+++ b/src/lib/util/corefile.h
@@ -45,6 +45,7 @@
#include <stdarg.h>
#include "osdcore.h"
#include "astring.h"
+#include "coretmpl.h"
@@ -129,6 +130,7 @@ const void *core_fbuffer(core_file *file);
/* open a file with the specified filename, read it into memory, and return a pointer */
file_error core_fload(const char *filename, void **data, UINT32 *length);
+file_error core_fload(const char *filename, dynamic_buffer &data);
diff --git a/src/lib/util/coretmpl.h b/src/lib/util/coretmpl.h
new file mode 100644
index 00000000000..59e012fe753
--- /dev/null
+++ b/src/lib/util/coretmpl.h
@@ -0,0 +1,109 @@
+/***************************************************************************
+
+ coretmpl.h
+
+ Core templates for basic non-string types.
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#pragma once
+
+#ifndef __CORETMPL_H__
+#define __CORETMPL_H__
+
+#include <assert.h>
+#include "osdcore.h"
+
+
+// ======================> dynamic_array
+
+// an array that is dynamically sized and can optionally auto-expand
+template<class _ElementType>
+class dynamic_array
+{
+private:
+ // we don't support deep copying
+ dynamic_array(const dynamic_array &);
+ dynamic_array &operator=(const dynamic_array &);
+
+public:
+ // construction/destruction
+ dynamic_array(int initial = 0)
+ : m_array(NULL),
+ m_count(0),
+ m_allocated(0) { if (initial != 0) expand_internal(initial); m_count = initial; }
+ virtual ~dynamic_array() { reset(); }
+
+ // operators
+ operator _ElementType *() { return &m_array[0]; }
+ operator const _ElementType *() const { return &m_array[0]; }
+ _ElementType operator[](int index) const { assert(index < m_count); return m_array[index]; }
+ _ElementType &operator[](int index) { assert(index < m_count); return m_array[index]; }
+
+ // simple getters
+ int count() const { return m_count; }
+
+ // helpers
+ void append(const _ElementType &element) { if (m_count == m_allocated) expand_internal((m_allocated == 0) ? 16 : (m_allocated << 1), true); m_array[m_count++] = element; }
+ void reset() { delete[] m_array; m_array = NULL; m_count = m_allocated = 0; }
+ void resize(int count, bool keepdata = false) { if (count > m_allocated) expand_internal(count, keepdata); m_count = count; }
+
+private:
+ // internal helpers
+ void expand_internal(int count, bool keepdata = true)
+ {
+ // allocate a new array, copy the old one, and proceed
+ m_allocated = count;
+ _ElementType *newarray = new _ElementType[m_allocated];
+ if (keepdata)
+ for (int index = 0; index < m_count; index++)
+ newarray[index] = m_array[index];
+ delete[] m_array;
+ m_array = newarray;
+ }
+
+ // internal state
+ _ElementType * m_array; // allocated array
+ int m_count; // number of objects accessed in the list
+ int m_allocated; // amount of space allocated for the array
+};
+
+
+// ======================> dynamic_buffer
+
+typedef dynamic_array<UINT8> dynamic_buffer;
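+
+// Illustrative usage sketch of dynamic_array<> (values are arbitrary):
+//
+//   dynamic_array<UINT32> offsets;
+//   offsets.append(0);                  // grows automatically as needed
+//   offsets.append(512);
+//   offsets.resize(16, true);           // grow to 16 entries, preserving the
+//                                       // existing ones (keepdata = true)
+//   UINT32 first = offsets[0];          // bounds-asserted element access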
+
+
+
+#endif
diff --git a/src/lib/util/flac.c b/src/lib/util/flac.c
new file mode 100644
index 00000000000..aaefe301779
--- /dev/null
+++ b/src/lib/util/flac.c
@@ -0,0 +1,598 @@
+/***************************************************************************
+
+ flac.c
+
+ FLAC compression wrappers
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "flac.h"
+#include <assert.h>
+#include <new>
+
+
+//**************************************************************************
+// FLAC ENCODER
+//**************************************************************************
+
+//-------------------------------------------------
+// flac_encoder - constructors
+//-------------------------------------------------
+
+flac_encoder::flac_encoder()
+{
+ init_common();
+}
+
+
+flac_encoder::flac_encoder(void *buffer, UINT32 buflength)
+{
+ init_common();
+ reset(buffer, buflength);
+}
+
+
+flac_encoder::flac_encoder(core_file &file)
+{
+ init_common();
+ reset(file);
+}
+
+
+//-------------------------------------------------
+// ~flac_encoder - destructor
+//-------------------------------------------------
+
+flac_encoder::~flac_encoder()
+{
+ // delete the encoder
+ FLAC__stream_encoder_delete(m_encoder);
+}
+
+
+//-------------------------------------------------
+// reset - reset state with the original
+// parameters
+//-------------------------------------------------
+
+bool flac_encoder::reset()
+{
+ // configure the output
+ m_compressed_offset = 0;
+ m_ignore_bytes = m_strip_metadata ? 4 : 0;
+ m_found_audio = !m_strip_metadata;
+
+ // configure the encoder in a standard way
+ // note we do this on each reset; if we don't, results are NOT consistent!
+ FLAC__stream_encoder_set_verify(m_encoder, false);
+// FLAC__stream_encoder_set_do_md5(m_encoder, false);
+ FLAC__stream_encoder_set_compression_level(m_encoder, 8);
+ FLAC__stream_encoder_set_channels(m_encoder, m_channels);
+ FLAC__stream_encoder_set_bits_per_sample(m_encoder, 16);
+ FLAC__stream_encoder_set_sample_rate(m_encoder, m_sample_rate);
+ FLAC__stream_encoder_set_total_samples_estimate(m_encoder, 0);
+ FLAC__stream_encoder_set_streamable_subset(m_encoder, false);
+ FLAC__stream_encoder_set_blocksize(m_encoder, m_block_size);
+
+ // re-start processing
+ return (FLAC__stream_encoder_init_stream(m_encoder, write_callback_static, NULL, NULL, NULL, this) == FLAC__STREAM_ENCODER_INIT_STATUS_OK);
+}
+
+
+//-------------------------------------------------
+// reset - reset state with new memory parameters
+//-------------------------------------------------
+
+bool flac_encoder::reset(void *buffer, UINT32 buflength)
+{
+ // configure the output
+ m_compressed_start = reinterpret_cast<FLAC__byte *>(buffer);
+ m_compressed_length = buflength;
+ m_file = NULL;
+ return reset();
+}
+
+
+//-------------------------------------------------
+// reset - reset state with new file parameters
+//-------------------------------------------------
+
+bool flac_encoder::reset(core_file &file)
+{
+ // configure the output
+ m_compressed_start = NULL;
+ m_compressed_length = 0;
+ m_file = &file;
+ return reset();
+}
+
+
+//-------------------------------------------------
+// encode_interleaved - encode a buffer with
+// interleaved samples
+//-------------------------------------------------
+
+bool flac_encoder::encode_interleaved(const INT16 *samples, UINT32 samples_per_channel, bool swap_endian)
+{
+ int shift = swap_endian ? 8 : 0;
+
+ // loop over source samples
+ int num_channels = FLAC__stream_encoder_get_channels(m_encoder);
+ UINT32 srcindex = 0;
+ while (samples_per_channel != 0)
+ {
+ // process in batches of 2k samples
+ FLAC__int32 converted_buffer[2048];
+ FLAC__int32 *dest = converted_buffer;
+ UINT32 cur_samples = MIN(ARRAY_LENGTH(converted_buffer) / num_channels, samples_per_channel);
+
+ // convert a buffer's worth
+ for (UINT32 sampnum = 0; sampnum < cur_samples; sampnum++)
+ for (int channel = 0; channel < num_channels; channel++, srcindex++)
+ *dest++ = INT16((UINT16(samples[srcindex]) << shift) | (UINT16(samples[srcindex]) >> shift));
+
+ // process this batch
+ if (!FLAC__stream_encoder_process_interleaved(m_encoder, converted_buffer, cur_samples))
+ return false;
+ samples_per_channel -= cur_samples;
+ }
+ return true;
+}
+
+
+//-------------------------------------------------
+// encode - encode a buffer with individual
+// sample streams
+//-------------------------------------------------
+
+bool flac_encoder::encode(INT16 *const *samples, UINT32 samples_per_channel, bool swap_endian)
+{
+ int shift = swap_endian ? 8 : 0;
+
+ // loop over source samples
+ int num_channels = FLAC__stream_encoder_get_channels(m_encoder);
+ UINT32 srcindex = 0;
+ while (samples_per_channel != 0)
+ {
+ // process in batches of 2k samples
+ FLAC__int32 converted_buffer[2048];
+ FLAC__int32 *dest = converted_buffer;
+ UINT32 cur_samples = MIN(ARRAY_LENGTH(converted_buffer) / num_channels, samples_per_channel);
+
+ // convert a buffer's worth
+ for (UINT32 sampnum = 0; sampnum < cur_samples; sampnum++, srcindex++)
+ for (int channel = 0; channel < num_channels; channel++)
+ *dest++ = INT16((UINT16(samples[channel][srcindex]) << shift) | (UINT16(samples[channel][srcindex]) >> shift));
+
+ // process this batch
+ if (!FLAC__stream_encoder_process_interleaved(m_encoder, converted_buffer, cur_samples))
+ return false;
+ samples_per_channel -= cur_samples;
+ }
+ return true;
+}
+
+
+//-------------------------------------------------
+// finish - complete encoding and flush the
+// stream
+//-------------------------------------------------
+
+UINT32 flac_encoder::finish()
+{
+ // process the data and return the amount written
+ FLAC__stream_encoder_finish(m_encoder);
+ return (m_file != NULL) ? core_ftell(m_file) : m_compressed_offset;
+}
+
+
+//-------------------------------------------------
+// init_common - common initialization
+//-------------------------------------------------
+
+void flac_encoder::init_common()
+{
+ // allocate the encoder
+ m_encoder = FLAC__stream_encoder_new();
+ if (m_encoder == NULL)
+ throw std::bad_alloc();
+
+ // initialize default state
+ m_file = NULL;
+ m_compressed_offset = 0;
+ m_compressed_start = NULL;
+ m_compressed_length = 0;
+ m_sample_rate = 44100;
+ m_channels = 2;
+ m_block_size = 0;
+ m_strip_metadata = false;
+ m_ignore_bytes = 0;
+ m_found_audio = false;
+}
+
+
+//-------------------------------------------------
+// write_callback - handle writes to the
+// output stream
+//-------------------------------------------------
+
+FLAC__StreamEncoderWriteStatus flac_encoder::write_callback_static(const FLAC__StreamEncoder *encoder, const FLAC__byte buffer[], size_t bytes, unsigned samples, unsigned current_frame, void *client_data)
+{
+ return reinterpret_cast<flac_encoder *>(client_data)->write_callback(buffer, bytes, samples, current_frame);
+}
+
+FLAC__StreamEncoderWriteStatus flac_encoder::write_callback(const FLAC__byte buffer[], size_t bytes, unsigned samples, unsigned current_frame)
+{
+ // loop over output data
+ size_t offset = 0;
+ while (offset < bytes)
+ {
+ // if we're ignoring, continue to do so
+ if (m_ignore_bytes != 0)
+ {
+ int ignore = MIN(bytes - offset, m_ignore_bytes);
+ offset += ignore;
+ m_ignore_bytes -= ignore;
+ }
+
+ // if we haven't hit the end of metadata, process a new piece
+ else if (!m_found_audio)
+ {
+ assert(bytes - offset >= 4);
+ m_found_audio = ((buffer[offset] & 0x80) != 0);
+ m_ignore_bytes = (buffer[offset + 1] << 16) | (buffer[offset + 2] << 8) | buffer[offset + 3];
+ offset += 4;
+ }
+
+ // otherwise process as audio data and copy to the output
+ else
+ {
+ int count = bytes - offset;
+ if (m_file != NULL)
+ core_fwrite(m_file, buffer, count);
+ else
+ {
+ if (m_compressed_offset + count <= m_compressed_length)
+ memcpy(m_compressed_start + m_compressed_offset, buffer, count);
+ m_compressed_offset += count;
+ }
+ offset += count;
+ }
+ }
+ return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
+}
+
+
+
+//**************************************************************************
+// FLAC DECODER
+//**************************************************************************
+
+//-------------------------------------------------
+// flac_decoder - constructor
+//-------------------------------------------------
+
+flac_decoder::flac_decoder()
+ : m_decoder(FLAC__stream_decoder_new()),
+ m_file(NULL),
+ m_compressed_offset(0),
+ m_compressed_start(NULL),
+ m_compressed_length(0),
+ m_compressed2_start(NULL),
+ m_compressed2_length(0)
+{
+}
+
+
+//-------------------------------------------------
+// flac_decoder - constructor
+//-------------------------------------------------
+
+flac_decoder::flac_decoder(const void *buffer, UINT32 length, const void *buffer2, UINT32 length2)
+ : m_decoder(FLAC__stream_decoder_new()),
+ m_file(NULL),
+ m_compressed_offset(0),
+ m_compressed_start(reinterpret_cast<const FLAC__byte *>(buffer)),
+ m_compressed_length(length),
+ m_compressed2_start(reinterpret_cast<const FLAC__byte *>(buffer2)),
+ m_compressed2_length(length2)
+{
+ reset();
+}
+
+
+//-------------------------------------------------
+// flac_decoder - constructor
+//-------------------------------------------------
+
+flac_decoder::flac_decoder(core_file &file)
+ : m_decoder(FLAC__stream_decoder_new()),
+ m_file(&file),
+ m_compressed_offset(0),
+ m_compressed_start(NULL),
+ m_compressed_length(0),
+ m_compressed2_start(NULL),
+ m_compressed2_length(0)
+{
+ reset();
+}
+
+
+//-------------------------------------------------
+// flac_decoder - destructor
+//-------------------------------------------------
+
+flac_decoder::~flac_decoder()
+{
+ FLAC__stream_decoder_delete(m_decoder);
+}
+
+
+//-------------------------------------------------
+// reset - reset state with the original
+// parameters
+//-------------------------------------------------
+
+bool flac_decoder::reset()
+{
+ m_compressed_offset = 0;
+ if (FLAC__stream_decoder_init_stream(m_decoder, &flac_decoder::read_callback_static, NULL, NULL, NULL, NULL, &flac_decoder::write_callback_static, NULL, &flac_decoder::error_callback_static, this) != FLAC__STREAM_DECODER_INIT_STATUS_OK)
+ return false;
+ return FLAC__stream_decoder_process_until_end_of_metadata(m_decoder);
+}
+
+
+//-------------------------------------------------
+// reset - reset state with new memory parameters
+//-------------------------------------------------
+
+bool flac_decoder::reset(const void *buffer, UINT32 length, const void *buffer2, UINT32 length2)
+{
+ m_file = NULL;
+ m_compressed_start = reinterpret_cast<const FLAC__byte *>(buffer);
+ m_compressed_length = length;
+ m_compressed2_start = reinterpret_cast<const FLAC__byte *>(buffer2);
+ m_compressed2_length = length2;
+ return reset();
+}
+
+
+//-------------------------------------------------
+// reset - reset state with new memory parameters
+// and a custom-generated header
+//-------------------------------------------------
+
+bool flac_decoder::reset(UINT32 sample_rate, UINT8 num_channels, UINT32 block_size, const void *buffer, UINT32 length)
+{
+ // modify the template header with our parameters
+ static const UINT8 s_header_template[0x2a] =
+ {
+ 0x66, 0x4C, 0x61, 0x43, // +00: 'fLaC' stream header
+ 0x80, // +04: metadata block type 0 (STREAMINFO),
+ // flagged as last block
+ 0x00, 0x00, 0x22, // +05: metadata block length = 0x22
+ 0x00, 0x00, // +08: minimum block size
+ 0x00, 0x00, // +0A: maximum block size
+ 0x00, 0x00, 0x00, // +0C: minimum frame size (0 == unknown)
+ 0x00, 0x00, 0x00, // +0F: maximum frame size (0 == unknown)
+ 0x0A, 0xC4, 0x42, 0xF0, 0x00, 0x00, 0x00, 0x00, // +12: sample rate (0x0ac44 == 44100),
+ // numchannels (2), sample bits (16),
+ // samples in stream (0 == unknown)
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // +1A: MD5 signature (0 == none)
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 //
+ // +2A: start of stream data
+ };
+ memcpy(m_custom_header, s_header_template, sizeof(s_header_template));
+ m_custom_header[0x08] = m_custom_header[0x0a] = block_size >> 8;
+ m_custom_header[0x09] = m_custom_header[0x0b] = block_size & 0xff;
+ m_custom_header[0x12] = sample_rate >> 12;
+ m_custom_header[0x13] = sample_rate >> 4;
+ m_custom_header[0x14] = (sample_rate << 4) | ((num_channels - 1) << 1);
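+ // sanity check on the packing above: for the 44100 Hz / 2 channel case,
+ // 44100 = 0x0ac44, so these three bytes come out to 0x0a, 0xc4 and 0x42,
+ // matching the values already present at offset +12 in s_header_template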
+
+ // configure the header ahead of the provided buffer
+ m_file = NULL;
+ m_compressed_start = reinterpret_cast<const FLAC__byte *>(m_custom_header);
+ m_compressed_length = sizeof(m_custom_header);
+ m_compressed2_start = reinterpret_cast<const FLAC__byte *>(buffer);
+ m_compressed2_length = length;
+ return reset();
+}
+
+
+//-------------------------------------------------
+// reset - reset state with new file parameter
+//-------------------------------------------------
+
+bool flac_decoder::reset(core_file &file)
+{
+ m_file = &file;
+ m_compressed_start = NULL;
+ m_compressed_length = 0;
+ m_compressed2_start = NULL;
+ m_compressed2_length = 0;
+ return reset();
+}
+
+
+//-------------------------------------------------
+// decode_interleaved - decode to an interleaved
+// sound stream
+//-------------------------------------------------
+
+bool flac_decoder::decode_interleaved(INT16 *samples, UINT32 num_samples, bool swap_endian)
+{
+ // configure the uncompressed buffer
+ memset(m_uncompressed_start, 0, sizeof(m_uncompressed_start));
+ m_uncompressed_start[0] = samples;
+ m_uncompressed_offset = 0;
+ m_uncompressed_length = num_samples;
+ m_uncompressed_swap = swap_endian;
+
+ // loop until we get everything we want
+ while (m_uncompressed_offset < m_uncompressed_length)
+ if (!FLAC__stream_decoder_process_single(m_decoder))
+ return false;
+ return true;
+}
+
+
+//-------------------------------------------------
+// decode - decode to multiple independent
+// data streams
+//-------------------------------------------------
+
+bool flac_decoder::decode(INT16 **samples, UINT32 num_samples, bool swap_endian)
+{
+ // make sure we don't have too many channels
+ int chans = channels();
+ if (chans > ARRAY_LENGTH(m_uncompressed_start))
+ return false;
+
+ // configure the uncompressed buffer
+ memset(m_uncompressed_start, 0, sizeof(m_uncompressed_start));
+ for (int curchan = 0; curchan < chans; curchan++)
+ m_uncompressed_start[curchan] = samples[curchan];
+ m_uncompressed_offset = 0;
+ m_uncompressed_length = num_samples;
+ m_uncompressed_swap = swap_endian;
+
+ // loop until we get everything we want
+ while (m_uncompressed_offset < m_uncompressed_length)
+ if (!FLAC__stream_decoder_process_single(m_decoder))
+ return false;
+ return true;
+}
+
+
+//-------------------------------------------------
+// finish - finish up the decode
+//-------------------------------------------------
+
+void flac_decoder::finish()
+{
+ FLAC__stream_decoder_finish(m_decoder);
+}
+
+
+//-------------------------------------------------
+// read_callback - handle reads from the input
+// stream
+//-------------------------------------------------
+
+FLAC__StreamDecoderReadStatus flac_decoder::read_callback_static(const FLAC__StreamDecoder *decoder, FLAC__byte buffer[], size_t *bytes, void *client_data)
+{
+ return reinterpret_cast<flac_decoder *>(client_data)->read_callback(buffer, bytes);
+}
+
+FLAC__StreamDecoderReadStatus flac_decoder::read_callback(FLAC__byte buffer[], size_t *bytes)
+{
+ UINT32 expected = *bytes;
+
+ // if a file, just read
+ if (m_file != NULL)
+ *bytes = core_fread(m_file, buffer, expected);
+
+ // otherwise, copy from memory
+ else
+ {
+ // copy from primary buffer first
+ UINT32 outputpos = 0;
+ if (outputpos < *bytes && m_compressed_offset < m_compressed_length)
+ {
+ UINT32 bytes_to_copy = MIN(*bytes - outputpos, m_compressed_length - m_compressed_offset);
+ memcpy(&buffer[outputpos], m_compressed_start + m_compressed_offset, bytes_to_copy);
+ outputpos += bytes_to_copy;
+ m_compressed_offset += bytes_to_copy;
+ }
+
+ // once we're out of that, copy from the secondary buffer
+ if (outputpos < *bytes && m_compressed_offset < m_compressed_length + m_compressed2_length)
+ {
+ UINT32 bytes_to_copy = MIN(*bytes - outputpos, m_compressed2_length - (m_compressed_offset - m_compressed_length));
+ memcpy(&buffer[outputpos], m_compressed2_start + m_compressed_offset - m_compressed_length, bytes_to_copy);
+ outputpos += bytes_to_copy;
+ m_compressed_offset += bytes_to_copy;
+ }
+ *bytes = outputpos;
+ }
+
+ // return based on whether we ran out of data
+ return (*bytes < expected) ? FLAC__STREAM_DECODER_READ_STATUS_END_OF_STREAM : FLAC__STREAM_DECODER_READ_STATUS_CONTINUE;
+}
+
+
+//-------------------------------------------------
+// write_callback - handle writes to the output
+// stream
+//-------------------------------------------------
+
+FLAC__StreamDecoderWriteStatus flac_decoder::write_callback_static(const FLAC__StreamDecoder *decoder, const ::FLAC__Frame *frame, const FLAC__int32 * const buffer[], void *client_data)
+{
+ return reinterpret_cast<flac_decoder *>(client_data)->write_callback(frame, buffer);
+}
+
+FLAC__StreamDecoderWriteStatus flac_decoder::write_callback(const ::FLAC__Frame *frame, const FLAC__int32 * const buffer[])
+{
+ assert(frame->header.channels == channels());
+
+ // interleaved case
+ int shift = m_uncompressed_swap ? 8 : 0;
+ int blocksize = frame->header.blocksize;
+ if (m_uncompressed_start[1] == NULL)
+ {
+ INT16 *dest = m_uncompressed_start[0] + m_uncompressed_offset * frame->header.channels;
+ for (int sampnum = 0; sampnum < blocksize && m_uncompressed_offset < m_uncompressed_length; sampnum++, m_uncompressed_offset++)
+ for (int chan = 0; chan < frame->header.channels; chan++)
+ *dest++ = INT16((UINT16(buffer[chan][sampnum]) << shift) | (UINT16(buffer[chan][sampnum]) >> shift));
+ }
+
+ // non-interleaved case
+ else
+ {
+ for (int sampnum = 0; sampnum < blocksize && m_uncompressed_offset < m_uncompressed_length; sampnum++, m_uncompressed_offset++)
+ for (int chan = 0; chan < frame->header.channels; chan++)
+ if (m_uncompressed_start[chan] != NULL)
+ m_uncompressed_start[chan][m_uncompressed_offset] = INT16((UINT16(buffer[chan][sampnum]) << shift) | (UINT16(buffer[chan][sampnum]) >> shift));
+ }
+ return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
+}
+
+
+//-------------------------------------------------
+// error_callback - handle errors (ignore them)
+//-------------------------------------------------
+
+void flac_decoder::error_callback_static(const FLAC__StreamDecoder *decoder, FLAC__StreamDecoderErrorStatus status, void *client_data)
+{
+}
diff --git a/src/lib/util/flac.h b/src/lib/util/flac.h
new file mode 100644
index 00000000000..da37b49b75b
--- /dev/null
+++ b/src/lib/util/flac.h
@@ -0,0 +1,167 @@
+/***************************************************************************
+
+ flac.h
+
+ FLAC compression wrappers
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#pragma once
+
+#ifndef __FLAC_H__
+#define __FLAC_H__
+
+#include "osdcore.h"
+#include "corefile.h"
+#include "../../lib/libflac/include/flac/all.h"
+
+
+//**************************************************************************
+// TYPE DEFINITIONS
+//**************************************************************************
+
+// ======================> flac_encoder
+
+class flac_encoder
+{
+public:
+ // construction/destruction
+ flac_encoder();
+ flac_encoder(void *buffer, UINT32 buflength);
+ flac_encoder(core_file &file);
+ ~flac_encoder();
+
+ // configuration
+ void set_sample_rate(UINT32 sample_rate) { m_sample_rate = sample_rate; }
+ void set_num_channels(UINT8 num_channels) { m_channels = num_channels; }
+ void set_block_size(UINT32 block_size) { m_block_size = block_size; }
+ void set_strip_metadata(bool strip) { m_strip_metadata = strip; }
+
+ // getters (valid after reset)
+ FLAC__StreamEncoderState state() const { return FLAC__stream_encoder_get_state(m_encoder); }
+ const char *state_string() const { return FLAC__stream_encoder_get_resolved_state_string(m_encoder); }
+
+ // reset
+ bool reset();
+ bool reset(void *buffer, UINT32 buflength);
+ bool reset(core_file &file);
+
+ // encode a buffer
+ bool encode_interleaved(const INT16 *samples, UINT32 samples_per_channel, bool swap_endian = false);
+ bool encode(INT16 *const *samples, UINT32 samples_per_channel, bool swap_endian = false);
+
+ // finish up
+ UINT32 finish();
+
+private:
+ // internal helpers
+ void init_common();
+ static FLAC__StreamEncoderWriteStatus write_callback_static(const FLAC__StreamEncoder *encoder, const FLAC__byte buffer[], size_t bytes, unsigned samples, unsigned current_frame, void *client_data);
+ FLAC__StreamEncoderWriteStatus write_callback(const FLAC__byte buffer[], size_t bytes, unsigned samples, unsigned current_frame);
+
+ // internal state
+ FLAC__StreamEncoder * m_encoder; // actual encoder
+ core_file * m_file; // output file
+ UINT32 m_compressed_offset; // current offset within the compressed stream
+ FLAC__byte * m_compressed_start; // start of compressed data
+ UINT32 m_compressed_length; // length of the compressed stream
+
+ // parameters
+ UINT32 m_sample_rate; // sample rate
+ UINT8 m_channels; // number of channels
+ UINT32 m_block_size; // block size
+
+ // header stripping
+ bool m_strip_metadata; // strip the metadata?
+ UINT32 m_ignore_bytes; // how many bytes to ignore when writing
+ bool m_found_audio; // have we hit the audio yet?
+};
+
+
+// ======================> flac_decoder
+
+class flac_decoder
+{
+public:
+ // construction/destruction
+ flac_decoder();
+ flac_decoder(const void *buffer, UINT32 length, const void *buffer2 = NULL, UINT32 length2 = 0);
+ flac_decoder(core_file &file);
+ ~flac_decoder();
+
+ // getters (valid after reset)
+ UINT32 sample_rate() const { return FLAC__stream_decoder_get_sample_rate(m_decoder); }
+ UINT8 channels() const { return FLAC__stream_decoder_get_channels(m_decoder); }
+ UINT32 block_size() const { return FLAC__stream_decoder_get_blocksize(m_decoder); }
+ FLAC__StreamDecoderState state() const { return FLAC__stream_decoder_get_state(m_decoder); }
+ const char *state_string() const { return FLAC__stream_decoder_get_resolved_state_string(m_decoder); }
+
+ // reset
+ bool reset();
+ bool reset(const void *buffer, UINT32 length, const void *buffer2 = NULL, UINT32 length2 = 0);
+ bool reset(UINT32 sample_rate, UINT8 num_channels, UINT32 block_size, const void *buffer, UINT32 length);
+ bool reset(core_file &file);
+
+ // decode to a buffer; num_samples must be a multiple of the block size
+ bool decode_interleaved(INT16 *samples, UINT32 num_samples, bool swap_endian = false);
+ bool decode(INT16 **samples, UINT32 num_samples, bool swap_endian = false);
+
+ // finish up
+ void finish();
+
+private:
+ // internal helpers
+ static FLAC__StreamDecoderReadStatus read_callback_static(const FLAC__StreamDecoder *decoder, FLAC__byte buffer[], size_t *bytes, void *client_data);
+ FLAC__StreamDecoderReadStatus read_callback(FLAC__byte buffer[], size_t *bytes);
+ static FLAC__StreamDecoderWriteStatus write_callback_static(const FLAC__StreamDecoder *decoder, const ::FLAC__Frame *frame, const FLAC__int32 * const buffer[], void *client_data);
+ FLAC__StreamDecoderWriteStatus write_callback(const ::FLAC__Frame *frame, const FLAC__int32 * const buffer[]);
+ static void error_callback_static(const FLAC__StreamDecoder *decoder, FLAC__StreamDecoderErrorStatus status, void *client_data);
+
+ // output state
+ FLAC__StreamDecoder * m_decoder; // actual decoder
+ core_file * m_file; // input file
+ UINT32 m_compressed_offset; // current offset in compressed data
+ const FLAC__byte * m_compressed_start; // start of compressed data
+ UINT32 m_compressed_length; // length of compressed data
+ const FLAC__byte * m_compressed2_start; // start of secondary compressed data
+ UINT32 m_compressed2_length; // length of secondary compressed data
+ INT16 * m_uncompressed_start[8];// pointer to start of uncompressed data (up to 8 streams)
+ UINT32 m_uncompressed_offset; // current position in uncompressed data
+ UINT32 m_uncompressed_length; // length of uncompressed data
+ bool m_uncompressed_swap; // swap uncompressed sample data
+ UINT8 m_custom_header[0x2a]; // custom header
+};
+
+
+#endif // __FLAC_H__
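
As a quick orientation to the interface declared above, here is a minimal
round-trip sketch, assuming arbitrary buffer sizes and sample data, that
encodes one stereo block into memory and decodes it back:

    #include "flac.h"

    static bool flac_memory_roundtrip()
    {
        INT16 input[1152 * 2] = { 0 };      // one block of stereo silence (arbitrary test data)
        UINT8 compressed[65536];            // assumed large enough for this test
        INT16 output[1152 * 2];

        // encode into the memory buffer
        flac_encoder encoder;
        encoder.set_sample_rate(44100);
        encoder.set_num_channels(2);
        encoder.set_block_size(1152);
        if (!encoder.reset(compressed, sizeof(compressed)))
            return false;
        if (!encoder.encode_interleaved(input, 1152))
            return false;
        UINT32 complen = encoder.finish();

        // decode it back out of the same buffer
        flac_decoder decoder(compressed, complen);
        if (!decoder.decode_interleaved(output, 1152))
            return false;
        decoder.finish();
        return true;
    }

The reset(sample_rate, num_channels, block_size, buffer, length) overload
covers the case where only raw FLAC frames are stored and the STREAMINFO
header has to be synthesized from known parameters.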
diff --git a/src/lib/util/harddisk.c b/src/lib/util/harddisk.c
index ffae861a6d4..64af494405a 100644
--- a/src/lib/util/harddisk.c
+++ b/src/lib/util/harddisk.c
@@ -46,13 +46,10 @@
TYPE DEFINITIONS
***************************************************************************/
-struct _hard_disk_file
+struct hard_disk_file
{
chd_file * chd; /* CHD file */
hard_disk_info info; /* hard disk info */
- UINT32 hunksectors; /* sectors per hunk */
- UINT32 cachehunk; /* which hunk is cached */
- UINT8 * cache; /* cache of the current hunk */
};
@@ -70,7 +67,7 @@ hard_disk_file *hard_disk_open(chd_file *chd)
{
int cylinders, heads, sectors, sectorbytes;
hard_disk_file *file;
- char metadata[256];
+ astring metadata;
chd_error err;
/* punt if no CHD */
@@ -78,7 +75,7 @@ hard_disk_file *hard_disk_open(chd_file *chd)
return NULL;
/* read the hard disk metadata */
- err = chd_get_metadata(chd, HARD_DISK_METADATA_TAG, 0, metadata, sizeof(metadata), NULL, NULL, NULL);
+ err = chd->read_metadata(HARD_DISK_METADATA_TAG, 0, metadata);
if (err != CHDERR_NONE)
return NULL;
@@ -97,17 +94,6 @@ hard_disk_file *hard_disk_open(chd_file *chd)
file->info.heads = heads;
file->info.sectors = sectors;
file->info.sectorbytes = sectorbytes;
- file->hunksectors = chd_get_header(chd)->hunkbytes / file->info.sectorbytes;
- file->cachehunk = -1;
-
- /* allocate a cache */
- file->cache = (UINT8 *)malloc(chd_get_header(chd)->hunkbytes);
- if (file->cache == NULL)
- {
- free(file);
- return NULL;
- }
-
return file;
}
@@ -118,9 +104,6 @@ hard_disk_file *hard_disk_open(chd_file *chd)
void hard_disk_close(hard_disk_file *file)
{
- /* free the cache */
- if (file->cache != NULL)
- free(file->cache);
free(file);
}
@@ -154,21 +137,8 @@ hard_disk_info *hard_disk_get_info(hard_disk_file *file)
UINT32 hard_disk_read(hard_disk_file *file, UINT32 lbasector, void *buffer)
{
- UINT32 hunknum = lbasector / file->hunksectors;
- UINT32 sectoroffs = lbasector % file->hunksectors;
-
- /* if we haven't cached this hunk, read it now */
- if (file->cachehunk != hunknum)
- {
- chd_error err = chd_read(file->chd, hunknum, file->cache);
- if (err != CHDERR_NONE)
- return 0;
- file->cachehunk = hunknum;
- }
-
- /* copy out the requested sector */
- memcpy(buffer, &file->cache[sectoroffs * file->info.sectorbytes], file->info.sectorbytes);
- return 1;
+ chd_error err = file->chd->read_units(lbasector, buffer);
+ return (err == CHDERR_NONE);
}
@@ -179,23 +149,6 @@ UINT32 hard_disk_read(hard_disk_file *file, UINT32 lbasector, void *buffer)
UINT32 hard_disk_write(hard_disk_file *file, UINT32 lbasector, const void *buffer)
{
- UINT32 hunknum = lbasector / file->hunksectors;
- UINT32 sectoroffs = lbasector % file->hunksectors;
- chd_error err;
-
- /* if we haven't cached this hunk, read it now */
- if (file->cachehunk != hunknum)
- {
- err = chd_read(file->chd, hunknum, file->cache);
- if (err != CHDERR_NONE)
- return 0;
- file->cachehunk = hunknum;
- }
-
- /* copy in the requested data */
- memcpy(&file->cache[sectoroffs * file->info.sectorbytes], buffer, file->info.sectorbytes);
-
- /* write it back out */
- err = chd_write(file->chd, hunknum, file->cache);
- return (err == CHDERR_NONE) ? 1 : 0;
+ chd_error err = file->chd->write_units(lbasector, buffer);
+ return (err == CHDERR_NONE);
}
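
For illustration, a minimal sketch of the simplified read path, assuming
chd points at an already-open hard disk chd_file (the function and variable
names here are hypothetical):

    #include <stdlib.h>
    #include "harddisk.h"

    static bool read_first_sector(chd_file *chd)
    {
        hard_disk_file *disk = hard_disk_open(chd);
        if (disk == NULL)
            return false;

        hard_disk_info *info = hard_disk_get_info(disk);
        UINT8 *sector = (UINT8 *)malloc(info->sectorbytes);
        bool ok = (sector != NULL && hard_disk_read(disk, 0, sector) != 0);
        // on success, 'sector' holds LBA 0, fetched through chd_file::read_units()
        // instead of the old per-hunk cache

        free(sector);
        hard_disk_close(disk);
        return ok;
    }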
diff --git a/src/lib/util/harddisk.h b/src/lib/util/harddisk.h
index 9f7c5f402e1..09351be9072 100644
--- a/src/lib/util/harddisk.h
+++ b/src/lib/util/harddisk.h
@@ -50,10 +50,9 @@
TYPE DEFINITIONS
***************************************************************************/
-typedef struct _hard_disk_file hard_disk_file;
+struct hard_disk_file;
-typedef struct _hard_disk_info hard_disk_info;
-struct _hard_disk_info
+struct hard_disk_info
{
UINT32 cylinders;
UINT32 heads;
diff --git a/src/lib/util/hashing.c b/src/lib/util/hashing.c
new file mode 100644
index 00000000000..97afb184156
--- /dev/null
+++ b/src/lib/util/hashing.c
@@ -0,0 +1,282 @@
+/***************************************************************************
+
+ hashing.c
+
+ Hashing helper classes.
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "hashing.h"
+
+
+//**************************************************************************
+// CONSTANTS
+//**************************************************************************
+
+const crc16_t crc16_t::null = { 0 };
+const crc32_t crc32_t::null = { 0 };
+const md5_t md5_t::null = { { 0 } };
+const sha1_t sha1_t::null = { { 0 } };
+
+
+
+//**************************************************************************
+// INLINE FUNCTIONS
+//**************************************************************************
+
+//-------------------------------------------------
+// char_to_hex - return the hex value of a
+// character
+//-------------------------------------------------
+
+inline int char_to_hex(char c)
+{
+ if (c >= '0' && c <= '9')
+ return c - '0';
+ if (c >= 'a' && c <= 'f')
+ return 10 + c - 'a';
+ if (c >= 'A' && c <= 'F')
+ return 10 + c - 'A';
+ return -1;
+}
+
+
+
+//**************************************************************************
+// SHA-1 HELPERS
+//**************************************************************************
+
+//-------------------------------------------------
+// from_string - convert from a string
+//-------------------------------------------------
+
+bool sha1_t::from_string(const char *string)
+{
+ // must be at least long enough to hold everything
+ if (strlen(string) < 2 * sizeof(m_raw))
+ return false;
+
+ // iterate through our raw buffer
+ for (int bytenum = 0; bytenum < sizeof(m_raw); bytenum++)
+ {
+ int upper = char_to_hex(*string++);
+ int lower = char_to_hex(*string++);
+ if (upper == -1 || lower == -1)
+ return false;
+ m_raw[bytenum] = (upper << 4) | lower;
+ }
+ return true;
+}
+
+
+//-------------------------------------------------
+// as_string - convert to a string
+//-------------------------------------------------
+
+const char *sha1_t::as_string(astring &buffer)
+{
+ buffer.reset();
+ for (int i = 0; i < ARRAY_LENGTH(m_raw); i++)
+ buffer.catformat("%02x", m_raw[i]);
+ return buffer;
+}
+
+
+//**************************************************************************
+// MD-5 HELPERS
+//**************************************************************************
+
+//-------------------------------------------------
+// from_string - convert from a string
+//-------------------------------------------------
+
+bool md5_t::from_string(const char *string)
+{
+ // must be at least long enough to hold everything
+ if (strlen(string) < 2 * sizeof(m_raw))
+ return false;
+
+ // iterate through our raw buffer
+ for (int bytenum = 0; bytenum < sizeof(m_raw); bytenum++)
+ {
+ int upper = char_to_hex(*string++);
+ int lower = char_to_hex(*string++);
+ if (upper == -1 || lower == -1)
+ return false;
+ m_raw[bytenum] = (upper << 4) | lower;
+ }
+ return true;
+}
+
+
+//-------------------------------------------------
+// as_string - convert to a string
+//-------------------------------------------------
+
+const char *md5_t::as_string(astring &buffer)
+{
+ buffer.reset();
+ for (int i = 0; i < ARRAY_LENGTH(m_raw); i++)
+ buffer.catformat("%02x", m_raw[i]);
+ return buffer;
+}
+
+
+
+//**************************************************************************
+// CRC-32 HELPERS
+//**************************************************************************
+
+//-------------------------------------------------
+// from_string - convert from a string
+//-------------------------------------------------
+
+bool crc32_t::from_string(const char *string)
+{
+ // must be at least long enough to hold everything
+ if (strlen(string) < 2 * sizeof(m_raw))
+ return false;
+
+ // iterate through our raw buffer
+ m_raw = 0;
+ for (int bytenum = 0; bytenum < sizeof(m_raw) * 2; bytenum++)
+ {
+ int nibble = char_to_hex(*string++);
+ if (nibble == -1)
+ return false;
+ m_raw = (m_raw << 4) | nibble;
+ }
+ return true;
+}
+
+
+//-------------------------------------------------
+// as_string - convert to a string
+//-------------------------------------------------
+
+const char *crc32_t::as_string(astring &buffer)
+{
+ return buffer.format("%08x", m_raw);
+}
+
+
+
+//**************************************************************************
+// CRC-16 HELPERS
+//**************************************************************************
+
+//-------------------------------------------------
+// from_string - convert from a string
+//-------------------------------------------------
+
+bool crc16_t::from_string(const char *string)
+{
+ // must be at least long enough to hold everything
+ if (strlen(string) < 2 * sizeof(m_raw))
+ return false;
+
+ // iterate through our raw buffer
+ m_raw = 0;
+ for (int bytenum = 0; bytenum < sizeof(m_raw) * 2; bytenum++)
+ {
+ int nibble = char_to_hex(*string++);
+ if (nibble == -1)
+ return false;
+ m_raw = (m_raw << 4) | nibble;
+ }
+ return true;
+}
+
+
+//-------------------------------------------------
+// as_string - convert to a string
+//-------------------------------------------------
+
+const char *crc16_t::as_string(astring &buffer)
+{
+ return buffer.format("%04x", m_raw);
+}
+
+
+//-------------------------------------------------
+// append - hash a block of data, appending to
+// the currently-accumulated value
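+// (CRC-16/CCITT: polynomial 0x1021, MSB-first, initial value 0xffff set by reset)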
+//-------------------------------------------------
+
+void crc16_creator::append(const void *data, UINT32 length)
+{
+ static const UINT16 s_table[256] =
+ {
+ 0x0000, 0x1021, 0x2042, 0x3063, 0x4084, 0x50a5, 0x60c6, 0x70e7,
+ 0x8108, 0x9129, 0xa14a, 0xb16b, 0xc18c, 0xd1ad, 0xe1ce, 0xf1ef,
+ 0x1231, 0x0210, 0x3273, 0x2252, 0x52b5, 0x4294, 0x72f7, 0x62d6,
+ 0x9339, 0x8318, 0xb37b, 0xa35a, 0xd3bd, 0xc39c, 0xf3ff, 0xe3de,
+ 0x2462, 0x3443, 0x0420, 0x1401, 0x64e6, 0x74c7, 0x44a4, 0x5485,
+ 0xa56a, 0xb54b, 0x8528, 0x9509, 0xe5ee, 0xf5cf, 0xc5ac, 0xd58d,
+ 0x3653, 0x2672, 0x1611, 0x0630, 0x76d7, 0x66f6, 0x5695, 0x46b4,
+ 0xb75b, 0xa77a, 0x9719, 0x8738, 0xf7df, 0xe7fe, 0xd79d, 0xc7bc,
+ 0x48c4, 0x58e5, 0x6886, 0x78a7, 0x0840, 0x1861, 0x2802, 0x3823,
+ 0xc9cc, 0xd9ed, 0xe98e, 0xf9af, 0x8948, 0x9969, 0xa90a, 0xb92b,
+ 0x5af5, 0x4ad4, 0x7ab7, 0x6a96, 0x1a71, 0x0a50, 0x3a33, 0x2a12,
+ 0xdbfd, 0xcbdc, 0xfbbf, 0xeb9e, 0x9b79, 0x8b58, 0xbb3b, 0xab1a,
+ 0x6ca6, 0x7c87, 0x4ce4, 0x5cc5, 0x2c22, 0x3c03, 0x0c60, 0x1c41,
+ 0xedae, 0xfd8f, 0xcdec, 0xddcd, 0xad2a, 0xbd0b, 0x8d68, 0x9d49,
+ 0x7e97, 0x6eb6, 0x5ed5, 0x4ef4, 0x3e13, 0x2e32, 0x1e51, 0x0e70,
+ 0xff9f, 0xefbe, 0xdfdd, 0xcffc, 0xbf1b, 0xaf3a, 0x9f59, 0x8f78,
+ 0x9188, 0x81a9, 0xb1ca, 0xa1eb, 0xd10c, 0xc12d, 0xf14e, 0xe16f,
+ 0x1080, 0x00a1, 0x30c2, 0x20e3, 0x5004, 0x4025, 0x7046, 0x6067,
+ 0x83b9, 0x9398, 0xa3fb, 0xb3da, 0xc33d, 0xd31c, 0xe37f, 0xf35e,
+ 0x02b1, 0x1290, 0x22f3, 0x32d2, 0x4235, 0x5214, 0x6277, 0x7256,
+ 0xb5ea, 0xa5cb, 0x95a8, 0x8589, 0xf56e, 0xe54f, 0xd52c, 0xc50d,
+ 0x34e2, 0x24c3, 0x14a0, 0x0481, 0x7466, 0x6447, 0x5424, 0x4405,
+ 0xa7db, 0xb7fa, 0x8799, 0x97b8, 0xe75f, 0xf77e, 0xc71d, 0xd73c,
+ 0x26d3, 0x36f2, 0x0691, 0x16b0, 0x6657, 0x7676, 0x4615, 0x5634,
+ 0xd94c, 0xc96d, 0xf90e, 0xe92f, 0x99c8, 0x89e9, 0xb98a, 0xa9ab,
+ 0x5844, 0x4865, 0x7806, 0x6827, 0x18c0, 0x08e1, 0x3882, 0x28a3,
+ 0xcb7d, 0xdb5c, 0xeb3f, 0xfb1e, 0x8bf9, 0x9bd8, 0xabbb, 0xbb9a,
+ 0x4a75, 0x5a54, 0x6a37, 0x7a16, 0x0af1, 0x1ad0, 0x2ab3, 0x3a92,
+ 0xfd2e, 0xed0f, 0xdd6c, 0xcd4d, 0xbdaa, 0xad8b, 0x9de8, 0x8dc9,
+ 0x7c26, 0x6c07, 0x5c64, 0x4c45, 0x3ca2, 0x2c83, 0x1ce0, 0x0cc1,
+ 0xef1f, 0xff3e, 0xcf5d, 0xdf7c, 0xaf9b, 0xbfba, 0x8fd9, 0x9ff8,
+ 0x6e17, 0x7e36, 0x4e55, 0x5e74, 0x2e93, 0x3eb2, 0x0ed1, 0x1ef0
+ };
+
+ const UINT8 *src = reinterpret_cast<const UINT8 *>(data);
+
+ // fetch the current value into a local and rip through the source data
+ UINT16 crc = m_accum.m_raw;
+ while (length-- != 0)
+ crc = (crc << 8) ^ s_table[(crc >> 8) ^ *src++];
+ m_accum.m_raw = crc;
+}
diff --git a/src/lib/util/hashing.h b/src/lib/util/hashing.h
new file mode 100644
index 00000000000..8bb59b9c5cc
--- /dev/null
+++ b/src/lib/util/hashing.h
@@ -0,0 +1,245 @@
+/***************************************************************************
+
+ hashing.h
+
+ Hashing helper classes.
+
+****************************************************************************
+
+ Copyright Aaron Giles
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name 'MAME' nor the names of its contributors may be
+ used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY AARON GILES ''AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL AARON GILES BE LIABLE FOR ANY DIRECT,
+ INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#pragma once
+
+#ifndef __HASHING_H__
+#define __HASHING_H__
+
+#include "osdcore.h"
+#include "astring.h"
+#include "zlib.h"
+#include "md5.h"
+#include "sha1.h"
+
+
+//**************************************************************************
+// TYPE DEFINITIONS
+//**************************************************************************
+
+
+// ======================> SHA-1
+
+// final digest
+struct sha1_t
+{
+ bool operator==(const sha1_t &rhs) const { return memcmp(m_raw, rhs.m_raw, sizeof(m_raw)) == 0; }
+ bool operator!=(const sha1_t &rhs) const { return memcmp(m_raw, rhs.m_raw, sizeof(m_raw)) != 0; }
+ operator UINT8 *() { return m_raw; }
+ const char *as_string(astring &buffer);
+ bool from_string(const char *string);
+ UINT8 m_raw[20];
+ static const sha1_t null;
+};
+
+// creation helper
+class sha1_creator
+{
+public:
+ // construction/destruction
+ sha1_creator() { reset(); }
+
+ // reset
+ void reset() { sha1_init(&m_context); }
+
+ // append data
+ void append(const void *data, UINT32 length) { sha1_update(&m_context, length, reinterpret_cast<const UINT8 *>(data)); }
+
+ // finalize and compute the final digest
+ sha1_t finish()
+ {
+ sha1_t result;
+ sha1_final(&m_context);
+ sha1_digest(&m_context, sizeof(result.m_raw), result.m_raw);
+ return result;
+ }
+
+ // static wrapper to just get the digest from a block
+ static sha1_t simple(const void *data, UINT32 length)
+ {
+ sha1_creator creator;
+ creator.append(data, length);
+ return creator.finish();
+ }
+
+protected:
+ // internal state
+ struct sha1_ctx m_context; // internal context
+};
+
+
+
+// ======================> MD5
+
+// final digest
+struct md5_t
+{
+ bool operator==(const md5_t &rhs) const { return memcmp(m_raw, rhs.m_raw, sizeof(m_raw)) == 0; }
+ bool operator!=(const md5_t &rhs) const { return memcmp(m_raw, rhs.m_raw, sizeof(m_raw)) != 0; }
+ operator UINT8 *() { return m_raw; }
+ const char *as_string(astring &buffer);
+ bool from_string(const char *string);
+ UINT8 m_raw[16];
+ static const md5_t null;
+};
+
+// creation helper
+class md5_creator
+{
+public:
+ // construction/destruction
+ md5_creator() { reset(); }
+
+ // reset
+ void reset() { MD5Init(&m_context); }
+
+ // append data
+ void append(const void *data, UINT32 length) { MD5Update(&m_context, reinterpret_cast<const unsigned char *>(data), length); }
+
+ // finalize and compute the final digest
+ md5_t finish()
+ {
+ md5_t result;
+ MD5Final(result.m_raw, &m_context);
+ return result;
+ }
+
+ // static wrapper to just get the digest from a block
+ static md5_t simple(const void *data, UINT32 length)
+ {
+ md5_creator creator;
+ creator.append(data, length);
+ return creator.finish();
+ }
+
+protected:
+ // internal state
+ struct MD5Context m_context; // internal context
+};
+
+
+
+// ======================> CRC-32
+
+// final digest
+struct crc32_t
+{
+ bool operator==(const crc32_t &rhs) const { return m_raw == rhs.m_raw; }
+ operator UINT32() const { return m_raw; }
+ const char *as_string(astring &buffer);
+ bool from_string(const char *string);
+ UINT32 m_raw;
+ static const crc32_t null;
+};
+
+// creation helper
+class crc32_creator
+{
+public:
+ // construction/destruction
+ crc32_creator() { reset(); }
+
+ // reset
+ void reset() { m_accum.m_raw = 0; }
+
+ // append data
+ void append(const void *data, UINT32 length) { m_accum.m_raw = crc32(m_accum, reinterpret_cast<const Bytef *>(data), length); }
+
+ // finalize and compute the final digest
+ crc32_t finish() { return m_accum; }
+
+ // static wrapper to just get the digest from a block
+ static crc32_t simple(const void *data, UINT32 length)
+ {
+ crc32_creator creator;
+ creator.append(data, length);
+ return creator.finish();
+ }
+
+protected:
+ // internal state
+ crc32_t m_accum; // internal accumulator
+};
+
+
+
+// ======================> CRC-16
+
+// final digest
+struct crc16_t
+{
+ bool operator==(const crc16_t &rhs) const { return m_raw == rhs.m_raw; }
+ operator UINT16() const { return m_raw; }
+ const char *as_string(astring &buffer);
+ bool from_string(const char *string);
+ UINT16 m_raw;
+ static const crc16_t null;
+};
+
+// creation helper
+class crc16_creator
+{
+public:
+ // construction/destruction
+ crc16_creator() { reset(); }
+
+ // reset
+ void reset() { m_accum.m_raw = 0xffff; }
+
+ // append data
+ void append(const void *data, UINT32 length);
+
+ // finalize and compute the final digest
+ crc16_t finish() { return m_accum; }
+
+ // static wrapper to just get the digest from a block
+ static crc16_t simple(const void *data, UINT32 length)
+ {
+ crc16_creator creator;
+ creator.append(data, length);
+ return creator.finish();
+ }
+
+protected:
+ // internal state
+ crc16_t m_accum; // internal accumulator
+};
+
+
+#endif // __HASHING_H__
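
A brief usage sketch of the creator classes declared above (the input data
and variable names are arbitrary):

    #include <stdio.h>
    #include "hashing.h"

    static void hash_example()
    {
        static const char data[] = "hello world";
        astring buffer;

        // one-shot digest of a complete block
        sha1_t sha = sha1_creator::simple(data, sizeof(data) - 1);
        printf("sha1  = %s\n", sha.as_string(buffer));

        // incremental CRC-32 over two pieces of the same data
        crc32_creator crc;
        crc.append(data, 5);
        crc.append(data + 5, sizeof(data) - 1 - 5);
        crc32_t result = crc.finish();
        printf("crc32 = %s\n", result.as_string(buffer));
    }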
diff --git a/src/lib/util/huffman.c b/src/lib/util/huffman.c
index 3b9c4ea52b8..b672a7a0d89 100644
--- a/src/lib/util/huffman.c
+++ b/src/lib/util/huffman.c
@@ -2,7 +2,7 @@
huffman.c
- Video compression and decompression helpers.
+ Static Huffman compression and decompression helpers.
****************************************************************************
@@ -127,1569 +127,636 @@
#include <stdlib.h>
+#include "coretmpl.h"
#include "huffman.h"
-/***************************************************************************
- CONSTANTS
-***************************************************************************/
-
-#define HUFFMAN_CODES 256
-#define HUFFMAN_DELTARLE_CODES (HUFFMAN_CODES + 16)
-
-#define MAX_HUFFMAN_CODES (HUFFMAN_DELTARLE_CODES)
-#define MAX_HUFFMAN_NODES (MAX_HUFFMAN_CODES + MAX_HUFFMAN_CODES)
+//**************************************************************************
+// MACROS
+//**************************************************************************
+#define MAKE_LOOKUP(code,bits) (((code) << 5) | ((bits) & 0x1f))
-/***************************************************************************
- MACROS
-***************************************************************************/
-#define MAKE_LOOKUP(code,bits) (((code) << 6) | ((bits) & 0x1f))
-#define LOOKUP_CODE(val) ((val) >> 6)
-#define LOOKUP_BITS(val) ((val) & 0x1f)
+//**************************************************************************
+// IMPLEMENTATION
+//**************************************************************************
+//-------------------------------------------------
+// huffman_context_base - create an encoding/
+// decoding context
+//-------------------------------------------------
-
-/***************************************************************************
- TYPE DEFINITIONS
-***************************************************************************/
-
-typedef struct _bit_buffer bit_buffer;
-struct _bit_buffer
+huffman_context_base::huffman_context_base(int numcodes, int maxbits, lookup_value *lookup, UINT32 *histo, node_t *nodes)
+ : m_numcodes(numcodes),
+ m_maxbits(maxbits),
+ m_prevdata(0),
+ m_rleremaining(0),
+ m_lookup(lookup),
+ m_datahisto(histo),
+ m_huffnode(nodes)
{
- UINT32 buffer; /* current bit accumulator */
- int bits; /* number of bits in the accumulator */
- union
- {
- const UINT8 * read; /* read pointer */
- UINT8 * write; /* write pointer */
- } data;
- UINT32 doffset; /* byte offset within the data */
- UINT32 dlength; /* length of the data */
- int overflow; /* flag: true if we read/wrote past the end */
-};
-
-
-typedef struct _huffman_node huffman_node;
-struct _huffman_node
-{
- huffman_node * parent; /* pointer to parent node */
- UINT32 count; /* number of hits on this node */
- UINT32 weight; /* assigned weight of this node */
- UINT32 bits; /* bits used to encode the node */
- UINT8 numbits; /* number of bits needed for this node */
-};
-
-
-struct _huffman_context
-{
- UINT8 maxbits; /* maximum bits per code */
- UINT8 lookupdirty; /* TRUE if the lookup table is dirty */
- UINT8 prevdata; /* value of the previous data (for delta-RLE encoding) */
- UINT32 datahisto[MAX_HUFFMAN_CODES]; /* histogram of data values */
- int rleremaining; /* number of RLE bytes remaining (for delta-RLE encoding) */
- huffman_node huffnode[MAX_HUFFMAN_NODES]; /* array of nodes */
- huffman_lookup_value * lookup; /* pointer to the lookup table */
-};
-
-
-
-/***************************************************************************
- PROTOTYPES
-***************************************************************************/
-
-static huffman_error huffman_deltarle_decode_data_interleaved_0102(huffman_context **contexts, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength);
-
-static huffman_error import_tree(huffman_context *context, const UINT8 *source, UINT32 slength, UINT32 *actlength, UINT32 numcodes);
-static huffman_error export_tree(huffman_context *context, UINT8 *dest, UINT32 dlength, UINT32 *actlength, UINT32 numcodes);
-static void write_rle_tree_bits(bit_buffer *bitbuf, int value, int repcount, int numbits);
-static int CLIB_DECL tree_node_compare(const void *item1, const void *item2);
-static huffman_error compute_optimal_tree(huffman_context *context, const UINT32 *datahisto, UINT32 numcodes);
-static int huffman_build_tree(huffman_context *context, const UINT32 *datahisto, UINT32 totaldata, UINT32 totalweight, UINT32 numcodes);
-static huffman_error assign_canonical_codes(huffman_context *context, UINT32 numcodes);
-static huffman_error build_lookup_table(huffman_context *context, UINT32 numcodes);
-
-
-
-/***************************************************************************
- INLINE FUNCTIONS
-***************************************************************************/
-
-/*-------------------------------------------------
- bit_buffer_write_init - initialize a bit
- buffer for writing
--------------------------------------------------*/
-
-INLINE void bit_buffer_write_init(bit_buffer *bitbuf, UINT8 *data, UINT32 dlength)
-{
- /* fill in the basic data structure */
- bitbuf->buffer = 0;
- bitbuf->bits = 0;
- bitbuf->data.write = data;
- bitbuf->doffset = 0;
- bitbuf->dlength = dlength;
- bitbuf->overflow = FALSE;
-}
-
-
-/*-------------------------------------------------
- bit_buffer_write - write 'numbits' to the
- bit buffer, assuming that 'newbits' is right-
- justified
--------------------------------------------------*/
-
-INLINE void bit_buffer_write(bit_buffer *bitbuf, UINT32 newbits, int numbits)
-{
- /* flush the buffer if we're going to overflow it */
- if (bitbuf->bits + numbits > 32)
- while (bitbuf->bits >= 8)
- {
- if (bitbuf->doffset < bitbuf->dlength)
- bitbuf->data.write[bitbuf->doffset] = bitbuf->buffer >> 24;
- else
- bitbuf->overflow = TRUE;
- bitbuf->doffset++;
- bitbuf->buffer <<= 8;
- bitbuf->bits -= 8;
- }
-
- /* shift the bits to the top */
- newbits <<= 32 - numbits;
-
- /* now shift it down to account for the number of bits we already have and OR them in */
- bitbuf->buffer |= newbits >> bitbuf->bits;
- bitbuf->bits += numbits;
-}
-
-
-/*-------------------------------------------------
- bit_buffer_flush - flush any bits in the write
- buffer and return the final data offset
--------------------------------------------------*/
-
-INLINE UINT32 bit_buffer_flush(bit_buffer *bitbuf)
-{
- while (bitbuf->bits > 0)
- {
- if (bitbuf->doffset < bitbuf->dlength)
- bitbuf->data.write[bitbuf->doffset] = bitbuf->buffer >> 24;
- else
- bitbuf->overflow = TRUE;
- bitbuf->doffset++;
- bitbuf->buffer <<= 8;
- bitbuf->bits -= 8;
- }
- return bitbuf->doffset;
-}
-
-
-/*-------------------------------------------------
- bit_buffer_read_init - initialize a bit
- buffer for reading
--------------------------------------------------*/
-
-INLINE void bit_buffer_read_init(bit_buffer *bitbuf, const UINT8 *data, UINT32 dlength)
-{
- /* fill in the basic data structure */
- bitbuf->buffer = 0;
- bitbuf->bits = 0;
- bitbuf->data.read = data;
- bitbuf->doffset = 0;
- bitbuf->dlength = dlength;
- bitbuf->overflow = FALSE;
-}
-
-
-/*-------------------------------------------------
- bit_buffer_read - read 'numbits' bits from
- the buffer, returning them right-justified
--------------------------------------------------*/
-
-INLINE UINT32 bit_buffer_read(bit_buffer *bitbuf, int numbits)
-{
- UINT32 result;
-
- /* fetch data if we need more */
- if (numbits > bitbuf->bits)
- {
- while (bitbuf->bits <= 24)
- {
- if (bitbuf->doffset < bitbuf->dlength)
- bitbuf->buffer |= bitbuf->data.read[bitbuf->doffset] << (24 - bitbuf->bits);
- bitbuf->doffset++;
- bitbuf->bits += 8;
- }
- if (numbits > bitbuf->bits)
- bitbuf->overflow = TRUE;
- }
-
- /* return the data */
- result = bitbuf->buffer >> (32 - numbits);
- bitbuf->buffer <<= numbits;
- bitbuf->bits -= numbits;
- return result;
-}
-
-
-/*-------------------------------------------------
- bit_buffer_peek - peek ahead and return
- 'numbits' bits from the buffer, returning
- them right-justified
--------------------------------------------------*/
-
-INLINE UINT32 bit_buffer_peek(bit_buffer *bitbuf, int numbits)
-{
- /* fetch data if we need more */
- if (numbits > bitbuf->bits)
- {
- while (bitbuf->bits <= 24)
- {
- if (bitbuf->doffset < bitbuf->dlength)
- bitbuf->buffer |= bitbuf->data.read[bitbuf->doffset] << (24 - bitbuf->bits);
- bitbuf->doffset++;
- bitbuf->bits += 8;
- }
- if (numbits > bitbuf->bits)
- bitbuf->overflow = TRUE;
- }
-
- /* return the data */
- return bitbuf->buffer >> (32 - numbits);
-}
-
-
-/*-------------------------------------------------
- bit_buffer_remove - remove 'numbits' bits
- from the bit buffer; this presupposes that
- at least 'numbits' are present
--------------------------------------------------*/
-
-INLINE void bit_buffer_remove(bit_buffer *bitbuf, int numbits)
-{
- bitbuf->buffer <<= numbits;
- bitbuf->bits -= numbits;
-}
-
-
-/*-------------------------------------------------
- bit_buffer_read_offset - return the current
- rounded byte reading offset
--------------------------------------------------*/
-
-INLINE UINT32 bit_buffer_read_offset(bit_buffer *bitbuf)
-{
- UINT32 result = bitbuf->doffset;
- int bits = bitbuf->bits;
- while (bits >= 8)
- {
- result--;
- bits -= 8;
- }
- return result;
-}
-
-
-/*-------------------------------------------------
- code_to_rlecount - number of RLE repetitions
- encoded in a given byte
--------------------------------------------------*/
-
-INLINE int code_to_rlecount(int code)
-{
- if (code == 0x00)
- return 1;
- if (code <= 0x107)
- return 8 + (code - 0x100);
- return 16 << (code - 0x108);
-}
-
-
-/*-------------------------------------------------
- rlecount_to_byte - return a byte encoding
- the maximum RLE count less than or equal to
- the provided amount
--------------------------------------------------*/
-
-INLINE int rlecount_to_code(int rlecount)
-{
- if (rlecount >= 2048)
- return 0x10f;
- if (rlecount >= 1024)
- return 0x10e;
- if (rlecount >= 512)
- return 0x10d;
- if (rlecount >= 256)
- return 0x10c;
- if (rlecount >= 128)
- return 0x10b;
- if (rlecount >= 64)
- return 0x10a;
- if (rlecount >= 32)
- return 0x109;
- if (rlecount >= 16)
- return 0x108;
- if (rlecount >= 8)
- return 0x100 + (rlecount - 8);
- return 0x00;
-}
-
-
-
-/***************************************************************************
- IMPLEMENTATION
-***************************************************************************/
-
-/*-------------------------------------------------
- huffman_create_context - create an encoding/
- decoding context
--------------------------------------------------*/
-
-huffman_error huffman_create_context(huffman_context **context, int maxbits)
-{
- /* limit to 24 bits */
+ // limit to 24 bits
if (maxbits > 24)
- return HUFFERR_TOO_MANY_BITS;
-
- /* allocate a context */
- *context = (huffman_context *)malloc(sizeof(**context));
- if (*context == NULL)
- return HUFFERR_OUT_OF_MEMORY;
-
- /* set the info */
- memset(*context, 0, sizeof(**context));
- (*context)->maxbits = maxbits;
- (*context)->lookupdirty = TRUE;
-
- return HUFFERR_NONE;
+ throw HUFFERR_TOO_MANY_BITS;
}
-/*-------------------------------------------------
- huffman_free_context - free an encoding/
- decoding context
--------------------------------------------------*/
+//-------------------------------------------------
+// import_tree_rle - import an RLE-encoded
+// huffman tree from a source data stream
+//-------------------------------------------------
-void huffman_free_context(huffman_context *context)
+huffman_error huffman_context_base::import_tree_rle(bitstream_in &bitbuf)
{
- if (context->lookup != NULL)
- free(context->lookup);
- free(context);
-}
-
-
-/*-------------------------------------------------
- huffman_import_tree - import a huffman tree
- from a source data stream
--------------------------------------------------*/
-
-huffman_error huffman_import_tree(huffman_context *context, const UINT8 *source, UINT32 slength, UINT32 *actlength)
-{
- return import_tree(context, source, slength, actlength, HUFFMAN_CODES);
-}
-
-
-/*-------------------------------------------------
- huffman_export_tree - export a huffman tree
- to a target data stream
--------------------------------------------------*/
-
-huffman_error huffman_export_tree(huffman_context *context, UINT8 *dest, UINT32 dlength, UINT32 *actlength)
-{
- return export_tree(context, dest, dlength, actlength, HUFFMAN_CODES);
-}
-
-
-/*-------------------------------------------------
- huffman_deltarle_import_tree - import a
- huffman tree from a source data stream for
- delta-RLE encoded data
--------------------------------------------------*/
-
-huffman_error huffman_deltarle_import_tree(huffman_context *context, const UINT8 *source, UINT32 slength, UINT32 *actlength)
-{
- return import_tree(context, source, slength, actlength, HUFFMAN_DELTARLE_CODES);
-}
-
-
-/*-------------------------------------------------
- huffman__deltarle_export_tree - export a
- huffman tree to a target data stream for
- delta-RLE encoded data
--------------------------------------------------*/
-
-huffman_error huffman_deltarle_export_tree(huffman_context *context, UINT8 *dest, UINT32 dlength, UINT32 *actlength)
-{
- return export_tree(context, dest, dlength, actlength, HUFFMAN_DELTARLE_CODES);
-}
-
-
-/*-------------------------------------------------
- huffman_compute_tree - compute an optimal
- huffman tree for the given source data
--------------------------------------------------*/
-
-huffman_error huffman_compute_tree(huffman_context *context, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor)
-{
- return huffman_compute_tree_interleaved(1, &context, source, swidth, sheight, sstride, sxor);
-}
-
-huffman_error huffman_compute_tree_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor)
-{
- UINT32 sx, sy, ctxnum;
- huffman_error error;
-
- /* initialize all nodes */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
- {
- huffman_context *context = contexts[ctxnum];
- memset(context->datahisto, 0, sizeof(context->datahisto));
- }
-
- /* iterate over "height" */
- for (sy = 0; sy < sheight; sy++)
- {
- /* iterate over "width" */
- for (sx = 0; sx < swidth; )
- {
- /* iterate over contexts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++, sx++)
- {
- huffman_context *context = contexts[ctxnum];
- context->datahisto[source[sx ^ sxor]]++;
- }
- }
-
- /* advance to the next row */
- source += sstride;
- }
-
- /* compute optimal trees for each */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
- {
- huffman_context *context = contexts[ctxnum];
- error = compute_optimal_tree(context, context->datahisto, HUFFMAN_CODES);
- if (error != HUFFERR_NONE)
- return error;
- }
- return HUFFERR_NONE;
-}
-
-
-/*-------------------------------------------------
- huffman_deltarle_compute_tree - compute an
- optimal huffman tree for the given source
- data, with pre-encoding as delta-RLE
--------------------------------------------------*/
-
-huffman_error huffman_deltarle_compute_tree(huffman_context *context, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor)
-{
- return huffman_deltarle_compute_tree_interleaved(1, &context, source, swidth, sheight, sstride, sxor);
-}
-
-huffman_error huffman_deltarle_compute_tree_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor)
-{
- UINT32 sx, sy, ctxnum;
- huffman_error error;
+ // bits per entry depends on the maxbits
+ int numbits;
+ if (m_maxbits >= 16)
+ numbits = 5;
+ else if (m_maxbits >= 8)
+ numbits = 4;
+ else
+ numbits = 3;
- /* initialize all nodes */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
+ // loop until we read all the nodes
+ int curnode;
+ for (curnode = 0; curnode < m_numcodes; )
{
- huffman_context *context = contexts[ctxnum];
- memset(context->datahisto, 0, sizeof(context->datahisto));
- context->prevdata = 0;
- }
+ // a non-one value is just raw
+ int nodebits = bitbuf.read(numbits);
+ if (nodebits != 1)
+ m_huffnode[curnode++].m_numbits = nodebits;
- /* iterate over "height" */
- for (sy = 0; sy < sheight; sy++)
- {
- /* reset RLE counts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
+ // a one value is an escape code
+ else
{
- huffman_context *context = contexts[ctxnum];
- context->rleremaining = 0;
- }
+ // a double 1 is just a single 1
+ nodebits = bitbuf.read(numbits);
+ if (nodebits == 1)
+ m_huffnode[curnode++].m_numbits = nodebits;
- /* iterate over "width" */
- for (sx = 0; sx < swidth; )
- {
- /* iterate over contexts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++, sx++)
+			// otherwise, read one more value, which is the repeat count
+ else
{
- huffman_context *context = contexts[ctxnum];
- UINT8 newdata, delta;
-
- /* if still counting RLE, do nothing */
- if (context->rleremaining != 0)
- {
- context->rleremaining--;
- continue;
- }
-
- /* fetch new data and compute the delta */
- newdata = source[sx ^ sxor];
- delta = newdata - context->prevdata;
- context->prevdata = newdata;
-
- /* 0 deltas scan forward for a count */
- if (delta == 0)
- {
- int zerocount = 1;
- int rlecode;
- UINT32 scan;
-
- /* count the number of consecutive values */
- for (scan = sx + 1; scan < swidth; scan++)
- if (contexts[scan % numcontexts] == context)
- {
- if (newdata == source[scan ^ sxor])
- zerocount++;
- else
- break;
- }
-
- /* if we hit the end of row, maximize the count */
- if (scan >= swidth && zerocount >= 8)
- zerocount = 100000;
-
- /* encode the maximal count we can */
- rlecode = rlecount_to_code(zerocount);
- context->datahisto[rlecode]++;
-
- /* set up the remaining count */
- context->rleremaining = code_to_rlecount(rlecode) - 1;
- }
- else
- {
- /* encode the actual delta */
- context->datahisto[delta]++;
- }
+ int repcount = bitbuf.read(numbits) + 3;
+ while (repcount--)
+ m_huffnode[curnode++].m_numbits = nodebits;
}
}
-
- /* advance to the next row */
- source += sstride;
}
- /* compute optimal trees for each */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
- {
- huffman_context *context = contexts[ctxnum];
- error = compute_optimal_tree(context, context->datahisto, HUFFMAN_DELTARLE_CODES);
- if (error != HUFFERR_NONE)
- return error;
- }
- return HUFFERR_NONE;
-}
-
-
-/*-------------------------------------------------
- huffman_encode_data - encode data using the
- given tree
--------------------------------------------------*/
-
-huffman_error huffman_encode_data(huffman_context *context, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 dlength, UINT32 *actlength)
-{
- return huffman_encode_data_interleaved(1, &context, source, swidth, sheight, sstride, sxor, dest, dlength, actlength);
-}
-
-huffman_error huffman_encode_data_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 dlength, UINT32 *actlength)
-{
- UINT32 sx, sy, ctxnum;
- bit_buffer bitbuf;
-
- /* initialize the output buffer */
- bit_buffer_write_init(&bitbuf, dest, dlength);
-
- /* iterate over "height" */
- for (sy = 0; sy < sheight; sy++)
- {
- /* iterate over "width" */
- for (sx = 0; sx < swidth; )
- {
- /* iterate over contexts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++, sx++)
- {
- huffman_context *context = contexts[ctxnum];
- huffman_node *node = &context->huffnode[source[sx ^ sxor]];
- bit_buffer_write(&bitbuf, node->bits, node->numbits);
- }
- }
+ // make sure we ended up with the right number
+ if (curnode != m_numcodes)
+ return HUFFERR_INVALID_DATA;
- /* advance to the next row */
- source += sstride;
- }
+ // assign canonical codes for all nodes based on their code lengths
+ huffman_error error = assign_canonical_codes();
+ if (error != HUFFERR_NONE)
+ return error;
+
+ // build the lookup table
+ build_lookup_table();
- /* flush and return a status */
- *actlength = bit_buffer_flush(&bitbuf);
- return bitbuf.overflow ? HUFFERR_OUTPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
+ // determine final input length and report errors
+ return bitbuf.overflow() ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
}
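
For illustration, the decode path built on this API looks roughly like the sketch below; the helper name and the buffer parameters (compressed, complength, output, outlength) are hypothetical and not part of this change.

// Hypothetical caller: expand a block whose tree was stored with
// export_tree_rle ahead of the encoded data.
static huffman_error decode_rle_block(const UINT8 *compressed, UINT32 complength,
                                      UINT8 *output, UINT32 outlength)
{
	huffman_decoder<256, 16> decoder;            // 8-bit symbols, 16-bit max code length
	bitstream_in bitbuf(compressed, complength);

	// read back the stored tree, then pull one code per output byte
	huffman_error err = decoder.import_tree_rle(bitbuf);
	if (err != HUFFERR_NONE)
		return err;
	for (UINT32 offs = 0; offs < outlength; offs++)
		output[offs] = decoder.decode_one(bitbuf);
	return bitbuf.overflow() ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
}
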
-/*-------------------------------------------------
- huffman_deltarle_encode_data - encode data
- using the given tree with delta-RLE
- pre-encoding
--------------------------------------------------*/
+//-------------------------------------------------
+// export_tree_rle - export a huffman tree to an
+// RLE target data stream
+//-------------------------------------------------
-huffman_error huffman_deltarle_encode_data(huffman_context *context, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 dlength, UINT32 *actlength)
+huffman_error huffman_context_base::export_tree_rle(bitstream_out &bitbuf)
{
- return huffman_deltarle_encode_data_interleaved(1, &context, source, swidth, sheight, sstride, sxor, dest, dlength, actlength);
-}
-
-huffman_error huffman_deltarle_encode_data_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 dlength, UINT32 *actlength)
-{
- UINT32 sx, sy, ctxnum;
- bit_buffer bitbuf;
-
- /* initialize the output buffer */
- bit_buffer_write_init(&bitbuf, dest, dlength);
-
- /* initialize the contexts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
- {
- huffman_context *context = contexts[ctxnum];
- context->prevdata = 0;
- }
-
- /* iterate over "height" */
- for (sy = 0; sy < sheight; sy++)
- {
- /* reset RLE counts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
- {
- huffman_context *context = contexts[ctxnum];
- context->rleremaining = 0;
- }
-
- /* iterate over "width" */
- for (sx = 0; sx < swidth; )
- {
- /* iterate over contexts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++, sx++)
- {
- huffman_context *context = contexts[ctxnum];
- UINT8 newdata, delta;
- huffman_node *node;
-
- /* if still counting RLE, do nothing */
- if (context->rleremaining != 0)
- {
- context->rleremaining--;
- continue;
- }
-
- /* fetch new data and compute the delta */
- newdata = source[sx ^ sxor];
- delta = newdata - context->prevdata;
- context->prevdata = newdata;
-
- /* 0 deltas scan forward for a count */
- if (delta == 0)
- {
- int zerocount = 1;
- int rlecode;
- UINT32 scan;
-
- /* count the number of consecutive values */
- for (scan = sx + 1; scan < swidth; scan++)
- if (contexts[scan % numcontexts] == context)
- {
- if (newdata == source[scan ^ sxor])
- zerocount++;
- else
- break;
- }
-
- /* if we hit the end of row, maximize the count */
- if (scan >= swidth && zerocount >= 8)
- zerocount = 100000;
-
- /* encode the maximal count we can */
- rlecode = rlecount_to_code(zerocount);
- node = &context->huffnode[rlecode];
- bit_buffer_write(&bitbuf, node->bits, node->numbits);
-
- /* set up the remaining count */
- context->rleremaining = code_to_rlecount(rlecode) - 1;
- }
- else
- {
- /* encode the actual delta */
- node = &context->huffnode[delta];
- bit_buffer_write(&bitbuf, node->bits, node->numbits);
- }
- }
- }
-
- /* advance to the next row */
- source += sstride;
- }
-
- /* flush and return a status */
- *actlength = bit_buffer_flush(&bitbuf);
- return bitbuf.overflow ? HUFFERR_OUTPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
-}
-
-
-/*-------------------------------------------------
- huffman_decode_data - decode data using the
- given tree
--------------------------------------------------*/
+ // bits per entry depends on the maxbits
+ int numbits;
+ if (m_maxbits >= 16)
+ numbits = 5;
+ else if (m_maxbits >= 8)
+ numbits = 4;
+ else
+ numbits = 3;
-huffman_error huffman_decode_data(huffman_context *context, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength)
-{
- const huffman_lookup_value *table;
- int maxbits = context->maxbits;
- huffman_error error;
- bit_buffer bitbuf;
- UINT32 dx, dy;
-
- /* regenerate the lookup table if necessary */
- if (context->lookupdirty)
+ // RLE encode the lengths
+ int lastval = ~0;
+ int repcount = 0;
+ for (int curcode = 0; curcode < m_numcodes; curcode++)
{
- error = build_lookup_table(context, HUFFMAN_CODES);
- if (error != HUFFERR_NONE)
- return error;
- }
- table = context->lookup;
-
- /* initialize our bit buffer */
- bit_buffer_read_init(&bitbuf, source, slength);
+ // if we match the previous value, just bump the repcount
+ int newval = m_huffnode[curcode].m_numbits;
+ if (newval == lastval)
+ repcount++;
- /* iterate over "height" */
- for (dy = 0; dy < dheight; dy++)
- {
- /* iterate over "width" */
- for (dx = 0; dx < dwidth; dx++)
+ // otherwise, we need to flush the previous repeats
+ else
{
- huffman_lookup_value lookup;
- UINT32 bits;
-
- /* peek ahead to get maxbits worth of data */
- bits = bit_buffer_peek(&bitbuf, maxbits);
-
- /* look it up, then remove the actual number of bits for this code */
- lookup = table[bits];
- bit_buffer_remove(&bitbuf, LOOKUP_BITS(lookup));
-
- /* store the upper byte */
- dest[dx ^ dxor] = LOOKUP_CODE(lookup);
+ if (repcount != 0)
+ write_rle_tree_bits(bitbuf, lastval, repcount, numbits);
+ lastval = newval;
+ repcount = 1;
}
-
- /* advance to the next row */
- dest += dstride;
}
- /* determine the actual length and indicate overflow */
- *actlength = bit_buffer_read_offset(&bitbuf);
- return bitbuf.overflow ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
+ // flush the last value
+ write_rle_tree_bits(bitbuf, lastval, repcount, numbits);
+ return bitbuf.overflow() ? HUFFERR_OUTPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
}
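
The matching encode path mirrors huffman_8bit_encoder::encode further down in this file, but stores the tree in the RLE form above; again the helper name and buffers are hypothetical.

// Hypothetical caller: histogram a buffer, build a tree, store it with
// export_tree_rle, then emit one code per source byte.
static huffman_error encode_rle_block(const UINT8 *source, UINT32 slength,
                                      UINT8 *dest, UINT32 dlength, UINT32 &complength)
{
	// accumulate the histogram and compute an optimal tree from it
	huffman_encoder<256, 16> encoder;
	encoder.histo_reset();
	for (UINT32 offs = 0; offs < slength; offs++)
		encoder.histo_one(source[offs]);
	huffman_error err = encoder.compute_tree_from_histo();
	if (err != HUFFERR_NONE)
		return err;

	// write the tree, then the data, and report the compressed size
	bitstream_out bitbuf(dest, dlength);
	err = encoder.export_tree_rle(bitbuf);
	if (err != HUFFERR_NONE)
		return err;
	for (UINT32 offs = 0; offs < slength; offs++)
		encoder.encode_one(bitbuf, source[offs]);
	complength = bitbuf.flush();
	return bitbuf.overflow() ? HUFFERR_OUTPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
}
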
-/*-------------------------------------------------
- huffman_decode_data_interleaved - decode
- interleaved data using multiple contexts
--------------------------------------------------*/
+//-------------------------------------------------
+// import_tree_huffman - import a huffman-encoded
+// huffman tree from a source data stream
+//-------------------------------------------------
-huffman_error huffman_decode_data_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength)
+huffman_error huffman_context_base::import_tree_huffman(bitstream_in &bitbuf)
{
- UINT32 dx, dy, ctxnum;
- huffman_error error;
- bit_buffer bitbuf;
-
- /* regenerate the lookup tables if necessary */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
+ // start by parsing the lengths for the small tree
+ huffman_decoder<24, 6> smallhuff;
+ smallhuff.m_huffnode[0].m_numbits = bitbuf.read(3);
+ int start = bitbuf.read(3) + 1;
+ int count = 0;
+ for (int index = 1; index < 24; index++)
{
- huffman_context *context = contexts[ctxnum];
- if (context->lookupdirty)
+ if (index < start || count == 7)
+ smallhuff.m_huffnode[index].m_numbits = 0;
+ else
{
- error = build_lookup_table(context, HUFFMAN_CODES);
- if (error != HUFFERR_NONE)
- return error;
+ count = bitbuf.read(3);
+ smallhuff.m_huffnode[index].m_numbits = (count == 7) ? 0 : count;
}
}
-
- /* initialize our bit buffer */
- bit_buffer_read_init(&bitbuf, source, slength);
-
- /* iterate over "height" */
- for (dy = 0; dy < dheight; dy++)
- {
- /* iterate over "width" */
- for (dx = 0; dx < dwidth; )
+
+ // then regenerate the tree
+ huffman_error error = smallhuff.assign_canonical_codes();
+ if (error != HUFFERR_NONE)
+ return error;
+ smallhuff.build_lookup_table();
+
+ // determine the maximum length of an RLE count
+ UINT32 temp = m_numcodes - 9;
+ UINT8 rlefullbits = 0;
+ while (temp != 0)
+ temp >>= 1, rlefullbits++;
+
+ // now process the rest of the data
+ int last = 0;
+ int curcode;
+ for (curcode = 0; curcode < m_numcodes; )
+ {
+ int value = smallhuff.decode_one(bitbuf);
+ if (value != 0)
+ m_huffnode[curcode++].m_numbits = last = value - 1;
+ else
{
- /* iterate over contexts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++, dx++)
- {
- huffman_context *context = contexts[ctxnum];
- huffman_lookup_value lookup;
- UINT32 bits;
-
- /* peek ahead to get maxbits worth of data */
- bits = bit_buffer_peek(&bitbuf, context->maxbits);
-
- /* look it up, then remove the actual number of bits for this code */
- lookup = context->lookup[bits];
- bit_buffer_remove(&bitbuf, LOOKUP_BITS(lookup));
-
- /* store the upper byte */
- dest[dx ^ dxor] = LOOKUP_CODE(lookup);
- }
+ int count = bitbuf.read(3) + 2;
+ if (count == 7+2)
+ count += bitbuf.read(rlefullbits);
+ for ( ; count != 0 && curcode < m_numcodes; count--)
+ m_huffnode[curcode++].m_numbits = last;
}
-
- /* advance to the next row */
- dest += dstride;
- }
-
- /* determine the actual length and indicate overflow */
- *actlength = bit_buffer_read_offset(&bitbuf);
- return bitbuf.overflow ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
-}
-
-
-/*-------------------------------------------------
- huffman_deltarle_decode_data - decode data
- using the given tree with delta-RLE
- post-decoding
--------------------------------------------------*/
-
-huffman_error huffman_deltarle_decode_data(huffman_context *context, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength)
-{
- const huffman_lookup_value *table;
- int maxbits = context->maxbits;
- UINT32 rleremaining = 0;
- huffman_error error;
- UINT8 prevdata = 0;
- bit_buffer bitbuf;
- UINT32 dx, dy;
-
- /* regenerate the lookup table if necessary */
- if (context->lookupdirty)
- {
- error = build_lookup_table(context, HUFFMAN_DELTARLE_CODES);
- if (error != HUFFERR_NONE)
- return error;
}
- table = context->lookup;
-
- /* initialize our bit buffer */
- bit_buffer_read_init(&bitbuf, source, slength);
-
- /* iterate over "height" */
- for (dy = 0; dy < dheight; dy++)
- {
- /* reset RLE counts */
- rleremaining = 0;
-
- /* iterate over "width" */
- for (dx = 0; dx < dwidth; dx++)
- {
- huffman_lookup_value lookup;
- UINT32 bits;
- int data;
- /* if we have RLE remaining, just store that */
- if (rleremaining != 0)
- {
- rleremaining--;
- dest[dx ^ dxor] = prevdata;
- continue;
- }
-
- /* peek ahead to get maxbits worth of data */
- bits = bit_buffer_peek(&bitbuf, maxbits);
-
- /* look it up, then remove the actual number of bits for this code */
- lookup = table[bits];
- bit_buffer_remove(&bitbuf, LOOKUP_BITS(lookup));
-
- /* compute the data and handle RLE decoding */
- data = LOOKUP_CODE(lookup);
-
- /* if not an RLE special, just add to the previous; otherwise, start counting RLE */
- if (data < 0x100)
- prevdata += (UINT8)data;
- else
- rleremaining = code_to_rlecount(data) - 1;
+ // make sure we ended up with the right number
+ if (curcode != m_numcodes)
+ return HUFFERR_INVALID_DATA;
- /* store the updated data value */
- dest[dx ^ dxor] = prevdata;
- }
+ // assign canonical codes for all nodes based on their code lengths
+ error = assign_canonical_codes();
+ if (error != HUFFERR_NONE)
+ return error;
- /* advance to the next row */
- dest += dstride;
- }
+ // build the lookup table
+ build_lookup_table();
- /* determine the actual length and indicate overflow */
- *actlength = bit_buffer_read_offset(&bitbuf);
- return bitbuf.overflow ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
+ // determine final input length and report errors
+ return bitbuf.overflow() ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
}
-/*-------------------------------------------------
- huffman_deltarle_decode_data_interleaved -
- decode data using multiple contexts and
- delta-RLE post-decoding
--------------------------------------------------*/
+//-------------------------------------------------
+// export_tree_huffman - export a huffman tree to
+// a huffman target data stream
+//-------------------------------------------------
-huffman_error huffman_deltarle_decode_data_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength)
+huffman_error huffman_context_base::export_tree_huffman(bitstream_out &bitbuf)
{
- UINT32 dx, dy, ctxnum;
- huffman_error error;
- bit_buffer bitbuf;
-
- /* fast case the A/V Y/Cb/Y/Cr case */
- if (numcontexts == 4 && contexts[0] == contexts[2] && contexts[0] != contexts[1] && contexts[1] != contexts[3] &&
- contexts[0]->maxbits == contexts[1]->maxbits && contexts[0]->maxbits == contexts[3]->maxbits)
- return huffman_deltarle_decode_data_interleaved_0102(contexts, source, slength, dest, dwidth, dheight, dstride, dxor, actlength);
+ // first RLE compress the lengths of all the nodes
+ dynamic_array<UINT8> rle_data(m_numcodes);
+ UINT8 *dest = rle_data;
+ dynamic_array<UINT16> rle_lengths(m_numcodes/3);
+ UINT16 *lengths = rle_lengths;
+ int last = ~0;
+ int repcount = 0;
+
+ // use a small huffman context to create a tree (ignoring RLE lengths)
+ huffman_encoder<24, 6> smallhuff;
- /* regenerate the lookup tables if necessary */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
+ // RLE-compress the lengths
+ for (int curcode = 0; curcode < m_numcodes; curcode++)
{
- huffman_context *context = contexts[ctxnum];
- if (context->lookupdirty)
+ // if this is the end of a repeat, flush any accumulation
+ int newval = m_huffnode[curcode].m_numbits;
+ if (newval != last && repcount > 0)
{
- error = build_lookup_table(context, HUFFMAN_DELTARLE_CODES);
- if (error != HUFFERR_NONE)
- return error;
+ if (repcount == 1)
+ smallhuff.histo_one(*dest++ = last + 1);
+ else
+ smallhuff.histo_one(*dest++ = 0), *lengths++ = repcount - 2;
}
- context->prevdata = 0;
- }
-
- /* initialize our bit buffer */
- bit_buffer_read_init(&bitbuf, source, slength);
-
- /* iterate over "height" */
- for (dy = 0; dy < dheight; dy++)
- {
- /* reset RLE counts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++)
- {
- huffman_context *context = contexts[ctxnum];
- context->rleremaining = 0;
- }
-
- /* iterate over "width" */
- for (dx = 0; dx < dwidth; )
+
+ // if same as last, just track repeats
+ if (newval == last)
+ repcount++;
+
+ // otherwise, write it and start a new run
+ else
{
- /* iterate over contexts */
- for (ctxnum = 0; ctxnum < numcontexts; ctxnum++, dx++)
- {
- huffman_context *context = contexts[ctxnum];
- huffman_lookup_value lookup;
- UINT32 bits;
- int data;
-
- /* if we have RLE remaining, just store that */
- if (context->rleremaining != 0)
- {
- context->rleremaining--;
- dest[dx ^ dxor] = context->prevdata;
- continue;
- }
-
- /* peek ahead to get maxbits worth of data */
- bits = bit_buffer_peek(&bitbuf, context->maxbits);
-
- /* look it up, then remove the actual number of bits for this code */
- lookup = context->lookup[bits];
- bit_buffer_remove(&bitbuf, LOOKUP_BITS(lookup));
-
- /* compute the data and handle RLE decoding */
- data = LOOKUP_CODE(lookup);
-
- /* if not an RLE special, just add to the previous; otherwise, start counting RLE */
- if (data < 0x100)
- context->prevdata += (UINT8)data;
- else
- context->rleremaining = code_to_rlecount(data) - 1;
-
- /* store the updated data value */
- dest[dx ^ dxor] = context->prevdata;
- }
+ smallhuff.histo_one(*dest++ = newval + 1);
+ last = newval;
+ repcount = 0;
}
-
- /* advance to the next row */
- dest += dstride;
}
- /* determine the actual length and indicate overflow */
- *actlength = bit_buffer_read_offset(&bitbuf);
- return bitbuf.overflow ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
-}
-
-
-/*-------------------------------------------------
- huffman_deltarle_decode_data_interleaved_0102 -
- decode data using 3 unique contexts in
- 0/1/0/2 order (used for Y/Cb/Y/Cr encoding)
--------------------------------------------------*/
-
-static huffman_error huffman_deltarle_decode_data_interleaved_0102(huffman_context **contexts, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength)
-{
- const huffman_lookup_value *table02, *table1, *table3;
- int rleremaining02, rleremaining1, rleremaining3;
- UINT8 prevdata02 = 0, prevdata1 = 0, prevdata3 = 0;
- int maxbits = contexts[0]->maxbits;
- huffman_error error;
- bit_buffer bitbuf;
- UINT32 dx, dy;
-
- /* regenerate the lookup tables if necessary */
- if (contexts[0]->lookupdirty)
- {
- error = build_lookup_table(contexts[0], HUFFMAN_DELTARLE_CODES);
- if (error != HUFFERR_NONE)
- return error;
- }
- if (contexts[1]->lookupdirty)
+ // flush any final RLE counts
+ if (repcount > 0)
{
- error = build_lookup_table(contexts[1], HUFFMAN_DELTARLE_CODES);
- if (error != HUFFERR_NONE)
- return error;
+ if (repcount == 1)
+ smallhuff.histo_one(*dest++ = last + 1);
+ else
+ smallhuff.histo_one(*dest++ = 0), *lengths++ = repcount - 2;
}
- if (contexts[3]->lookupdirty)
- {
- error = build_lookup_table(contexts[3], HUFFMAN_DELTARLE_CODES);
- if (error != HUFFERR_NONE)
- return error;
- }
-
- /* cache the tables locally */
- table02 = contexts[0]->lookup;
- table1 = contexts[1]->lookup;
- table3 = contexts[3]->lookup;
- /* initialize our bit buffer */
- bit_buffer_read_init(&bitbuf, source, slength);
+ // compute an optimal tree
+ smallhuff.compute_tree_from_histo();
- /* iterate over "height" */
- for (dy = 0; dy < dheight; dy++)
- {
- /* reset RLE counts */
- rleremaining02 = rleremaining1 = rleremaining3 = 0;
-
- /* iterate over "width" */
- for (dx = 0; dx < dwidth; dx += 4)
+ // determine the first and last non-zero nodes
+ int first_non_zero = 31, last_non_zero = 0;
+ for (int index = 1; index < smallhuff.m_numcodes; index++)
+ if (smallhuff.m_huffnode[index].m_numbits != 0)
{
- huffman_lookup_value lookup;
- UINT32 bits;
- int data;
-
- /* ----- offset 0 ----- */
-
- /* if we have RLE remaining, just store that */
- if (rleremaining02 != 0)
- rleremaining02--;
- else
- {
- /* peek ahead to get maxbits worth of data */
- bits = bit_buffer_peek(&bitbuf, maxbits);
-
- /* look it up, then remove the actual number of bits for this code */
- lookup = table02[bits];
- bit_buffer_remove(&bitbuf, LOOKUP_BITS(lookup));
-
- /* compute the data and handle RLE decoding */
- data = LOOKUP_CODE(lookup);
-
- /* if not an RLE special, just add to the previous; otherwise, start counting RLE */
- if (data < 0x100)
- prevdata02 += (UINT8)data;
- else
- rleremaining02 = code_to_rlecount(data) - 1;
- }
-
- /* store the updated data value */
- dest[(dx + 0) ^ dxor] = prevdata02;
-
- /* ----- offset 1 ----- */
-
- /* if we have RLE remaining, just store that */
- if (rleremaining1 != 0)
- rleremaining1--;
- else
- {
- /* peek ahead to get maxbits worth of data */
- bits = bit_buffer_peek(&bitbuf, maxbits);
-
- /* look it up, then remove the actual number of bits for this code */
- lookup = table1[bits];
- bit_buffer_remove(&bitbuf, LOOKUP_BITS(lookup));
-
- /* compute the data and handle RLE decoding */
- data = LOOKUP_CODE(lookup);
-
- /* if not an RLE special, just add to the previous; otherwise, start counting RLE */
- if (data < 0x100)
- prevdata1 += (UINT8)data;
- else
- rleremaining1 = code_to_rlecount(data) - 1;
- }
-
- /* store the updated data value */
- dest[(dx + 1) ^ dxor] = prevdata1;
-
- /* ----- offset 2 (same as 0) ----- */
-
- /* if we have RLE remaining, just store that */
- if (rleremaining02 != 0)
- rleremaining02--;
- else
- {
- /* peek ahead to get maxbits worth of data */
- bits = bit_buffer_peek(&bitbuf, maxbits);
-
- /* look it up, then remove the actual number of bits for this code */
- lookup = table02[bits];
- bit_buffer_remove(&bitbuf, LOOKUP_BITS(lookup));
-
- /* compute the data and handle RLE decoding */
- data = LOOKUP_CODE(lookup);
-
- /* if not an RLE special, just add to the previous; otherwise, start counting RLE */
- if (data < 0x100)
- prevdata02 += (UINT8)data;
- else
- rleremaining02 = code_to_rlecount(data) - 1;
- }
-
- /* store the updated data value */
- dest[(dx + 2) ^ dxor] = prevdata02;
-
- /* ----- offset 3 ----- */
+ if (first_non_zero == 31)
+ first_non_zero = index;
+ last_non_zero = index;
+ }
- /* if we have RLE remaining, just store that */
- if (rleremaining3 != 0)
- rleremaining3--;
+ // clamp first non-zero to be 8 at a maximum
+ first_non_zero = MIN(first_non_zero, 8);
+
+	// output the lengths of each small tree node, starting with the RLE
+ // token (0), followed by the first_non_zero value, followed by the data
+ // terminated by a 7
+ bitbuf.write(smallhuff.m_huffnode[0].m_numbits, 3);
+ bitbuf.write(first_non_zero - 1, 3);
+ for (int index = first_non_zero; index <= last_non_zero; index++)
+ bitbuf.write(smallhuff.m_huffnode[index].m_numbits, 3);
+ bitbuf.write(7, 3);
+
+ // determine the maximum length of an RLE count
+ UINT32 temp = m_numcodes - 9;
+ UINT8 rlefullbits = 0;
+ while (temp != 0)
+ temp >>= 1, rlefullbits++;
+
+ // now encode the RLE data
+ lengths = rle_lengths;
+ for (UINT8 *src = rle_data; src < dest; src++)
+ {
+ // encode the data
+ UINT8 data = *src;
+ smallhuff.encode_one(bitbuf, data);
+
+ // if this is an RLE token, encode the length following
+ if (data == 0)
+ {
+ int count = *lengths++;
+ if (count < 7)
+ bitbuf.write(count, 3);
else
- {
- /* peek ahead to get maxbits worth of data */
- bits = bit_buffer_peek(&bitbuf, maxbits);
-
- /* look it up, then remove the actual number of bits for this code */
- lookup = table3[bits];
- bit_buffer_remove(&bitbuf, LOOKUP_BITS(lookup));
-
- /* compute the data and handle RLE decoding */
- data = LOOKUP_CODE(lookup);
-
- /* if not an RLE special, just add to the previous; otherwise, start counting RLE */
- if (data < 0x100)
- prevdata3 += (UINT8)data;
- else
- rleremaining3 = code_to_rlecount(data) - 1;
- }
-
- /* store the updated data value */
- dest[(dx + 3) ^ dxor] = prevdata3;
+ bitbuf.write(7, 3), bitbuf.write(count - 7, rlefullbits);
}
-
- /* advance to the next row */
- dest += dstride;
}
-
- /* determine the actual length and indicate overflow */
- *actlength = bit_buffer_read_offset(&bitbuf);
- return bitbuf.overflow ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
+
+ // flush the final buffer
+ return bitbuf.overflow() ? HUFFERR_OUTPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
}
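
Read together, import_tree_huffman and export_tree_huffman imply the following stream layout; this is only a summary of the code above, not a separate specification.

// Huffman-encoded tree layout (as read/written by the two routines above):
//
//   3 bits            code length of small-tree symbol 0 (the "repeat" escape)
//   3 bits            index of the first non-zero small-tree symbol, minus 1
//   3 bits each       remaining small-tree code lengths, terminated by a 7
//   small-tree codes  one per stored code length:
//                       symbol v != 0  ->  code length is v - 1
//                       symbol 0       ->  repeat the previous length; a 3-bit
//                                          count follows and the length repeats
//                                          (count + 2) more times, with 7
//                                          escaping to an extra rlefullbits-bit
//                                          count that is added on
//
// where rlefullbits is the bit width of (numcodes - 9), as computed above.
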
+//-------------------------------------------------
+// compute_tree_from_histo - common backend for
+// computing a tree based on the data histogram
+//-------------------------------------------------
-/***************************************************************************
- INTERNAL FUNCTIONS
-***************************************************************************/
-
-/*-------------------------------------------------
- import_tree - import a huffman tree from a
- source data stream
--------------------------------------------------*/
-
-static huffman_error import_tree(huffman_context *context, const UINT8 *source, UINT32 slength, UINT32 *actlength, UINT32 numcodes)
+huffman_error huffman_context_base::compute_tree_from_histo()
{
- huffman_error error;
- bit_buffer bitbuf;
- int curnode;
- int numbits;
-
- /* initialize the input buffer */
- bit_buffer_read_init(&bitbuf, source, slength);
-
- /* bits per entry depends on the maxbits */
- if (context->maxbits >= 16)
- numbits = 5;
- else if (context->maxbits >= 8)
- numbits = 4;
- else
- numbits = 3;
+ // compute the number of data items in the histogram
+ UINT32 sdatacount = 0;
+ for (int i = 0; i < m_numcodes; i++)
+ sdatacount += m_datahisto[i];
- /* loop until we read all the nodes */
- for (curnode = 0; curnode < numcodes; )
+ // binary search to achieve the optimum encoding
+ UINT32 lowerweight = 0;
+ UINT32 upperweight = sdatacount * 2;
+ while (1)
{
- int nodebits = bit_buffer_read(&bitbuf, numbits);
-
- /* a non-one value is just raw */
- if (nodebits != 1)
- context->huffnode[curnode++].numbits = nodebits;
+ // build a tree using the current weight
+ UINT32 curweight = (upperweight + lowerweight) / 2;
+ int curmaxbits = build_tree(sdatacount, curweight);
- /* a one value is an escape code */
- else
+ // apply binary search here
+ if (curmaxbits <= m_maxbits)
{
- nodebits = bit_buffer_read(&bitbuf, numbits);
-
- /* a double 1 is just a single 1 */
- if (nodebits == 1)
- context->huffnode[curnode++].numbits = nodebits;
+ lowerweight = curweight;
- /* otherwise, we need one for value for the repeat count */
- else
- {
- int repcount = bit_buffer_read(&bitbuf, numbits) + 3;
- while (repcount--)
- context->huffnode[curnode++].numbits = nodebits;
- }
+ // early out if it worked with the raw weights, or if we're done searching
+ if (curweight == sdatacount || (upperweight - lowerweight) <= 1)
+ break;
}
+ else
+ upperweight = curweight;
}
- /* assign canonical codes for all nodes based on their code lengths */
- error = assign_canonical_codes(context, numcodes);
- if (error != HUFFERR_NONE)
- return error;
-
- /* make sure we ended up with the right number */
- if (curnode != numcodes)
- return HUFFERR_INVALID_DATA;
-
- *actlength = bit_buffer_read_offset(&bitbuf);
- return bitbuf.overflow ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
+ // assign canonical codes for all nodes based on their code lengths
+ return assign_canonical_codes();
}
-/*-------------------------------------------------
- export_tree - export a huffman tree to a
- target data stream
--------------------------------------------------*/
-
-static huffman_error export_tree(huffman_context *context, UINT8 *dest, UINT32 dlength, UINT32 *actlength, UINT32 numcodes)
-{
- bit_buffer bitbuf;
- int repcount;
- int lastval;
- int numbits;
- int i;
-
- /* initialize the output buffer */
- bit_buffer_write_init(&bitbuf, dest, dlength);
-
- /* bits per entry depends on the maxbits */
- if (context->maxbits >= 16)
- numbits = 5;
- else if (context->maxbits >= 8)
- numbits = 4;
- else
- numbits = 3;
-
- /* RLE encode the lengths */
- lastval = ~0;
- repcount = 0;
- for (i = 0; i < numcodes; i++)
- {
- int newval = context->huffnode[i].numbits;
-
- /* if we match the previous value, just bump the repcount */
- if (newval == lastval)
- repcount++;
-
- /* otherwise, we need to flush the previous repeats */
- else
- {
- if (repcount != 0)
- write_rle_tree_bits(&bitbuf, lastval, repcount, numbits);
- lastval = newval;
- repcount = 1;
- }
- }
-
- /* flush the last value */
- write_rle_tree_bits(&bitbuf, lastval, repcount, numbits);
- *actlength = bit_buffer_flush(&bitbuf);
- return bitbuf.overflow ? HUFFERR_OUTPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
-}
+//**************************************************************************
+// INTERNAL FUNCTIONS
+//**************************************************************************
-/*-------------------------------------------------
- write_rle_tree_bits - write an RLE encoded
- set of data to a target stream
--------------------------------------------------*/
+//-------------------------------------------------
+// write_rle_tree_bits - write an RLE encoded
+// set of data to a target stream
+//-------------------------------------------------
-static void write_rle_tree_bits(bit_buffer *bitbuf, int value, int repcount, int numbits)
+void huffman_context_base::write_rle_tree_bits(bitstream_out &bitbuf, int value, int repcount, int numbits)
{
- /* loop until we have output all of the repeats */
+ // loop until we have output all of the repeats
while (repcount > 0)
{
- /* if we have a 1, write it twice as it is an escape code */
+ // if we have a 1, write it twice as it is an escape code
if (value == 1)
{
- bit_buffer_write(bitbuf, 1, numbits);
- bit_buffer_write(bitbuf, 1, numbits);
+ bitbuf.write(1, numbits);
+ bitbuf.write(1, numbits);
repcount--;
}
- /* if we have two or fewer in a row, write them raw */
+ // if we have two or fewer in a row, write them raw
else if (repcount <= 2)
{
- bit_buffer_write(bitbuf, value, numbits);
+ bitbuf.write(value, numbits);
repcount--;
}
- /* otherwise, write a triple using 1 as the escape code */
+ // otherwise, write a triple using 1 as the escape code
else
{
int cur_reps = MIN(repcount - 3, (1 << numbits) - 1);
- bit_buffer_write(bitbuf, 1, numbits);
- bit_buffer_write(bitbuf, value, numbits);
- bit_buffer_write(bitbuf, cur_reps, numbits);
+ bitbuf.write(1, numbits);
+ bitbuf.write(value, numbits);
+ bitbuf.write(cur_reps, numbits);
repcount -= cur_reps + 3;
}
}
}
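
As a concrete reading of write_rle_tree_bits and import_tree_rle, here is what a few length runs look like on the wire when numbits is 4 (i.e. maxbits between 8 and 15):

// value(s) to store       bits written (4 each)     decoder interpretation
// ------------------      ---------------------     ----------------------
// 6                       0110                      one code of length 6
// 6 x7                    0001 0110 0100            escape 1, value 6,
//                                                   repeat 4 + 3 = 7 times
// 1 x2                    0001 0001 0001 0001       1 is the escape, so each
//                                                   literal 1 is written twice
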
-/*-------------------------------------------------
- tree_node_compare - compare two tree nodes
- by weight
--------------------------------------------------*/
-
-static int CLIB_DECL tree_node_compare(const void *item1, const void *item2)
-{
- const huffman_node *node1 = *(const huffman_node **)item1;
- const huffman_node *node2 = *(const huffman_node **)item2;
- return node2->weight - node1->weight;
-}
-
-
-/*-------------------------------------------------
- compute_optimal_tree - common backend for
- computing a tree based on the data histogram
--------------------------------------------------*/
+//-------------------------------------------------
+// tree_node_compare - compare two tree nodes
+// by weight
+//-------------------------------------------------
-static huffman_error compute_optimal_tree(huffman_context *context, const UINT32 *datahisto, UINT32 numcodes)
+int CLIB_DECL huffman_context_base::tree_node_compare(const void *item1, const void *item2)
{
- UINT32 lowerweight, upperweight;
- UINT32 sdatacount;
- int i;
-
- /* compute the number of data items in the histogram */
- sdatacount = 0;
- for (i = 0; i < numcodes; i++)
- sdatacount += datahisto[i];
-
- /* binary search to achieve the optimum encoding */
- lowerweight = 0;
- upperweight = sdatacount * 2;
- while (TRUE)
- {
- UINT32 curweight = (upperweight + lowerweight) / 2;
- int curmaxbits;
-
- /* build a tree using the current weight */
- curmaxbits = huffman_build_tree(context, datahisto, sdatacount, curweight, numcodes);
-
- /* apply binary search here */
- if (curmaxbits <= context->maxbits)
- {
- lowerweight = curweight;
-
- /* early out if it worked with the raw weights, or if we're done searching */
- if (curweight == sdatacount || (upperweight - lowerweight) <= 1)
- break;
- }
- else
- upperweight = curweight;
- }
-
- /* assign canonical codes for all nodes based on their code lengths */
- return assign_canonical_codes(context, numcodes);
+ const node_t *node1 = *(const node_t **)item1;
+ const node_t *node2 = *(const node_t **)item2;
+ return node2->m_weight - node1->m_weight;
}
-/*-------------------------------------------------
- huffman_build_tree - build a huffman tree
- based on the data distribution
--------------------------------------------------*/
+//-------------------------------------------------
+// build_tree - build a huffman tree based on the
+// data distribution
+//-------------------------------------------------
-static int huffman_build_tree(huffman_context *context, const UINT32 *datahisto, UINT32 totaldata, UINT32 totalweight, UINT32 numcodes)
+int huffman_context_base::build_tree(UINT32 totaldata, UINT32 totalweight)
{
- huffman_node *list[MAX_HUFFMAN_CODES];
- int listitems;
- int nextalloc;
- int maxbits;
- int i;
-
- /* make a list of all non-zero nodes */
- listitems = 0;
- memset(context->huffnode, 0, numcodes * sizeof(context->huffnode[0]));
- for (i = 0; i < numcodes; i++)
- if (datahisto[i] != 0)
+ // make a list of all non-zero nodes
+ dynamic_array<node_t *> list(m_numcodes * 2);
+ int listitems = 0;
+ memset(m_huffnode, 0, m_numcodes * sizeof(m_huffnode[0]));
+ for (int curcode = 0; curcode < m_numcodes; curcode++)
+ if (m_datahisto[curcode] != 0)
{
- list[listitems++] = &context->huffnode[i];
- context->huffnode[i].count = datahisto[i];
+ list[listitems++] = &m_huffnode[curcode];
+ m_huffnode[curcode].m_count = m_datahisto[curcode];
- /* scale the weight by the current effective length, ensuring we don't go to 0 */
- context->huffnode[i].weight = (UINT64)datahisto[i] * (UINT64)totalweight / (UINT64)totaldata;
- if (context->huffnode[i].weight == 0)
- context->huffnode[i].weight = 1;
+ // scale the weight by the current effective length, ensuring we don't go to 0
+ m_huffnode[curcode].m_weight = UINT64(m_datahisto[curcode]) * UINT64(totalweight) / UINT64(totaldata);
+ if (m_huffnode[curcode].m_weight == 0)
+ m_huffnode[curcode].m_weight = 1;
}
- /* sort the list by weight, largest weight first */
+ // sort the list by weight, largest weight first
qsort(list, listitems, sizeof(list[0]), tree_node_compare);
- /* now build the tree */
- nextalloc = MAX_HUFFMAN_CODES;
+ // now build the tree
+ int nextalloc = m_numcodes;
while (listitems > 1)
{
- huffman_node *node0, *node1, *newnode;
-
- /* remove lowest two items */
- node1 = list[--listitems];
- node0 = list[--listitems];
+ // remove lowest two items
+ node_t &node1 = *list[--listitems];
+ node_t &node0 = *list[--listitems];
- /* create new node */
- newnode = &context->huffnode[nextalloc++];
- newnode->parent = NULL;
- node0->parent = node1->parent = newnode;
- newnode->weight = node0->weight + node1->weight;
+ // create new node
+ node_t &newnode = m_huffnode[nextalloc++];
+ newnode.m_parent = NULL;
+ node0.m_parent = node1.m_parent = &newnode;
+ newnode.m_weight = node0.m_weight + node1.m_weight;
- /* insert into list at appropriate location */
- for (i = 0; i < listitems; i++)
- if (newnode->weight > list[i]->weight)
+ // insert into list at appropriate location
+ int curitem;
+ for (curitem = 0; curitem < listitems; curitem++)
+ if (newnode.m_weight > list[curitem]->m_weight)
{
- memmove(&list[i+1], &list[i], (listitems - i) * sizeof(list[0]));
+ memmove(&list[curitem+1], &list[curitem], (listitems - curitem) * sizeof(list[0]));
break;
}
- list[i] = newnode;
+ list[curitem] = &newnode;
listitems++;
}
- /* compute the number of bits in each code, and fill in another histogram */
- maxbits = 0;
- for (i = 0; i < numcodes; i++)
+ // compute the number of bits in each code, and fill in another histogram
+ int maxbits = 0;
+ for (int curcode = 0; curcode < m_numcodes; curcode++)
{
- huffman_node *node = &context->huffnode[i];
- node->numbits = 0;
+ node_t &node = m_huffnode[curcode];
+ node.m_numbits = 0;
- /* if we have a non-zero weight, compute the number of bits */
- if (node->weight > 0)
+ // if we have a non-zero weight, compute the number of bits
+ if (node.m_weight > 0)
{
- huffman_node *curnode;
-
- /* determine the number of bits for this node */
- for (curnode = node; curnode->parent != NULL; curnode = curnode->parent)
- node->numbits++;
- if (node->numbits == 0)
- node->numbits = 1;
-
- /* keep track of the max */
- maxbits = MAX(maxbits, node->numbits);
+ // determine the number of bits for this node
+ for (node_t *curnode = &node; curnode->m_parent != NULL; curnode = curnode->m_parent)
+ node.m_numbits++;
+ if (node.m_numbits == 0)
+ node.m_numbits = 1;
+
+ // keep track of the max
+ maxbits = MAX(maxbits, node.m_numbits);
}
}
-
return maxbits;
}
-/*-------------------------------------------------
- assign_canonical_codes - assign
- canonical codes to all the nodes based on the
- number of bits in each
--------------------------------------------------*/
+//-------------------------------------------------
+// assign_canonical_codes - assign canonical codes
+// to all the nodes based on the number of bits
+// in each
+//-------------------------------------------------
-static huffman_error assign_canonical_codes(huffman_context *context, UINT32 numcodes)
+huffman_error huffman_context_base::assign_canonical_codes()
{
- UINT32 bithisto[33];
- int curstart;
- int i;
-
- /* build up a histogram of bit lengths */
- memset(bithisto, 0, sizeof(bithisto));
- for (i = 0; i < numcodes; i++)
+ // build up a histogram of bit lengths
+ UINT32 bithisto[33] = { 0 };
+ for (int curcode = 0; curcode < m_numcodes; curcode++)
{
- huffman_node *node = &context->huffnode[i];
- if (node->numbits > context->maxbits)
+ node_t &node = m_huffnode[curcode];
+ if (node.m_numbits > m_maxbits)
return HUFFERR_INTERNAL_INCONSISTENCY;
- if (node->numbits <= 32)
- bithisto[node->numbits]++;
+ if (node.m_numbits <= 32)
+ bithisto[node.m_numbits]++;
}
- /* for each code length, determine the starting code number */
- curstart = 0;
- for (i = 32; i > 0; i--)
+ // for each code length, determine the starting code number
+ UINT32 curstart = 0;
+ for (int codelen = 32; codelen > 0; codelen--)
{
- UINT32 nextstart = (curstart + bithisto[i]) >> 1;
- if (i != 1 && nextstart * 2 != (curstart + bithisto[i]))
+ UINT32 nextstart = (curstart + bithisto[codelen]) >> 1;
+ if (codelen != 1 && nextstart * 2 != (curstart + bithisto[codelen]))
return HUFFERR_INTERNAL_INCONSISTENCY;
- bithisto[i] = curstart;
+ bithisto[codelen] = curstart;
curstart = nextstart;
}
- /* now assign canonical codes */
- for (i = 0; i < numcodes; i++)
+ // now assign canonical codes
+ for (int curcode = 0; curcode < m_numcodes; curcode++)
{
- huffman_node *node = &context->huffnode[i];
- if (node->numbits > 0)
- node->bits = bithisto[node->numbits]++;
+ node_t &node = m_huffnode[curcode];
+ if (node.m_numbits > 0)
+ node.m_bits = bithisto[node.m_numbits]++;
}
-
- /* if there was a decoding table, get rid of it now */
- context->lookupdirty = TRUE;
return HUFFERR_NONE;
}
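
For reference, a small worked example of the assignment above (the code lengths are the only input; the symbol values themselves do not matter):

// code lengths: A=1, B=2, C=3, D=3  ->  bit-length histogram {1:1, 2:1, 3:2}
//
// walking lengths 32..1 gives starting codes: length 3 -> 0, length 2 -> 1,
// length 1 -> 1, so the canonical assignment is
//
//   A (1 bit)   1
//   B (2 bits)  01
//   C (3 bits)  000
//   D (3 bits)  001
//
// which is prefix-free; build_lookup_table below left-justifies each code to
// maxbits to fill its range of table entries.
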
-/*-------------------------------------------------
- build_lookup_table - build a lookup
- table for fast decoding
--------------------------------------------------*/
+//-------------------------------------------------
+// build_lookup_table - build a lookup table for
+// fast decoding
+//-------------------------------------------------
-static huffman_error build_lookup_table(huffman_context *context, UINT32 numcodes)
+void huffman_context_base::build_lookup_table()
{
- int i;
-
- /* allocate a table if needed */
- if (context->lookup == NULL)
- context->lookup = (huffman_lookup_value *)malloc((UINT32)sizeof(context->lookup[0]) * (UINT32)(1 << context->maxbits));
- if (context->lookup == NULL)
- return HUFFERR_OUT_OF_MEMORY;
-
- /* now build */
- for (i = 0; i < numcodes; i++)
+ // iterate over all codes
+ for (int curcode = 0; curcode < m_numcodes; curcode++)
{
- huffman_node *node = &context->huffnode[i];
- if (node->numbits > 0)
+ // process all nodes which have non-zero bits
+ node_t &node = m_huffnode[curcode];
+ if (node.m_numbits > 0)
{
- huffman_lookup_value *dest, *destend;
+ // set up the entry
+ lookup_value value = MAKE_LOOKUP(curcode, node.m_numbits);
- /* left justify this node's bit values to max bits */
- int shift = context->maxbits - node->numbits;
- UINT32 start = node->bits << shift;
- UINT32 end = ((node->bits + 1) << shift) - 1;
- huffman_lookup_value value;
-
- /* set up the entry */
- value = (i << 6) | node->numbits;
-
- /* fill all matching entries */
- dest = &context->lookup[start];
- destend = &context->lookup[end];
+ // fill all matching entries
+ int shift = m_maxbits - node.m_numbits;
+ lookup_value *dest = &m_lookup[node.m_bits << shift];
+ lookup_value *destend = &m_lookup[((node.m_bits + 1) << shift) - 1];
while (dest <= destend)
*dest++ = value;
}
}
+}
- /* no longer dirty */
- context->lookupdirty = FALSE;
- return HUFFERR_NONE;
+
+
+//**************************************************************************
+// 8-BIT ENCODER
+//**************************************************************************
+
+//-------------------------------------------------
+// huffman_8bit_encoder - constructor
+//-------------------------------------------------
+
+huffman_8bit_encoder::huffman_8bit_encoder()
+{
+}
+
+
+//-------------------------------------------------
+// encode - encode a full buffer
+//-------------------------------------------------
+
+huffman_error huffman_8bit_encoder::encode(const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dlength, UINT32 &complength)
+{
+ // first compute the histogram
+ histo_reset();
+ for (UINT32 cur = 0; cur < slength; cur++)
+ histo_one(source[cur]);
+
+ // then compute the tree
+ huffman_error err = compute_tree_from_histo();
+ if (err != HUFFERR_NONE)
+ return err;
+
+ // export the tree
+ bitstream_out bitbuf(dest, dlength);
+ err = export_tree_huffman(bitbuf);
+ if (err != HUFFERR_NONE)
+ return err;
+
+ // then encode the data
+ for (UINT32 cur = 0; cur < slength; cur++)
+ encode_one(bitbuf, source[cur]);
+ complength = bitbuf.flush();
+ return bitbuf.overflow() ? HUFFERR_OUTPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
+}
+
+
+
+//**************************************************************************
+// 8-BIT DECODER
+//**************************************************************************
+
+//-------------------------------------------------
+// huffman_8bit_decoder - constructor
+//-------------------------------------------------
+
+huffman_8bit_decoder::huffman_8bit_decoder()
+{
+}
+
+
+//-------------------------------------------------
+// decode - decode a full buffer
+//-------------------------------------------------
+
+huffman_error huffman_8bit_decoder::decode(const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dlength)
+{
+ // first import the tree
+ bitstream_in bitbuf(source, slength);
+ huffman_error err = import_tree_huffman(bitbuf);
+ if (err != HUFFERR_NONE)
+ return err;
+
+ // then decode the data
+ for (UINT32 cur = 0; cur < dlength; cur++)
+ dest[cur] = decode_one(bitbuf);
+ bitbuf.flush();
+ return bitbuf.overflow() ? HUFFERR_INPUT_BUFFER_TOO_SMALL : HUFFERR_NONE;
}
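
Taken together, the two 8-bit helpers give a one-call compress/expand pair; a hypothetical round trip (the helper and all buffer names are illustrative, not part of this change):

#include "huffman.h"

// Hypothetical round trip: compress "raw" into a caller-provided buffer,
// then expand it again; "expanded" must hold rawlength bytes.
huffman_error compress_and_expand(const UINT8 *raw, UINT32 rawlength,
                                  UINT8 *compressed, UINT32 complimit,
                                  UINT8 *expanded)
{
	// encode() histograms the data, builds the tree, writes the tree and
	// then the codes, returning the compressed size in complength
	UINT32 complength;
	huffman_8bit_encoder encoder;
	huffman_error err = encoder.encode(raw, rawlength, compressed, complimit, complength);
	if (err != HUFFERR_NONE)
		return err;

	// decode() imports the tree and expands exactly rawlength bytes
	huffman_8bit_decoder decoder;
	return decoder.decode(compressed, complength, expanded, rawlength);
}
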
diff --git a/src/lib/util/huffman.h b/src/lib/util/huffman.h
index f512975c77f..10dc301e7df 100644
--- a/src/lib/util/huffman.h
+++ b/src/lib/util/huffman.h
@@ -2,7 +2,7 @@
huffman.h
- Huffman compression routines.
+ Static Huffman compression and decompression helpers.
****************************************************************************
@@ -37,19 +37,22 @@
***************************************************************************/
+#pragma once
+
#ifndef __HUFFMAN_H__
+#define __HUFFMAN_H__
#include "osdcore.h"
+#include "bitstream.h"
-/***************************************************************************
- CONSTANTS
-***************************************************************************/
+//**************************************************************************
+// CONSTANTS
+//**************************************************************************
-enum _huffman_error
+enum huffman_error
{
HUFFERR_NONE = 0,
- HUFFERR_OUT_OF_MEMORY,
HUFFERR_TOO_MANY_BITS,
HUFFERR_INVALID_DATA,
HUFFERR_INPUT_BUFFER_TOO_SMALL,
@@ -57,45 +60,193 @@ enum _huffman_error
HUFFERR_INTERNAL_INCONSISTENCY,
HUFFERR_TOO_MANY_CONTEXTS
};
-typedef enum _huffman_error huffman_error;
-/***************************************************************************
- TYPE DEFINITIONS
-***************************************************************************/
+//**************************************************************************
+// TYPE DEFINITIONS
+//**************************************************************************
-typedef UINT16 huffman_lookup_value;
+// ======================> huffman_context_base
-typedef struct _huffman_context huffman_context;
+// base class for encoding and decoding
+class huffman_context_base
+{
+protected:
+ typedef UINT16 lookup_value;
+
+ // a node in the huffman tree
+ struct node_t
+ {
+ node_t * m_parent; // pointer to parent node
+ UINT32 m_count; // number of hits on this node
+ UINT32 m_weight; // assigned weight of this node
+ UINT32 m_bits; // bits used to encode the node
+ UINT8 m_numbits; // number of bits needed for this node
+ };
+
+ // construction/destruction
+ huffman_context_base(int numcodes, int maxbits, lookup_value *lookup, UINT32 *histo, node_t *nodes);
+
+ // tree creation
+ huffman_error compute_tree_from_histo();
+
+ // static tree import; huffman is notably more efficient
+ huffman_error import_tree_rle(bitstream_in &bitbuf);
+ huffman_error import_tree_huffman(bitstream_in &bitbuf);
+
+ // static tree export
+ huffman_error export_tree_rle(bitstream_out &bitbuf);
+ huffman_error export_tree_huffman(bitstream_out &bitbuf);
+
+ // internal helpers
+ void write_rle_tree_bits(bitstream_out &bitbuf, int value, int repcount, int numbits);
+ static int CLIB_DECL tree_node_compare(const void *item1, const void *item2);
+ int build_tree(UINT32 totaldata, UINT32 totalweight);
+ huffman_error assign_canonical_codes();
+ void build_lookup_table();
+
+protected:
+ // internal state
+ UINT32 m_numcodes; // number of total codes being processed
+ UINT8 m_maxbits; // maximum bits per code
+ UINT8 m_prevdata; // value of the previous data (for delta-RLE encoding)
+ int m_rleremaining; // number of RLE bytes remaining (for delta-RLE encoding)
+ lookup_value * m_lookup; // pointer to the lookup table
+ UINT32 * m_datahisto; // histogram of data values
+ node_t * m_huffnode; // array of nodes
+};
+// ======================> huffman_encoder
-/***************************************************************************
- FUNCTION PROTOTYPES
-***************************************************************************/
+// template class for encoding
+template<int _NumCodes = 256, int _MaxBits = 16>
+class huffman_encoder : public huffman_context_base
+{
+public:
+ // pass through to the underlying constructor
+ huffman_encoder()
+ : huffman_context_base(_NumCodes, _MaxBits, NULL, m_datahisto_array, m_huffnode_array) { histo_reset(); }
+
+ // single item operations
+ void histo_reset() { memset(m_datahisto_array, 0, sizeof(m_datahisto_array)); }
+ void histo_one(UINT32 data);
+ void encode_one(bitstream_out &bitbuf, UINT32 data);
+
+ // expose tree computation and export
+ using huffman_context_base::compute_tree_from_histo;
+ using huffman_context_base::export_tree_rle;
+ using huffman_context_base::export_tree_huffman;
+
+private:
+ // array versions of the info we need
+ UINT32 m_datahisto_array[_NumCodes];
+ node_t m_huffnode_array[_NumCodes * 2];
+};
+
+
+// ======================> huffman_decoder
-huffman_error huffman_create_context(huffman_context **context, int maxbits);
-void huffman_free_context(huffman_context *context);
+// template class for decoding
+template<int _NumCodes = 256, int _MaxBits = 16>
+class huffman_decoder : public huffman_context_base
+{
+public:
+ // pass through to the underlying constructor
+ huffman_decoder()
+ : huffman_context_base(_NumCodes, _MaxBits, m_lookup_array, NULL, m_huffnode_array) { }
+
+ // single item operations
+ UINT32 decode_one(bitstream_in &bitbuf);
+
+ // expose tree import
+ using huffman_context_base::import_tree_rle;
+ using huffman_context_base::import_tree_huffman;
+
+private:
+ // array versions of the info we need
+ node_t m_huffnode_array[_NumCodes];
+ lookup_value m_lookup_array[1 << _MaxBits];
+};
-huffman_error huffman_import_tree(huffman_context *context, const UINT8 *source, UINT32 slength, UINT32 *actlength);
-huffman_error huffman_export_tree(huffman_context *context, UINT8 *dest, UINT32 dlength, UINT32 *actlength);
-huffman_error huffman_deltarle_import_tree(huffman_context *context, const UINT8 *source, UINT32 slength, UINT32 *actlength);
-huffman_error huffman_deltarle_export_tree(huffman_context *context, UINT8 *dest, UINT32 dlength, UINT32 *actlength);
-huffman_error huffman_compute_tree(huffman_context *context, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor);
-huffman_error huffman_compute_tree_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor);
-huffman_error huffman_deltarle_compute_tree(huffman_context *context, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor);
-huffman_error huffman_deltarle_compute_tree_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor);
+// ======================> huffman_8bit_encoder
+
+// generic 8-bit encoder
+class huffman_8bit_encoder : public huffman_encoder<>
+{
+public:
+ // construction/destruction
+ huffman_8bit_encoder();
+
+ // operations
+ huffman_error encode(const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 destlength, UINT32 &complength);
+};
+
+
+// ======================> huffman_8bit_decoder
+
+// generic 8-bit decoder
+class huffman_8bit_decoder : public huffman_decoder<>
+{
+public:
+ // construction/destruction
+ huffman_8bit_decoder();
+
+ // operations
+ huffman_error decode(const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 destlength);
+};
+
+
+
+//**************************************************************************
+// INLINE FUNCTIONS
+//**************************************************************************
+
+//-------------------------------------------------
+// histo_one - update the histogram
+//-------------------------------------------------
+
+template<int _NumCodes, int _MaxBits>
+inline void huffman_encoder<_NumCodes, _MaxBits>::histo_one(UINT32 data)
+{
+ m_datahisto[data]++;
+}
+
+
+//-------------------------------------------------
+// encode_one - encode a single code to the
+// huffman stream
+//-------------------------------------------------
+
+template<int _NumCodes, int _MaxBits>
+inline void huffman_encoder<_NumCodes, _MaxBits>::encode_one(bitstream_out &bitbuf, UINT32 data)
+{
+ // write the data
+ node_t &node = m_huffnode[data];
+ bitbuf.write(node.m_bits, node.m_numbits);
+}
+
+
+//-------------------------------------------------
+// decode_one - decode a single code from the
+// huffman stream
+//-------------------------------------------------
+
+template<int _NumCodes, int _MaxBits>
+inline UINT32 huffman_decoder<_NumCodes, _MaxBits>::decode_one(bitstream_in &bitbuf)
+{
+ // peek ahead to get maxbits worth of data
+ UINT32 bits = bitbuf.peek(m_maxbits);
+
+ // look it up, then remove the actual number of bits for this code
+ lookup_value lookup = m_lookup[bits];
+ bitbuf.remove(lookup & 0x1f);
-huffman_error huffman_encode_data(huffman_context *context, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 dlength, UINT32 *actlength);
-huffman_error huffman_encode_data_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 dlength, UINT32 *actlength);
-huffman_error huffman_deltarle_encode_data(huffman_context *context, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 dlength, UINT32 *actlength);
-huffman_error huffman_deltarle_encode_data_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 swidth, UINT32 sheight, UINT32 sstride, UINT32 sxor, UINT8 *dest, UINT32 dlength, UINT32 *actlength);
+ // return the value
+ return lookup >> 5;
+}
-huffman_error huffman_decode_data(huffman_context *context, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength);
-huffman_error huffman_decode_data_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength);
-huffman_error huffman_deltarle_decode_data(huffman_context *context, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength);
-huffman_error huffman_deltarle_decode_data_interleaved(int numcontexts, huffman_context **contexts, const UINT8 *source, UINT32 slength, UINT8 *dest, UINT32 dwidth, UINT32 dheight, UINT32 dstride, UINT32 dxor, UINT32 *actlength);
#endif
diff --git a/src/lib/util/tagmap.h b/src/lib/util/tagmap.h
index 4bbe458f92b..63855b30e07 100644
--- a/src/lib/util/tagmap.h
+++ b/src/lib/util/tagmap.h
@@ -158,7 +158,7 @@ public:
for (entry_t *entry = m_table[fullhash % ARRAY_LENGTH(m_table)]; entry != NULL; entry = entry->next())
if (entry->fullhash() == fullhash && entry->tag() == tag)
return entry->object();
- return (_ElementType)NULL;
+ return _ElementType(NULL);
}
// find by tag without checking anything but the hash