Design pattern: C++ Abstraction Layer

I'm trying to write an abstraction layer to let my code run on different platforms. Let me give an example for two classes that I ultimately want to use in the high level code:
class Thread
{
public:
    Thread();
    virtual ~Thread();
    void start();
    void stop();
    virtual void callback() = 0;
};

class Display
{
public:
    static void drawText(const char* text);
};
My trouble is: What design pattern can I use to let low-level code fill in the implementation?
Here are my thoughts and why I don't think they are a good solution:
In theory there's no problem in having the above definition sit in highLevel/thread.h and the platform specific implementation sit in lowLevel/platformA/thread.cpp. This is a low-overhead solution that is resolved at link-time. The only problem is that the low level implementation can't add any member variables or member functions to it. This makes certain things impossible to implement.
A way out would be to add this to the definition (basically the Pimpl-Idiom):
class Thread
{
    // ...
private:
    void* impl_data;
};
Now the low-level code can have its own structs or objects stored in the void pointer. The trouble here is that it's ugly to read and painful to program.
I could make class Thread pure virtual and implement the low level functionality by inheriting from it. The high level code could access the low level implementation by calling a factory function like this:
// thread.h, below the pure virtual class definition
extern "C" void* makeNewThread();
// in lowlevel/platformA/thread.h
class ThreadImpl: public Thread
{ ... };
// in lowLevel/platformA/thread.cpp
extern "C" void* makeNewThread() { return new ThreadImpl(); }
This would be tidy enough but it fails for static classes. My abstraction layer will be used for hardware and IO things and I would really like to be able to have Display::drawText(...) instead of carrying around pointers to a single Display class.
Another option is to use only C-style functions that can be resolved at link time, like this: extern "C" handle_t createThread(). This is easy and great for accessing low-level hardware that exists only once (like a display). But for anything that can exist multiple times (locks, threads, memory management) I have to carry around handles in my high-level code, which is ugly, or have a high-level wrapper class that hides the handles. Either way I have the overhead of having to associate the handles with the respective functionality on both the high-level and the low-level side.
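For illustration, option 4 might look roughly like this; handle_t, createThread and friends are invented names, and the thin wrapper shows where the handle bookkeeping ends up:
// Hypothetical C-style low-level API (names are illustrative only)
extern "C" {
    typedef void* handle_t;
    handle_t createThread(void (*callback)(void*), void* userData);
    void     startThread(handle_t h);
    void     destroyThread(handle_t h);
}

// Thin high-level wrapper that hides the handle
class ThreadWrapper
{
public:
    ThreadWrapper(void (*callback)(void*), void* userData)
        : handle_(createThread(callback, userData)) {}
    ~ThreadWrapper() { destroyThread(handle_); }
    void start() { startThread(handle_); }
private:
    handle_t handle_;   // the association between handle and functionality lives here
};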
My last thought is a hybrid structure. Pure C-style extern "C" functions for low level stuff that is there only once. Factory functions (see 3.) for stuff that can be there multiple times. But I fear that something hybrid will lead to inconsistent, unreadable code.
I'd be very grateful for hints to design patterns that fit my requirements.

You don't need to have a platform-agnostic base class, because your code is only compiled for a single concrete platform at a time.
Just set the include path to, for example, -Iinclude/generic -Iinclude/platform, and have a separate Thread class in each supported platform's include directory.
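For illustration only (the directory names, the nativeHandle_ member and the platform split are all assumptions), one of the per-platform headers could look like this:
// include/platform_win32/Thread.hpp (hypothetical layout)
class Thread
{
public:
    Thread();
    virtual ~Thread();
    void start();
    void stop();
    virtual void callback() = 0;
private:
    void* nativeHandle_;   // e.g. a HANDLE on Windows; members may differ per platform
};
// The build selects exactly one such header via -Iinclude/platform_win32, -Iinclude/platform_posix, ...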
You can (and should) write platform-agnostic tests, compiled & executed by default, which confirm your different platform-specific implementations adhere to the same interface and semantics.
PS. As StoryTeller says, Thread is a bad example since there's already a portable std::thread. I'm assuming there's some other platform-specific detail you really do need to abstract.
PPS. You still need to figure out the correct split between generic (platform-agnostic) code and platform-specific code: there's no magic bullet for deciding what goes where, just a series of tradeoffs between reuse/duplication, simple versus highly-parameterized code, etc.

You seem to want value semantics for your Thread class and wonder where to add the indirection to make it portable. So you use the pimpl idiom, and some conditional compilation.
Depending on where you want the complexity of your build tool to be, and if you want to keep all the low-level code as self-contained as possible, you do the following:
In your high-level header Thread.hpp, you define:
class Thread
{
    class Impl;          // forward declaration only; defined in the platform .cpp
    Impl* pimpl;         // or better yet, some smart pointer
public:
    Thread();
    ~Thread();
    // Other stuff
};
Then, in your thread sources directory, you define files along this fashion:
Thread_PlatformA.cpp
#ifdef PLATFORM_A
#include <Thread.hpp>
Thread::Thread()
{
// Platform A specific code goes here, initialize the pimpl;
}
Thread::~Thread()
{
// Platform A specific code goes here, release the pimpl;
}
#endif
Building Thread.o becomes a simple matter of taking all Thread_*.cpp files in the Thread directory, and having your build system come up with the correct -D option to the compiler.
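Filling in the comments above, the platform file could define the Impl it needs right next to the constructor and destructor; a sketch, where pthreads is only an assumption for platform A:
// Thread_PlatformA.cpp (sketch)
#ifdef PLATFORM_A
#include <Thread.hpp>
#include <pthread.h>   // assuming platform A is pthread-based

class Thread::Impl
{
public:
    pthread_t handle;  // all platform-specific state lives here
};

Thread::Thread() : pimpl(new Impl()) {}
Thread::~Thread() { delete pimpl; }
#endif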

I am curious what it would be like to design this situation like the following (just sticking to the thread example):
// Your generic include level:
// thread.h
class Thread : public
#ifdef PLATFORM_A
    PlatformAThread
#elif defined(PLATFORM_B)
    PlatformBThread
// any more platforms you need in here
#endif
{
public:
    Thread();
    virtual ~Thread();
    void start();
    void stop();
    virtual void callback() = 0;
};
which does not contain anything about implementation, just the interface
Then you have:
// platformA directory
class PlatformAThread { ... };
The result is that when you create your "generic" Thread object you automatically get a platform-dependent base class that sets up its own internals and may offer platform-specific operations; your PlatformAThread class can in turn derive from a generic base class holding common things you might need.
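As a hedged sketch of what such a platform base might contain (pthreads and the platformStart helper are purely illustrative assumptions):
// platformA/PlatformAThread.h (illustrative sketch only)
#include <pthread.h>

class PlatformAThread
{
public:
    PlatformAThread() : handle_() {}
    virtual ~PlatformAThread() {}
protected:
    pthread_t handle_;   // platform-specific state the derived Thread can use directly
    void platformStart(void* (*fn)(void*), void* arg)
    {
        pthread_create(&handle_, nullptr, fn, arg);   // real code would check the return value
    }
};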
You will also need to set up your build system to automatically recognize the platform specific directories.
Also, please note, that I have the tendency to create hierarchies of class inheritances, and some people advise against this: https://en.wikipedia.org/wiki/Composition_over_inheritance

Related

Should pointers be used to reduce header dependencies?

When creating a class that is composed of other classes, is it worthwhile reducing dependencies (and hence compile times) by using pointers rather than values?
For example, the below uses values.
// ThingId.hpp
class ThingId
{
// ...
};
// Thing.hpp
#include "ThingId.hpp"
class Thing
{
public:
Thing(const ThingId& thingId);
private:
ThingId thingId_;
};
// Thing.cpp
#include "Thing.hpp"
Thing::Thing(const ThingId& thingId) :
thingId_(thingId) {}
However, the modified version below uses pointers.
// ThingId.hpp
class ThingId
{
// ...
};
// Thing.hpp
class ThingId;
class Thing
{
public:
Thing(const ThingId& thingId);
private:
ThingId* thingId_;
};
// Thing.cpp
#include "ThingId.hpp"
#include "Thing.hpp"
Thing::Thing(const ThingId& thingId) :
thingId_(new ThingId(thingId)) {}
I've read a post that recommends such an approach, but if you have a large number of pointers, there'll be a large number of new calls, which I imagine would be slow.
This is what most people call the Pimpl idiom (http://c2.com/cgi/wiki?PimplIdiom).
Simple Answer
I highly suspect that you do not have a good use case for this and should avoid it at all cost.
My Experience
The main way that Pimpl has ever been useful for me is to make an implementation detail private. It achieves this because you do not need to include the headers of your dependencies, but can simply forward declare their types.
Example
If you want to provide an SDK to someone which uses some boost library code under the hood, but you want the option of later swapping that out for some other library without causing any problem for the consumer of your SDK, then Pimpl can make a lot of sense.
It also helps create a facade over an implementation so that you have control over the entire exposed public interface, rather than exposing the library you implicitly depend on, and consequently its entire interface that you don't have control over and may change, may expose too much, may be hard to use, etc.
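A minimal sketch of that SDK situation, assuming a made-up Connection class with a Boost-based detail hidden behind the pimpl:
// sdk/Connection.hpp -- public header; no third-party headers leak out
#include <cstddef>
#include <memory>

class Connection
{
public:
    Connection();
    ~Connection();                 // defined in the .cpp, where Impl is a complete type
    void send(const char* data, std::size_t len);
private:
    class Impl;                    // forward declaration only
    std::unique_ptr<Impl> impl_;
};

// sdk/Connection.cpp -- the only file that ever sees the library headers
// #include <boost/asio.hpp>
// class Connection::Impl { /* boost::asio::ip::tcp::socket socket_; ... */ };
// Connection::Connection() : impl_(new Impl()) { /* ... */ }
// Connection::~Connection() {}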
If your program doesn't warrant dynamic allocation, don't introduce it just for the sake of project organisation. That would definitely be a false economy.
What you really want to do is attempt to reduce the number of inter-class dependencies entirely.
However, as long as your coupling is sensible and tree-like, don't worry too much about it. If you're using precompiled headers (and you are, right?) then none of this really matters for compilation times.

Multiplatform class design C++

I have had an idea for a non standard way to handle multiplatform interfaces in C++ and would like to know if that is generally a bad idea and why.
Currently I can only think of one disadvantage: it is very(?) uncommon to do something like that, and maybe it's not obvious how it works at first sight:
I have a class that will be used on different platforms, for example CMat4x4f32 (4x4 matrix class using 32 bit floats).
My platform independent interface looks like this:
class CMat4x4f32
{
public:
//Some methods
#include "Mat4x4f32.platform.inl"
};
Mat4x4f32.platform.inl looks like this:
public:
// Fills the matrix from a DirectX11 SSE matrix
void FromXMMatrix(const XMMatrix& _Matrix);
It just adds a platform depending interface to the matrix class.
The .cpp and the Mat4x4f32.platform.inl are located inside subfolders like "win32" or "posix", so in win32 I implement the FromXMMatrix function. My build system adds these subfolders to the include path depending on the platform I build for.
I could even go a step beyond and implement a .platform.cpp that is located inside win32 and contains only the functions I add to the interface for that platform.
I personally think this is a good idea because it makes writing and using interfaces very easy and clean.
Especially in my Renderer library, which heavily uses the matrix class from my base library, I can now use platform-dependent functions (FromXMMatrix) in the DirectX part as if I didn't have any other platforms to worry about.
In the base library itself I can still write platform independent code using the common matrix interface.
I also have other classes where this is useful: For example an Error class that collects errors and automatically translates them into readable messages and provide some debugging options.
For win32 I can create error instances from bad DirectX and Win32 HResults and on Linux I can create them from returned errno's. In the base library I have a class that manages these errors using the common error interface.
It greatly reduces the code required and prevents having ugly platform-dependent utility classes.
So is this bad or good design and what are the alternatives?
It sounds like you're talking about using the bridge pattern:
http://c2.com/cgi/wiki?BridgePattern
In my personal experience I've developed a lot of platform independent interfaces, with specific implementations using this pattern and it has worked very well, I've often used it with the Pimpl idiom:
http://c2.com/cgi/wiki?PimplIdiom
As in alternatives I've found that this site in general is very good for explaining pros & cons of various patterns and paradigms:
http://c2.com/cgi/wiki
I would recommend you use "pimpl" instead:
#include <memory>

class CMat4x4f32
{
public:
    CMat4x4f32();
    ~CMat4x4f32();   // must be defined where CMat4x4f32Impl is complete
    void Foo();
    void Bar();
private:
    class CMat4x4f32Impl;                      // forward declaration of the nested impl
    std::unique_ptr<CMat4x4f32Impl> m_impl;
};
And then in build-file configs pull in platform-specific .cpp files, where you for instance define your platform-specific functions:
class CMat4x4f32::CMat4x4f32Impl
{
public:
    void Foo() { /* Actual impl */ }
    void Bar() { /* Actual impl */ }
    // Fills the matrix from a DirectX11 SSE matrix
    void FromXMMatrix(const XMMatrix& _Matrix);
};

CMat4x4f32::CMat4x4f32() : m_impl(new CMat4x4f32Impl()) {}
CMat4x4f32::~CMat4x4f32() = default;   // CMat4x4f32Impl is a complete type here
void CMat4x4f32::Foo() { m_impl->Foo(); }
void CMat4x4f32::Bar() { m_impl->Bar(); }
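The DirectX-specific source file pulled in by the build config could then define the platform-only member out of line; a sketch, where XMMatrix stands in for whatever DirectX matrix type is actually used and is assumed to be declared by the headers this file includes:
// win32/Mat4x4f32_DX11.cpp (sketch)
void CMat4x4f32::CMat4x4f32Impl::FromXMMatrix(const XMMatrix& _Matrix)
{
    // copy/convert the 16 floats from the DirectX type into this Impl's own storage
    (void)_Matrix;   // placeholder body only
}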

Wrapper over Graphics APIs

I'm a huge fan of having a game engine that has the abilty to adapt, not just in what it can do, but also in how it can handle new code. Recently, for my graphics subsystem, I wrote a class to be overriden that works like this:
class LowLevelGraphicsInterface {
public:
    virtual bool setRenderTarget(const RenderTarget* renderTarget) = 0;
    virtual bool setStreamSource(const VertexBuffer* vertexBuffer) = 0;
    virtual bool setShader(const Shader* shader) = 0;
    virtual bool draw(void) = 0;
    //etc.
};
My idea was to create a list of functions that are universal among most graphics APIs. Then for DirectX11 I would just create a new child class:
class LGI_DX11 : public LowLevelGraphicsInterface {
virtual bool setRenderTarget(const RenderTarget* renderTarget);
virtual bool setStreamSource(const VertexBuffer* vertexBuffer);
virtual bool setShader(const Shader* shader);
virtual bool draw(void);
//etc.
};
Each of these functions would then interface with DX11 directly. I do realize that there is a layer of indirection here. Are people turned off by this fact?
Is this a widely used method? Is there something else I could/should be doing? There is the option of using the preprocessor but that seems messy to me. Someone also mentioned templates to me. What do you guys think?
If the virtual function calls become a problem, there is a compile time method that removes virtual calls using a small amount of preprocessor and a compiler optimization. One possible implementation is something like this:
Declare your base renderer with pure virtual functions:
class RendererBase {
public:
virtual bool Draw() = 0;
};
Declare a specific implementation:
#include <d3d11.h>
class RendererDX11 : public RendererBase {
public:
bool Draw();
private:
// D3D11 specific data
};
Create a header RendererTypes.h to forward declare your renderer based on the type you want to use with some preprocessor:
#ifdef DX11_RENDERER
class RendererDX11;
typedef RendererDX11 Renderer;
#else
class RendererOGL;
typedef RendererOGL Renderer;
#endif
Also create a header Renderer.h to include appropriate headers for your renderer:
#ifdef DX11_RENDERER
#include "RendererDX11.h"
#else
#include "RendererOGL.h"
#endif
Now everywhere you use your renderer refer to it as the Renderer type, include RendererTypes.h in your header files and Renderer.h in your cpp files.
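As a usage sketch (GameView is an invented client class), headers see only the forward declaration while sources see the full type:
// GameView.h -- headers only need the forward declaration
#include "RendererTypes.h"

class GameView {
public:
    explicit GameView(Renderer& renderer) : renderer_(renderer) {}
    void Render();
private:
    Renderer& renderer_;   // concrete type, so calls can be devirtualized/inlined
};

// GameView.cpp -- sources pull in the full definition
#include "GameView.h"
#include "Renderer.h"

void GameView::Render() {
    renderer_.Draw();      // resolves to RendererDX11::Draw or RendererOGL::Draw at build time
}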
Each of your renderer implementations should be in different projects. Then create different build configurations to compile with whichever renderer implementation you want to use. You don't want to include DirectX code for a Linux configuration for example.
In debug builds, virtual function calls might still be made, but in release builds they are optimized away because you are never making calls through the base class interface. It is only being used to enforce a common signature for your renderer classes at compile time.
While you do need a little bit of preprocessor for this method, it is minimal and doesn't interfere with the readability of your code since it is isolated and limited to some typedefs and includes. The one downside is that you cannot switch renderer implementations at runtime using this method as each implementation will be built to a separate executable. However, there really isn't much need for switching configurations at runtime anyway.
I use the approach with an abstract base class to the render device in my application. Works fine and lets me dynamically choose the renderer to use at runtime. (I use it to switch from DirectX10 to DirectX9 if the former is not supported, i.e. on Windows XP).
I would like to point out that the virtual function call is not the part which costs performance, but the conversion of the argument types involved. To be really generic, the public interface to the renderer uses its own set of parameter types, such as a custom IShader and a custom Matrix3D type. No type declared in the DirectX API is visible to the rest of the application, since, for example, OpenGL would have different matrix types and shader interfaces. The downside of this is that I have to convert all matrix and vector/point types from my custom type to the one the shader uses in the concrete render device implementation. This is far more expensive than the cost of a virtual function call.
If you do the distinction using the preprocessor, you also need to map the different interface types like this. Many are the same between DirectX10 and DirectX11, but not between DirectX and OpenGL.
Edit: See the answer in c++ Having multiple graphics options for an example implementation.
So, I realize that this is an old question, but I can't resist chiming in. Wanting to write code like this is just a side effect of trying to cope with object-oriented indoctrination.
The first question is whether or not you really need to swap out rendering back-ends, or just think it's cool. If an appropriate back-end can be determined at build time for a given platform, then problem solved: use a plain, non-virtual interface with an implementation selected at build time.
If you find that you really do need to swap it out, still use a non-virtual interface, just load the implementations as shared libraries. With this kind of swapping, you will likely want both engine rendering code and some performance intensive game-specific rendering code factored out and swappable. That way, you can use the common, high-level engine rendering interface for things done mostly by the engine, while still having access to back-end specific code to avoid the conversion costs mentioned by PMF.
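A rough POSIX-flavoured sketch of that kind of swapping (dlopen/dlsym; the createRenderer entry point and the library name are assumptions, and Windows would use LoadLibrary/GetProcAddress instead):
// load_renderer.cpp (POSIX sketch)
#include <dlfcn.h>
#include <cstdio>

struct Renderer;                              // opaque to the engine at this level
typedef Renderer* (*CreateRendererFn)();

Renderer* loadRenderer(const char* libPath)   // e.g. "librenderer_gl.so"
{
    void* lib = dlopen(libPath, RTLD_NOW);
    if (!lib) { std::fprintf(stderr, "%s\n", dlerror()); return nullptr; }

    void* sym = dlsym(lib, "createRenderer"); // must be an extern "C" factory in the back-end
    if (!sym) { std::fprintf(stderr, "%s\n", dlerror()); return nullptr; }

    // converting the void* to a function pointer is the usual POSIX idiom
    return reinterpret_cast<CreateRendererFn>(sym)();
}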
Now, it should be said that while swapping with shared libraries introduces indirection, 1) you can easily get that indirection down to less than or roughly equal to the cost of virtual calls, and 2) this high-level indirection is never a performance concern in any substantial game/engine. The main benefit is keeping dead code unloaded (and out of the way) and simplifying APIs and overall project design, increasing readability and comprehension.
Beginners aren't typically aware of this, because there is so much blind OO pushing these days, but this style of "OO first, ask questions never" is not without cost. This kind of design has a taxing code comprehension cost and leads to code (much lower-level than this example) that is inherently slow. Object orientation has its place, certainly, but (in games and other performance intensive applications) the best way to design that I have found is to write applications as minimally OO as possible, only conceding when a problem forces your hand. You will develop an intuition for where to draw the line as you gain more experience.

OOP vs macro problem

I came across this problem via a colleague today. He had a design for a front end system which goes like this:
class LWindow
{
    // Interface for common methods to Windows
};

class LListBox : public LWindow
{
    // Do not override methods in LWindow.
    // Interface for List-specific stuff
};

class LComboBox : public LWindow {}; // and so on
The Window system should work on multiple platforms. Suppose for the moment we target Windows and Linux. For Windows we have an implementation for the interface in LWindow. And we have multiple implementations for all the LListBoxes, LComboBoxes, etc. My reaction was to pass an LWindow*(Implementation object) to the base LWindow class so it can do this:
void LWindow::Move(int x, int y)
{
p_Impl->Move(x, y); //Impl is an LWindow*
}
And, do the same thing for implementation of LListBox and so on
The solution originally given was much different. It boiled down to this:
#define WindowsCommonImpl {//Set of overrides for LWindow methods}
class WinListBox : public LListBox
{
WindowsCommonImpl //The overrides for methods in LWindow will get pasted here.
//LListBox overrides
}
//So on
Now, having read all about macros being evil and good design practices, I immediately was against this scheme. After all, it is code duplication in disguise. But I couldn't convince my colleague of that. And I was surprised that that was the case. So, I pose this question to you. What are the possible problems of the latter method? I'd like practical answers please. I need to convince someone who is very practical (and used to doing this sort of stuff. He mentioned that there's lots of macros in MFC!) that this is bad (and myself). Not teach him aesthetics. Further, is there anything wrong with what I proposed? If so, how do I improve it? Thanks.
EDIT: Please give me some reasons so I can feel good about myself supporting oop :(
Going for bounty. Please ask if you need any clarifications. I want to know arguments for and vs OOP against the macro :)
Your colleague is probably thinking of the MFC message map macros; these are used in important-looking places in every MFC derived class, so I can see where your colleague is coming from. However these are not for implementing interfaces, but rather for details with interacting with the rest of the Windows OS.
Specifically, these macros implement part of Windows' message pump system, where "messages" representing requests for MFC classes to do stuff gets directed to the correct handler functions (e.g. mapping the messages to the handlers). If you have access to visual studio, you'll see that these macros wrap the message map entries in a somewhat-complicated array of structs (that the calling OS code knows how to read), and provide functions to access this map.
As MFC users, the macro system makes this look clean to us. But this works mostly because underlying Windows API is well-specified and won't change much, and most of the macro code is generated by the IDE to avoid typos. If you need to implement something that involves messy declarations then macros might make sense, but so far this doesn't seem to be the case.
Practical concerns that your colleague may be interested in:
duplicated macro calls. Looks like you're going to need to copy the line "WindowsCommonImpl" into each class declaration - assuming the macro expands to some inline functions. If they're only declarations and the implementations go in a separate macro, you'll need to do this in every .cpp file too - and change the class name passed into the macro every time.
longer recompile time. For your solution, if you change something in the LWindow implementation, you probably only need to recompile LWindow.cpp. If you change something in the macro, everything that includes the macro header file needs to be recompiled, which is probably your whole project.
harder to debug. If the error has to do with the logic within the macro, the debugger will probably break to the caller, where you don't see the error right away. You may not even think to check the macro definition because you thought you knew exactly what it did.
So basically your LWindow solution is a better solution, to minimize headaches down the road.
This may not answer your question directly, but I can't help telling you to read up on the Bridge design pattern in GoF. It's meant exactly for that.
"Decouple an abstraction from its implementation so that the two can vary independently."
From what I can understand, you are already on the right path, other than the MACRO stuff.
"My reaction was to pass an LWindow* (implementation object) to the base LWindow class so it can do this:"
LListBox and LComboBox should receive an instance of WindowsCommonImpl.
In the first solution, inheritance is used so that LListBox and LComboBox can use some common methods. However, inheritance is not meant for this.
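A sketch of that composition-based variant (LWindow is re-declared here only to keep the snippet self-contained, and WindowsCommonImpl is an ordinary class rather than a macro):
// Common Win32 behaviour lives in an ordinary class, not a macro
class WindowsCommonImpl
{
public:
    void Move(int x, int y) { /* Win32-specific move */ }
    // ...other operations shared by all Windows controls
};

class LWindow
{
public:
    virtual ~LWindow() {}
    virtual void Move(int x, int y) = 0;
};

class LListBox : public LWindow
{
public:
    explicit LListBox(WindowsCommonImpl* common) : common_(common) {}
    virtual void Move(int x, int y) { common_->Move(x, y); }
private:
    WindowsCommonImpl* common_;   // reuse by composition instead of pasted code
};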
I would agree with you. Solution with WindowsCommonImpl macro is really bad. It is error-prone, hard to extend and very hard to debug. MFC is a good example of how you should not design your windows library. If it looks like MFC, you are really on a wrong way.
So, your solution is obviously better than the macro-based one. Anyway, I wouldn't agree it is good enough. The most significant drawback to me is that you mix interface and implementation. The most practical value of separating interface and implementation is the ability to easily write mock objects for testing purposes.
Anyway, it seems the problem you are trying to solve is how to combine interface inheritance with implementation inheritance in C++. I would suggest using template class for window implementation.
// Window interface
class LWindow
{
};
// ListBox interface (inherits Window interface)
class LListBox : public LWindow
{
};
// Window implementation template
template<class Interface>
class WindowImpl : public Interface
{
};
// Window implementation
typedef WindowImpl<LWindow> Window;
// ListBox implementation
// (inherits both Window implementation and Window interface)
class ListBox : public WindowImpl<LListBox>
{
};
As I remember, the WTL windows library is based on a similar pattern of combining interfaces and implementations. I hope it helps.
Oh man this is confusing.
OK, so L*** is a hierarchy of interfaces, that's fine. Now what are you using the p_Impl for? If you have an interface, why would you include implementation in it?
The macro stuff is of course ugly, plus it's usually impossible to do. The whole point is that you will have different implementations, if you don't, then why create several classes in the first place?
OP seems confused. Here's what to do; it is very complex but it works.
Rule 1: Design the abstractions. If you have an "is-A" relation you must use public virtual inheritance.
struct Window { .. };
struct ListBox : virtual Window { .. };
Rule 2: Make implementations. If you're implementing an abstraction you must use virtual inheritance. You are free to use inheritance to save on duplication.
class WindowImpl : virtual Window { .. };
class BasicListBoxImpl : virtual ListBox, public WindowImpl { .. };
class FancyListBoxImpl : public BasicListBoxImpl { };
Therefore you should read "virtual" to mean "isa" and other inheritance is just saving on rewriting methods.
Rule 3: Try to make sure there is only one useful function in a concrete type: the constructor. This is sometimes hard; you may need some defaults and some set methods to fiddle things. Once the object is set up, cast away the implementation. Ideally you'd do this on construction:
ListBox *p = new FancyListBoxImpl (.....);
Notes: you are not going to call any abstract methods directly on or in an implementation so private inheritance of abstract base is just fine. Your task is exclusively to define these methods, not to use them: that's for the clients of the abstractions only. Implementations of virtual methods from the bases also might just as well be private for the same reason. Inheritance for reuse will probably be public since you might want to use these methods in the derived class or from outside of it after construction to configure your object before casting away the implementation details.
Rule 4: There is a standard implementation for many abstractions, known as delegation, which is the one you were talking about:
struct Abstract { virtual void method()=0; };
struct AbstractImpl_Delegate: virtual Abstract {
Abstract *p;
AbstractImpl_Delegate (Abstract *q) : p(q) {}
void method () { p->method(); }
};
This is a cute implementation since it doesn't require you to know anything about the abstraction or how to implement it... :)
I found that "using the preprocessor #define directive to define constants is not as precise" [src]. Macros are apparently not as precise; I did not even know that...
The classic hidden dangers of the preprocessor look like this:
#define PI_PLUS_ONE (3.14 + 1)
By doing so, you avoid the possibility that an order of operations issue will destroy the meaning of your constant:
x = PI_PLUS_ONE * 5;
Without parentheses, the above would be converted to
x = 3.14 + 1 * 5;
[src]
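For comparison, the usual fix is either to parenthesize the macro (as above) or to sidestep the preprocessor with a typed constant; a tiny illustrative snippet (PI_PLUS_ONE_BAD and kPiPlusOne are made-up names):
#define PI_PLUS_ONE_BAD 3.14 + 1        // unparenthesized macro: pure textual substitution
// x = PI_PLUS_ONE_BAD * 5;             // expands to 3.14 + 1 * 5, i.e. 8.14, not 20.7

const double kPiPlusOne = 3.14 + 1;     // a typed constant has no such surprise
// x = kPiPlusOne * 5;                  // always means (3.14 + 1) * 5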

Design with (pure)virtual C++

First of all I have to mention that I have read many C++ virtual questions on Stack Overflow. I have some knowledge of how they work, but when I start a project and try to design something I never consider or use virtual or pure virtual implementations. Maybe it is because I lack knowledge of how they work, or I don't know how to realize some things with them. I think that's bad, because I'm not fully using object-oriented development.
Maybe someone can advise me how to get used to them?
Check out abstract base classes and interfaces in Java or C# to get ideas on when pure virtuals are useful.
Virtual functions are pretty basic to OO. There are plenty of books out there to help you. Myself, I like Larman's Applying UML and Patterns.
but when I start the project and try to design something I never consider/use virtual or pure virtual implementations.
Here's something you can try:
Figure out the set of classes you use
Do you see some class hierarchies? A Circle is-a Shape sort of relationships?
Isolate behavior
Bubble up/down behavior to form interfaces (base classes) (Code to interfaces and not implementations)
Implement these as virtual functions
The responsibility of defining the exact semantics of the operation(s) rests with the sub-classes.
Create your sub-classes
Implement (override) the virtual functions
But don't force a hierarchy just for the sake of using them. An example from real code I have been working on recently:
class Codec {
public:
virtual GUID Guid() { return GUID_NULL; }
};
class JpegEncoder : public Codec {
public:
virtual GUID Guid() { return GUID_JpegEncoder; }
};
class PngDecoder : public Codec {
public:
virtual GUID Guid() { return GUID_PngDecoder; }
};
I don't have a ton of time ATM, but here is a simple example.
In my job I maintain an application which talks to various hardware devices. Of these devices, many motors are used for various purposes. Now, I don't know if you have done any development with motors and drives, but they are all a bit different, even if they claim to follow a standard like CANopen. Anyway, you need to create some new code when you switch vendors, perhaps your motor or drive was end-of-lifed, etc. On top of that, this code has to maintain compatibility with older devices, and we also have various models of similar devices. So, all in all, you have to deal with many different motors and interfaces.
Now, in the code I use an abstract class, named "iMotor", which contains only pure virtual functions. In the implementation code only the iMotor class is referenced. I create a dll for different types of motors with different implementations, but they all implement the iMotor interface. So, all that I need to do to add/change a motor is create a new implementation and drop that dll in place of the old one. Because the code which uses these motor implementations deals only with the iMotor interface it never needs to change, only the implementation of how each motor does what it does needs to change.
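A stripped-down sketch of what such an interface might look like (the method names and the createMotor factory are invented for illustration):
// iMotor.h -- the only header the application code ever includes
class iMotor
{
public:
    virtual ~iMotor() {}
    virtual bool moveTo(double position) = 0;
    virtual bool stop() = 0;
    virtual double currentPosition() const = 0;
};

// Exported by each vendor-specific dll; the application only ever sees iMotor*
extern "C" iMotor* createMotor();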
If you google for design patterns like the "strategy pattern" and "command pattern" you will find some good uses of interfaces and polymorphism. Besides that, design patterns are always very useful to know.
You don't HAVE to use them but they have their advantages.
Generally they are used as an "interface" between 2 different types of functionality that, code wise, aren't very related.
An example would be handling file loading. A simple file handling class would seem to be perfect. However at a later stage you are asked to shift all your files into a single packaged file while maintaining support for individual files for debug purposes. How do you handle loading here? Obviously things will be handled rather differently because suddenly you can't just open a file. Instead you need to be able to look up the files location and then seek to that location before loading, pretty much, as normal.
The obvious thing to do is implement an abstract base class. Perhaps call it BaseFile. The OpenFile function's handling will differ depending on whether you are using the PackageFile or the DiskFile class. So make it a pure virtual.
Then when you derive the PackageFile and DiskFile classes you provide the appropriate implementation for Opening a file.
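A rough sketch of that hierarchy (the fopen-based DiskFile and the package lookup are only placeholders):
#include <cstdio>

class BaseFile
{
public:
    virtual ~BaseFile() {}
    virtual bool OpenFile(const char* name) = 0;   // differs per storage scheme
};

class DiskFile : public BaseFile
{
public:
    DiskFile() : file_(0) {}
    virtual bool OpenFile(const char* name)
    {
        file_ = std::fopen(name, "rb");            // plain file on disk
        return file_ != 0;
    }
private:
    std::FILE* file_;
};

class PackageFile : public BaseFile
{
public:
    virtual bool OpenFile(const char* name)
    {
        // look the name up in the package index, then seek to its offset (omitted)
        return false;
    }
};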
You can then add something such as
#if !defined( DISK_FILE ) && defined ( _DEBUG )
#define DISK_FILE 1
#elif !defined( DISK_FILE )
#define DISK_FILE 0
#endif
#if DISK_FILE
typedef DiskFile File;
#else
typedef PackageFile File;
#endif
Now you would just use the "File" typedef to do all file handling. Equally, if you don't pre-define DISK_FILE as 0 or 1, a debug build will automatically load from disk; otherwise it will load from the package file.
Of course such a construct still allows you to load from the Package file in debug simply by defining DISK_FILE to be 1 in advance and it also allows you to use disk access in a release build by setting DISK_FILE to 0.