Rendering Engine 0.2.0
Modular Graphics Rendering Engine | v0.2.0
rendering_engine::ModelCache Class Reference

Manages loading, caching, and GPU residency of all model/mesh resources. More...

#include <model_cache.hpp>

Inherits rendering_engine::IRendererObserver.

Public Member Functions

 ModelCache (IRenderer *renderer)
 Construct a ModelCache and register it with the given renderer.
 ~ModelCache ()
 Destructor. Releases all resources and unregisters observer.
void LoadModelsFromFolder (std::string pathToFolder)
 Load all models found in the specified folder into RAM.
void LoadModelsFromPackage ()
 Load all models from the packed asset container.
void CreateQuad2D ()
 Creates a built-in 2D quad mesh.
std::string UploadModelToRAM (std::string path)
 Load a single model from file into RAM.
std::string UploadModelToRAM (std::string fileName, std::vector< uint8_t > const &fileBytes)
 Load a single model into RAM from a raw file buffer.
void UploadModelToGPU (std::string filename)
 Upload a cached model's mesh data to the GPU.
void ReleaseModelFromGPU (std::string filename)
 Release a model's mesh data from GPU memory.
void ReleaseAllFromGPU ()
 Release all model mesh data from GPU memory.
void ReleaseAll ()
 Remove all models from both GPU and RAM, clearing the cache.
std::shared_ptr< MeshDataGpu > GetMeshResources (std::string filename)
 Get a shared pointer to the MeshDataGpu for a model.
IMeshRenderResources * GetMeshRenderResources (std::string filename)
 Get the IMeshRenderResources interface for a model's GPU resources.
size_t GetSizeInRAM () const
 Get the total size (in bytes) of all models loaded in RAM.
size_t GetSizeInGPU () const
 Get the total size (in bytes) of all models currently resident on GPU.
Public Member Functions inherited from rendering_engine::IRendererObserver
virtual ~IRendererObserver ()=default
 Virtual destructor.

Protected Member Functions

void OnRenderResourcesRelease () override
 Renderer callback: release all GPU resources (used during device loss/reset).
void OnRenderResourcesRebuild () override
 Renderer callback: re-upload or recreate all GPU resources (used after device reset/rebuild).

Protected Attributes

IRenderer * mRenderer
std::unordered_map< std::string, std::shared_ptr< MeshDataGpu > > mModels
size_t mTotalSizeRAM
size_t mTotalSizeGPU

Detailed Description

Manages loading, caching, and GPU residency of all model/mesh resources.

Responsible for importing model files into RAM, uploading mesh data to GPU memory, and managing the lifetime and statistics of all managed models in the rendering engine. Also observes renderer resource lifecycle events (release/rebuild).

Definition at line 27 of file model_cache.hpp.
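A typical lifecycle, sketched below, is: construct the cache with the renderer, import a folder of models into RAM, upload the models needed for the current scene to the GPU, query their render resources for drawing, and release GPU memory when done. The renderer pointer, folder path, and the "cube" cache key are illustrative assumptions for this sketch, not values defined by the class.

    // Minimal usage sketch ('renderer' is an existing IRenderer*; paths and keys are placeholders).
    rendering_engine::ModelCache cache(renderer);
    cache.LoadModelsFromFolder("assets/models");   // import supported files into RAM
    cache.UploadModelToGPU("cube");                // make one cached model GPU-resident
    if (auto* resources = cache.GetMeshRenderResources("cube"))
    {
        // hand 'resources' to the draw submission code
    }
    cache.ReleaseAllFromGPU();                     // free GPU memory, RAM copies stay cached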

Constructor & Destructor Documentation

◆ ModelCache()

rendering_engine::ModelCache::ModelCache ( IRenderer * renderer)

Construct a ModelCache and register it with the given renderer.

Parameters
renderer: Pointer to the owning IRenderer implementation.

Definition at line 10 of file model_cache.cpp.

11 :
12 mRenderer(renderer),
13 mTotalSizeRAM(0),
14 mTotalSizeGPU(0)
15{
16 mRenderer->RegisterObserver(this);
17}

◆ ~ModelCache()

rendering_engine::ModelCache::~ModelCache ( )

Destructor. Releases all resources and unregisters observer.

Definition at line 19 of file model_cache.cpp.

20{
21 mRenderer->UnregisterObserver(this);
22}

Member Function Documentation

◆ CreateQuad2D()

void rendering_engine::ModelCache::CreateQuad2D ( )

Creates a built-in 2D quad mesh.

This helper is used by 2D systems (e.g., UI, sprites) to create a reusable quad mesh without external model files.
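For example, a UI or sprite system could create the quad once and make it GPU-resident under its built-in cache key "Quad2D" (a sketch; 'cache' is assumed to be an existing ModelCache):

    cache.CreateQuad2D();                                          // cached under the key "Quad2D"
    cache.UploadModelToGPU("Quad2D");                              // upload vertex/index data to the GPU
    auto* quadResources = cache.GetMeshRenderResources("Quad2D");  // nullptr if not GPU-resident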

Definition at line 67 of file model_cache.cpp.

68{
69 mModels["Quad2D"] = std::make_shared<MeshDataGpu>(mRenderer);
70 mModels["Quad2D"]->CreateQuad2D();
71
72 const size_t sizeVertices = mModels.at("Quad2D")->GetCpuVertexBufferSize();
73 mTotalSizeRAM += sizeVertices;
74 const size_t sizeIndices = mModels.at("Quad2D")->GetCpuIndexBufferSize();
75 mTotalSizeRAM += sizeIndices;
76}

◆ GetMeshRenderResources()

IMeshRenderResources * rendering_engine::ModelCache::GetMeshRenderResources ( std::string filename)

Get the IMeshRenderResources interface for a model's GPU resources.

Parameters
filename: The model's cache key (filename without extension).
Returns
Pointer to IMeshRenderResources, or nullptr if not found or not on GPU.
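Because the result is nullptr when the model is unknown or not GPU-resident, callers should check it before recording draw work (sketch; 'cache' and the "cube" key are assumptions):

    if (IMeshRenderResources* resources = cache.GetMeshRenderResources("cube"))
    {
        // safe to bind and draw with 'resources'
    }
    else
    {
        cache.UploadModelToGPU("cube");  // request residency, e.g. retry next frame
    }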

Definition at line 190 of file model_cache.cpp.

191{
192 if (auto search = mModels.find(filename); search == mModels.end())
193 {
194 return nullptr;
195 }
196
197 return mModels.at(filename)->GetMeshRenderResources();
198}

◆ GetMeshResources()

std::shared_ptr< MeshDataGpu > rendering_engine::ModelCache::GetMeshResources ( std::string filename)

Get a shared pointer to the MeshDataGpu for a model.

Parameters
filename: The model's cache key (filename without extension).
Returns
Shared pointer to MeshDataGpu, or nullptr if not found.

Definition at line 180 of file model_cache.cpp.

181{
182 auto search = mModels.find(filename);
183 if (search == mModels.end())
184 {
185 return nullptr;
186 }
187 return search->second;
188}

◆ GetSizeInGPU()

size_t rendering_engine::ModelCache::GetSizeInGPU ( ) const
inline

Get the total size (in bytes) of all models currently resident on GPU.

Returns
Total GPU usage in bytes.

Definition at line 205 of file model_cache.cpp.

206{
207 return mTotalSizeGPU;
208}

◆ GetSizeInRAM()

size_t rendering_engine::ModelCache::GetSizeInRAM ( ) const
inline

Get the total size (in bytes) of all models loaded in RAM.

Returns
Total RAM usage in bytes.
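Together with GetSizeInGPU(), this is useful for simple memory telemetry, for example (sketch; requires <iostream> and an existing 'cache'):

    std::cout << "ModelCache: "
              << cache.GetSizeInRAM() / (1024.0 * 1024.0) << " MiB in RAM, "
              << cache.GetSizeInGPU() / (1024.0 * 1024.0) << " MiB on GPU\n";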

Definition at line 200 of file model_cache.cpp.

201{
202 return mTotalSizeRAM;
203}

◆ LoadModelsFromFolder()

void rendering_engine::ModelCache::LoadModelsFromFolder ( std::string pathToFolder)

Load all models found in the specified folder into RAM.

Parameters
pathToFolder: Path to the directory containing model files.

Definition at line 24 of file model_cache.cpp.

25{
26 // 1. Check that the path exists and is a directory.
27 boost::filesystem::path pathToDirectory = boost::filesystem::path(pathToFolder);
28 const bool isValidFolderPath = boost::filesystem::exists(pathToDirectory) && boost::filesystem::is_directory(pathToDirectory);
29 if (!isValidFolderPath)
30 {
31 return;
32 }
33 // 2. Iterate over the files in the folder;
34 // unsupported extensions are rejected inside UploadModelToRAM().
35 for (boost::filesystem::directory_entry& x : boost::filesystem::directory_iterator(pathToDirectory))
36 {
37 (void)UploadModelToRAM(x.path().string());
38 }
39}

◆ LoadModelsFromPackage()

void rendering_engine::ModelCache::LoadModelsFromPackage ( )

Load all models from the packed asset container.

This function behaves similarly to LoadModelsFromFolder(), but instead retrieves model files from the packed asset system created by the Packaging Tool. Each packed file is read into memory and processed via UploadModelToRAM(std::string, const std::vector<uint8_t>&).

Only file entries located under the virtual folder "Models/" are considered.

The resulting MeshDataGpu objects remain cached and ready for GPU upload.
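Typical startup code for a packaged build then looks like the following sketch, where "cube" stands for a packed entry "Models/cube.fbx" and is purely illustrative:

    cache.LoadModelsFromPackage();   // read every "Models/" entry into RAM
    cache.UploadModelToGPU("cube");  // upload individual models on demand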

Definition at line 41 of file model_cache.cpp.

42{
43 const auto& entries = Utility::GetPackEntries();
44
45 std::string folderEntry = { "Models/" };
46 for (auto& entry : entries)
47 {
48 const std::string& virtualPath = entry.first;
49 if (virtualPath.rfind(folderEntry, 0) == 0) // starts with Models/
50 {
51 std::string modelName = virtualPath.substr(folderEntry.size());
52
53 std::vector<uint8_t> binaryFileData = Utility::ReadPackedFile(virtualPath);
54 if (binaryFileData.empty())
55 {
56 std::cerr << "[ModelCache] Could not read packed model: "
57 << virtualPath << std::endl;
58 continue;
59 }
60
61 // Upload to RAM with modelName + binaryFileData
62 (void)UploadModelToRAM(modelName, binaryFileData);
63 }
64 }
65}

◆ OnRenderResourcesRebuild()

void rendering_engine::ModelCache::OnRenderResourcesRebuild ( )
override protected virtual

Renderer callback: re-upload or recreate all GPU resources (used after device reset/rebuild).

This method will be called after the device or swapchain is recreated, allowing the observer to re-upload or recreate all necessary resources for rendering.

Implements rendering_engine::IRendererObserver.

Definition at line 215 of file model_cache.cpp.

216{
217 for (auto& model : mModels)
218 {
219 model.second->UploadToGPU();
220 size_t sizeVertices = model.second->GetGpuVertexBufferSize();
221 mTotalSizeGPU += sizeVertices;
222 size_t sizeIndices = model.second->GetGpuIndexBufferSize();
223 mTotalSizeGPU += sizeIndices;
224 }
225}

◆ OnRenderResourcesRelease()

void rendering_engine::ModelCache::OnRenderResourcesRelease ( )
override protected virtual

Renderer callback: release all GPU resources (used during device loss/reset).

This method will be called before any device or swapchain is destroyed, allowing the observer to safely release all handles and deallocate any GPU memory.

Implements rendering_engine::IRendererObserver.

Definition at line 210 of file model_cache.cpp.

211{
212 ReleaseAllFromGPU();
213}

◆ ReleaseAll()

void rendering_engine::ModelCache::ReleaseAll ( )

Remove all models from both GPU and RAM, clearing the cache.

Definition at line 173 of file model_cache.cpp.

174{
175 mModels.clear();
176 mTotalSizeRAM = 0;
177 mTotalSizeGPU = 0;
178}

◆ ReleaseAllFromGPU()

void rendering_engine::ModelCache::ReleaseAllFromGPU ( )

Release all model mesh data from GPU memory.

Definition at line 164 of file model_cache.cpp.

165{
166 for (auto& model : mModels)
167 {
168 model.second->ReleaseFromGPU();
169 }
170 mTotalSizeGPU = 0;
171}

◆ ReleaseModelFromGPU()

void rendering_engine::ModelCache::ReleaseModelFromGPU ( std::string filename)

Release a model's mesh data from GPU memory.

Parameters
filename: The model's cache key (filename without extension).
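Releasing only frees the GPU buffers; the CPU-side copy stays in the cache, so the model can be re-uploaded later without reloading the file (sketch; the "cube" key is an assumption):

    cache.ReleaseModelFromGPU("cube");  // free GPU buffers, keep the RAM copy
    // ... later, when the model is needed again ...
    cache.UploadModelToGPU("cube");     // re-upload from the cached RAM data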

Definition at line 148 of file model_cache.cpp.

149{
150 if (auto search = mModels.find(filename); search == mModels.end())
151 {
152 return;
153 }
154
155 auto& model = mModels[filename];
156 size_t sizeVertices = model->GetGpuVertexBufferSize();
157 size_t sizeIndices = model->GetGpuIndexBufferSize();
158 model->ReleaseFromGPU();
159
160 mTotalSizeGPU -= sizeVertices;
161 mTotalSizeGPU -= sizeIndices;
162}

◆ UploadModelToGPU()

void rendering_engine::ModelCache::UploadModelToGPU ( std::string filename)

Upload a cached model's mesh data to the GPU.

Parameters
filename: The model's cache key (filename without extension).

Definition at line 129 of file model_cache.cpp.

130{
131 // If the model is not loaded into RAM yet, skip the GPU upload.
132 if (auto search = mModels.find(filename); search == mModels.end())
133 {
134 return;
135 }
136 if (mModels[filename]->IsOnGPU())
137 {
138 return;
139 }
140
141 mModels[filename]->UploadToGPU();
142 size_t sizeVertices = mModels[filename]->GetGpuVertexBufferSize();
143 mTotalSizeGPU += sizeVertices;
144 size_t sizeIndices = mModels[filename]->GetGpuIndexBufferSize();
145 mTotalSizeGPU += sizeIndices;
146}

◆ UploadModelToRAM() [1/2]

std::string rendering_engine::ModelCache::UploadModelToRAM ( std::string fileName,
std::vector< uint8_t > const & fileBytes )

Load a single model into RAM from a raw file buffer.

This overload is used when the model originates from a packed asset archive or any virtual filesystem source rather than the OS filesystem.

Parameters
fileName: File name or relative virtual path (e.g. "Models/cube.fbx"); its stem (name without directory and extension) becomes the cache key.
fileBytes: Raw contents of the model file.
Returns
The cache key (file stem) on success, or an empty string if the model was already cached.
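A minimal sketch of feeding this overload from a manually read file (the path is illustrative; the key "cube" results from taking the stem of "cube.fbx"):

    #include <cstdint>
    #include <fstream>
    #include <iterator>
    #include <vector>

    std::ifstream file("assets/models/cube.fbx", std::ios::binary);
    std::vector<uint8_t> bytes((std::istreambuf_iterator<char>(file)),
                               std::istreambuf_iterator<char>());
    std::string key = cache.UploadModelToRAM("cube.fbx", bytes);  // key == "cube" on success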

Definition at line 109 of file model_cache.cpp.

110{
111 auto modelName = boost::filesystem::path(fileName).stem().string();
112
113 // If the model is already loaded into RAM, do not add it again.
114 if (auto search = mModels.find(modelName); search != mModels.end())
115 {
116 return std::string{};
117 }
118
119 mModels[modelName] = std::make_shared<MeshDataGpu>(fileBytes, mRenderer);
120
121 const size_t sizeVertices = mModels.at(modelName)->GetCpuVertexBufferSize();
122 mTotalSizeRAM += sizeVertices;
123 const size_t sizeIndices = mModels.at(modelName)->GetCpuIndexBufferSize();
124 mTotalSizeRAM += sizeIndices;
125
126 return modelName;
127}

◆ UploadModelToRAM() [2/2]

std::string rendering_engine::ModelCache::UploadModelToRAM ( std::string path)

Load a single model from file into RAM.

Parameters
path: Path to the model file.
Returns
The file stem used as the cache key, or an empty string if the file does not exist, has an unsupported extension (only ".fbx" is accepted), or is already cached.
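The returned key is what the other member functions expect, so it is convenient to capture it (sketch; the path is illustrative):

    std::string key = cache.UploadModelToRAM("assets/models/cube.fbx");
    if (!key.empty())            // "cube" unless the file was rejected or already cached
    {
        cache.UploadModelToGPU(key);
    }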

Definition at line 78 of file model_cache.cpp.

79{
80 auto filePath = boost::filesystem::path(path);
81 if (!boost::filesystem::is_regular_file(filePath))
82 {
83 return std::string{};
84 }
85
86 const std::string ext = filePath.extension().string();
87 const bool isExtensionSupported = (ext == ".fbx");
88 if (!isExtensionSupported)
89 {
90 return std::string{};
91 }
92
93 std::string filename = filePath.stem().string();
94 // If the model is already loaded into RAM, do not add it again.
95 if (auto search = mModels.find(filename); search != mModels.end())
96 {
97 return std::string{};
98 }
99 mModels[filename] = std::make_shared<MeshDataGpu>(filePath.string(), mRenderer);
100
101 const size_t sizeVertices = mModels.at(filename)->GetCpuVertexBufferSize();
102 mTotalSizeRAM += sizeVertices;
103 const size_t sizeIndices = mModels.at(filename)->GetCpuIndexBufferSize();
104 mTotalSizeRAM += sizeIndices;
105
106 return filename;
107}

Member Data Documentation

◆ mModels

std::unordered_map<std::string, std::shared_ptr<MeshDataGpu> > rendering_engine::ModelCache::mModels
protected

Definition at line 150 of file model_cache.hpp.

◆ mRenderer

IRenderer* rendering_engine::ModelCache::mRenderer
protected

Definition at line 149 of file model_cache.hpp.

◆ mTotalSizeGPU

size_t rendering_engine::ModelCache::mTotalSizeGPU
protected

Definition at line 153 of file model_cache.hpp.

◆ mTotalSizeRAM

size_t rendering_engine::ModelCache::mTotalSizeRAM
protected

Definition at line 152 of file model_cache.hpp.


The documentation for this class was generated from the following files:

model_cache.hpp
model_cache.cpp