Rendering Engine 0.2.9
Modular Graphics Rendering Engine | v0.2.9
rendering_engine::ModelCache Class Reference

Manages loading, caching, and GPU residency of all model/mesh resources. More...

#include <model_cache.hpp>

Inherits rendering_engine::IRendererObserver.

Public Member Functions

 ModelCache (IRenderer *renderer)
 Construct a ModelCache and register it with the given renderer. More...
 
 ~ModelCache ()
 Destructor. Releases all resources and unregisters observer. More...
 
void LoadModelsFromFolder (std::string pathToFolder)
 Load all models found in the specified folder into RAM. More...
 
void LoadModelsFromPackage ()
 Load all models from the packed asset container. More...
 
void CreateQuad2D ()
 Creates a built-in 2D quad mesh. More...
 
void LoadCustomMesh (std::string meshName, std::vector< glm::vec2 > positions2D, std::vector< glm::vec2 > texCoords, std::vector< glm::vec4 > colors, std::vector< std::uint32_t > indices)
 Create or replace a custom 2D mesh from raw vertex attributes. More...
 
std::string UploadModelToRAM (std::string path)
 Load a single model from file into RAM. More...
 
std::string UploadModelToRAM (std::string fileName, std::vector< uint8_t > const &fileBytes)
 Load a single model into RAM from a raw file buffer. More...
 
void UploadModelToGPU (std::string filename)
 Upload a cached model's mesh data to the GPU. More...
 
void ReleaseModelFromGPU (std::string filename)
 Release a model's mesh data from GPU memory. More...
 
void ReleaseAllFromGPU ()
 Release all model mesh data from GPU memory. More...
 
void ReleaseAll ()
 Remove all models from both GPU and RAM, clearing the cache. More...
 
std::shared_ptr< MeshDataGpu > GetMeshResources (std::string filename)
 Get a shared pointer to the MeshDataGpu for a model. More...
 
IMeshRenderResources * GetMeshRenderResources (std::string filename)
 Get the IMeshRenderResources interface for a model's GPU resources. More...
 
size_t GetSizeInRAM () const
 Get the total size (in bytes) of all models loaded in RAM. More...
 
size_t GetSizeInGPU () const
 Get the total size (in bytes) of all models currently resident on GPU. More...
 
- Public Member Functions inherited from rendering_engine::IRendererObserver
virtual void OnRenderResourcesRelease ()=0
 Renderer callback: release all GPU resources (used during device loss/reset). More...
 
virtual void OnRenderResourcesRebuild ()=0
 Renderer callback: re-upload or recreate all GPU resources (used after device reset/rebuild). More...
 
virtual ~IRendererObserver ()=default
 Virtual destructor. More...
 

Protected Member Functions

void OnRenderResourcesRelease () override
 Renderer callback: release all GPU resources (used during device loss/reset). More...
 
void OnRenderResourcesRebuild () override
 Renderer callback: re-upload or recreate all GPU resources (used after device reset/rebuild). More...
 

Protected Attributes

IRenderer * mRenderer
 
std::unordered_map< std::string, std::shared_ptr< MeshDataGpu > > mModels
 
size_t mTotalSizeRAM
 
size_t mTotalSizeGPU
 

Detailed Description

Manages loading, caching, and GPU residency of all model/mesh resources.

Responsible for importing model files into RAM, uploading mesh data to GPU memory, and managing the lifetime and statistics of all managed models in the rendering engine. Also observes renderer resource lifecycle events (release/rebuild).

Definition at line 29 of file model_cache.hpp.

Constructor & Destructor Documentation

◆ ModelCache()

rendering_engine::ModelCache::ModelCache ( IRenderer * renderer)

Construct a ModelCache and register it with the given renderer.

Parameters
renderer  Pointer to the owning IRenderer implementation.

Definition at line 13 of file model_cache.cpp.

14 :
15 mRenderer(renderer),
16 mTotalSizeRAM(0),
17 mTotalSizeGPU(0)
18{
19 LOG_DEBUG("ModelCache created.");
20 mRenderer->RegisterObserver(this);
21}

◆ ~ModelCache()

rendering_engine::ModelCache::~ModelCache ( )

Destructor. Releases all resources and unregisters observer.

Definition at line 23 of file model_cache.cpp.

24{
25 LOG_DEBUG("ModelCache destroyed.");
26 mRenderer->UnregisterObserver(this);
27}

Member Function Documentation

◆ CreateQuad2D()

void rendering_engine::ModelCache::CreateQuad2D ( )

Creates a built-in 2D quad mesh.

This helper is used by 2D systems (e.g., UI, sprites) to create a reusable quad mesh without external model files.

Definition at line 90 of file model_cache.cpp.

91{
92 mModels["Quad2D"] = std::make_shared<MeshDataGpu>(mRenderer);
93 mModels["Quad2D"]->CreateQuad2D();
94
95 const size_t sizeVertices = mModels.at("Quad2D")->GetCpuVertexBufferSize();
96 mTotalSizeRAM += sizeVertices;
97 const size_t sizeIndices = mModels.at("Quad2D")->GetCpuIndexBufferSize();
98 mTotalSizeRAM += sizeIndices;
99}

◆ GetMeshRenderResources()

IMeshRenderResources * rendering_engine::ModelCache::GetMeshRenderResources ( std::string  filename)

Get the IMeshRenderResources interface for a model's GPU resources.

Parameters
filename  The model's cache key (filename without extension).
Returns
Pointer to IMeshRenderResources, or nullptr if not found or not on GPU.

Definition at line 269 of file model_cache.cpp.

270{
271 if (auto search = mModels.find(filename); search == mModels.end())
272 {
273 return nullptr;
274 }
275
276 return mModels.at(filename)->GetMeshRenderResources();
277}

◆ GetMeshResources()

std::shared_ptr< MeshDataGpu > rendering_engine::ModelCache::GetMeshResources ( std::string  filename)

Get a shared pointer to the MeshDataGpu for a model.

Parameters
filename  The model's cache key (filename without extension).
Returns
Shared pointer to MeshDataGpu, or nullptr if not found.

Definition at line 259 of file model_cache.cpp.

260{
261 auto search = mModels.find(filename);
262 if (search == mModels.end())
263 {
264 return nullptr;
265 }
266 return search->second;
267}

◆ GetSizeInGPU()

size_t rendering_engine::ModelCache::GetSizeInGPU ( ) const
inline

Get the total size (in bytes) of all models currently resident on GPU.

Returns
Total GPU usage in bytes.

Definition at line 284 of file model_cache.cpp.

285{
286 return mTotalSizeGPU;
287}

◆ GetSizeInRAM()

size_t rendering_engine::ModelCache::GetSizeInRAM ( ) const
inline

Get the total size (in bytes) of all models loaded in RAM.

Returns
Total RAM usage in bytes.

Definition at line 279 of file model_cache.cpp.

280{
281 return mTotalSizeRAM;
282}

◆ LoadCustomMesh()

void rendering_engine::ModelCache::LoadCustomMesh ( std::string  meshName,
std::vector< glm::vec2 >  positions2D,
std::vector< glm::vec2 >  texCoords,
std::vector< glm::vec4 >  colors,
std::vector< std::uint32_t >  indices 
)

Create or replace a custom 2D mesh built from raw vertex attributes.

If a mesh with the same name is already cached, its RAM accounting is rolled back and its GPU copy released before the new data is stored.

Parameters
meshName  Cache key for the new mesh.
positions2D  Per-vertex 2D positions.
texCoords  Per-vertex texture coordinates.
colors  Per-vertex RGBA colors.
indices  Triangle index list.

Definition at line 101 of file model_cache.cpp.

102{
103 LOG_DEBUG("Loading custom mesh: " + meshName);
104 auto start = std::chrono::steady_clock::now();
105 auto search = mModels.find(meshName);
106 if (search != mModels.end())
107 {
108 mTotalSizeRAM -= search->second->GetCpuVertexBufferSize();
109 mTotalSizeRAM -= search->second->GetCpuIndexBufferSize();
110 ReleaseModelFromGPU(meshName);
111 }
112
113 mModels[meshName] = std::make_shared<MeshDataGpu>(mRenderer);
114 mModels[meshName]->LoadCustomMesh(positions2D, texCoords, colors, indices);
115
116 const size_t sizeVertices = mModels.at(meshName)->GetCpuVertexBufferSize();
117 mTotalSizeRAM += sizeVertices;
118 const size_t sizeIndices = mModels.at(meshName)->GetCpuIndexBufferSize();
119 mTotalSizeRAM += sizeIndices;
120
121 auto end = std::chrono::steady_clock::now();
122 float ms = std::chrono::duration<float, std::milli>(end - start).count();
123
124 LOG_DEBUG("Custom mesh loaded: " + meshName +
125 " (RAM: " +
126 std::to_string(sizeVertices + sizeIndices) +
127 " bytes, " + std::to_string(ms) + " ms)");
128}

◆ LoadModelsFromFolder()

void rendering_engine::ModelCache::LoadModelsFromFolder ( std::string  pathToFolder)

Load all models found in the specified folder into RAM.

Parameters
pathToFolder  Path to the directory containing model files.

Definition at line 29 of file model_cache.cpp.

30{
31 LOG_INFO("Loading models from folder: " + pathToFolder);
32 auto start = std::chrono::steady_clock::now();
33 // 1. Check that the path exists and is a directory.
34 boost::filesystem::path pathToDirectory = boost::filesystem::path(pathToFolder);
35 const bool isValidFolderPath = boost::filesystem::exists(pathToDirectory) && boost::filesystem::is_directory(pathToDirectory);
36 if (!isValidFolderPath)
37 {
38 return;
39 }
40 // 2. Iterate through the files in the folder; UploadModelToRAM skips
41 // any file whose extension is not supported.
42 for (boost::filesystem::directory_entry& x : boost::filesystem::directory_iterator(pathToDirectory))
43 {
44 (void)UploadModelToRAM(x.path().string());
45 }
46
47 auto end = std::chrono::steady_clock::now();
48 float ms = std::chrono::duration<float, std::milli>(end - start).count();
49
50 LOG_INFO("Loaded " + std::to_string(mModels.size()) +
51 " models from folder in " +
52 std::to_string(ms) + " ms. RAM usage: " +
53 std::to_string(mTotalSizeRAM) + " bytes.");
54}

◆ LoadModelsFromPackage()

void rendering_engine::ModelCache::LoadModelsFromPackage ( )

Load all models from the packed asset container.

This function behaves similarly to LoadModelsFromFolder(), but instead retrieves model files from the packed asset system created by the Packaging Tool. Each packed file is read into memory and processed via UploadModelToRAM(std::string, const std::vector<uint8_t>&).

Only file entries located under the virtual folder "Models/" are considered.

The resulting MeshDataGpu objects remain cached and ready for GPU upload.

Definition at line 56 of file model_cache.cpp.

57{
58 LOG_INFO("Loading models from package.");
59 auto start = std::chrono::steady_clock::now();
60 const auto& entries = Utility::GetPackEntries();
61
62 std::string folderEntry = { "Models/" };
63 for (auto& entry : entries)
64 {
65 const std::string& virtualPath = entry.first;
66 if (virtualPath.rfind(folderEntry, 0) == 0) // starts with Models/
67 {
68 std::string modelName = virtualPath.substr(folderEntry.size());
69
70 std::vector<uint8_t> binaryFileData = Utility::ReadPackedFile(virtualPath);
71 if (binaryFileData.empty())
72 {
73 LOG_ERROR("Failed to read packed model: " + virtualPath);
74 continue;
75 }
76
77 // Upload to RAM with modelName + binaryFileData
78 (void)UploadModelToRAM(modelName, binaryFileData);
79 }
80 }
81 auto end = std::chrono::steady_clock::now();
82 float ms = std::chrono::duration<float, std::milli>(end - start).count();
83
84 LOG_INFO("Loaded " + std::to_string(mModels.size()) +
85 " models from package in " +
86 std::to_string(ms) + " ms. RAM usage: " +
87 std::to_string(mTotalSizeRAM) + " bytes.");
88}

◆ OnRenderResourcesRebuild()

void rendering_engine::ModelCache::OnRenderResourcesRebuild ( )
overrideprotectedvirtual

Renderer callback: re-upload or recreate all GPU resources (used after device reset/rebuild).

This method will be called after the device or swapchain is recreated, allowing the observer to re-upload or recreate all necessary resources for rendering.

Implements rendering_engine::IRendererObserver.

Definition at line 294 of file model_cache.cpp.

295{
296 for (auto& model : mModels)
297 {
298 model.second->UploadToGPU();
299 size_t sizeVertices = model.second->GetGpuVertexBufferSize();
300 mTotalSizeGPU += sizeVertices;
301 size_t sizeIndices = model.second->GetGpuIndexBufferSize();
302 mTotalSizeGPU += sizeIndices;
303 }
304}

◆ OnRenderResourcesRelease()

void rendering_engine::ModelCache::OnRenderResourcesRelease ( )
overrideprotectedvirtual

Renderer callback: release all GPU resources (used during device loss/reset).

This method will be called before any device or swapchain is destroyed, allowing the observer to safely release all handles and deallocate any GPU memory.

Implements rendering_engine::IRendererObserver.

Definition at line 289 of file model_cache.cpp.

290{
291 ReleaseAllFromGPU();
292}

◆ ReleaseAll()

void rendering_engine::ModelCache::ReleaseAll ( )

Remove all models from both GPU and RAM, clearing the cache.

Definition at line 248 of file model_cache.cpp.

249{
250 LOG_INFO("Releasing all models. RAM usage: " +
251 std::to_string(mTotalSizeRAM) +
252 ", GPU usage: " +
253 std::to_string(mTotalSizeGPU));
254 mModels.clear();
255 mTotalSizeRAM = 0;
256 mTotalSizeGPU = 0;
257}

◆ ReleaseAllFromGPU()

void rendering_engine::ModelCache::ReleaseAllFromGPU ( )

Release all model mesh data from GPU memory.

Definition at line 239 of file model_cache.cpp.

240{
241 for (auto& model : mModels)
242 {
243 model.second->ReleaseFromGPU();
244 }
245 mTotalSizeGPU = 0;
246}

◆ ReleaseModelFromGPU()

void rendering_engine::ModelCache::ReleaseModelFromGPU ( std::string  filename)

Release a model's mesh data from GPU memory.

Parameters
filename  The model's cache key (filename without extension).

Definition at line 223 of file model_cache.cpp.

224{
225 if (auto search = mModels.find(filename); search == mModels.end())
226 {
227 return;
228 }
229 LOG_DEBUG("Releasing model from GPU: " + filename);
230 auto& model = mModels[filename];
231 size_t sizeVertices = model->GetGpuVertexBufferSize();
232 size_t sizeIndices = model->GetGpuIndexBufferSize();
233 model->ReleaseFromGPU();
234
235 mTotalSizeGPU -= sizeVertices;
236 mTotalSizeGPU -= sizeIndices;
237}

◆ UploadModelToGPU()

void rendering_engine::ModelCache::UploadModelToGPU ( std::string  filename)

Upload a cached model's mesh data to the GPU.

Parameters
filename  The model's cache key (filename without extension).

Definition at line 196 of file model_cache.cpp.

197{
198 // If the model is not loaded in RAM yet, skip the GPU upload.
199 if (auto search = mModels.find(filename); search == mModels.end())
200 {
201 return;
202 }
203 if (mModels[filename]->IsOnGPU())
204 {
205 return;
206 }
207 LOG_DEBUG("Uploading model to GPU: " + filename);
208 auto start = std::chrono::steady_clock::now();
209 mModels[filename]->UploadToGPU();
210 size_t sizeVertices = mModels[filename]->GetGpuVertexBufferSize();
211 mTotalSizeGPU += sizeVertices;
212 size_t sizeIndices = mModels[filename]->GetGpuIndexBufferSize();
213 mTotalSizeGPU += sizeIndices;
214 auto end = std::chrono::steady_clock::now();
215 float ms = std::chrono::duration<float, std::milli>(end - start).count();
216
217 LOG_DEBUG("Model uploaded to GPU: " + filename +
218 " (GPU: " +
219 std::to_string(sizeVertices + sizeIndices) +
220 " bytes, " + std::to_string(ms) + " ms)");
221}

◆ UploadModelToRAM() [1/2]

std::string rendering_engine::ModelCache::UploadModelToRAM ( std::string  fileName,
std::vector< uint8_t > const &  fileBytes 
)

Load a single model into RAM from a raw file buffer.

This overload is used when the model originates from a packed asset archive or any virtual filesystem source rather than the OS filesystem.

Parameters
fileName  Cache key (typically a relative virtual path, e.g. "Models/cube.fbx").
fileBytes  Raw contents of the model file.
Returns
The cache key on success, or an empty string on failure.

Definition at line 169 of file model_cache.cpp.

170{
171 auto modelName = boost::filesystem::path(fileName).stem().string();
172
173 // If the model is already loaded into RAM, do not add it again.
174 if (auto search = mModels.find(modelName); search != mModels.end())
175 {
176 return std::string{};
177 }
178 LOG_DEBUG("Uploading model to RAM: " + fileName);
179 auto start = std::chrono::steady_clock::now();
180 mModels[modelName] = std::make_shared<MeshDataGpu>(fileBytes, mRenderer);
181
182 const size_t sizeVertices = mModels.at(modelName)->GetCpuVertexBufferSize();
183 mTotalSizeRAM += sizeVertices;
184 const size_t sizeIndices = mModels.at(modelName)->GetCpuIndexBufferSize();
185 mTotalSizeRAM += sizeIndices;
186 auto end = std::chrono::steady_clock::now();
187 float ms = std::chrono::duration<float, std::milli>(end - start).count();
188
189 LOG_INFO("Model loaded to RAM: " + fileName +
190 " (Vertices+Indices: " +
191 std::to_string(sizeVertices + sizeIndices) +
192 " bytes, " + std::to_string(ms) + " ms)");
193 return modelName;
194}

◆ UploadModelToRAM() [2/2]

std::string rendering_engine::ModelCache::UploadModelToRAM ( std::string  path)

Load a single model from file into RAM.

Parameters
path  Path to the model file.
Returns
 The filename used as a cache key, or an empty string on failure.

Definition at line 130 of file model_cache.cpp.

131{
132 auto filePath = boost::filesystem::path(path);
133 if (!boost::filesystem::is_regular_file(filePath))
134 {
135 return std::string{};
136 }
137
138 const std::string ext = filePath.extension().string();
139 const bool isExtensionSupported = (ext == ".fbx");
140 if (!isExtensionSupported)
141 {
142 return std::string{};
143 }
144
145 std::string filename = filePath.stem().string();
146 // If the model is already loaded into RAM, do not add it again.
147 if (auto search = mModels.find(filename); search != mModels.end())
148 {
149 return std::string{};
150 }
151 LOG_DEBUG("Uploading model to RAM: " + filename);
152 auto start = std::chrono::steady_clock::now();
153 mModels[filename] = std::make_shared<MeshDataGpu>(filePath.string(), mRenderer);
154
155 const size_t sizeVertices = mModels.at(filename)->GetCpuVertexBufferSize();
156 mTotalSizeRAM += sizeVertices;
157 const size_t sizeIndices = mModels.at(filename)->GetCpuIndexBufferSize();
158 mTotalSizeRAM += sizeIndices;
159 auto end = std::chrono::steady_clock::now();
160 float ms = std::chrono::duration<float, std::milli>(end - start).count();
161
162 LOG_DEBUG("Model loaded to RAM: " + filename +
163 " (Vertices+Indices: " +
164 std::to_string(sizeVertices + sizeIndices) +
165 " bytes, " + std::to_string(ms) + " ms)");
166 return filename;
167}

Member Data Documentation

◆ mModels

std::unordered_map<std::string, std::shared_ptr<MeshDataGpu> > rendering_engine::ModelCache::mModels
protected

Definition at line 158 of file model_cache.hpp.

◆ mRenderer

IRenderer* rendering_engine::ModelCache::mRenderer
protected

Definition at line 157 of file model_cache.hpp.

◆ mTotalSizeGPU

size_t rendering_engine::ModelCache::mTotalSizeGPU
protected

Definition at line 161 of file model_cache.hpp.

◆ mTotalSizeRAM

size_t rendering_engine::ModelCache::mTotalSizeRAM
protected

Definition at line 160 of file model_cache.hpp.


The documentation for this class was generated from the following files:

model_cache.hpp
model_cache.cpp