Editor Architecture

Tags: Forge, Architecture
Published: Apr 4, 2022
Forge is a C++17 application that uses OpenGL with GLAD for rendering and GLFW for window management. For the build, I use CMake, an open-source build-system generator that produces platform-specific build files, and I debug with a program called RenderDoc. In my case I was compiling on the Windows Subsystem for Linux (WSL) [1] on Windows 10, so CMake generated the makefile necessary for building the executable. Debugging graphical issues can be problematic because there is no easy way to see what data is flowing through the graphics pipeline. To help alleviate this problem, I found a tool called RenderDoc [2] that lets you analyze the pipeline and inspect the data as it moves from one stage to another.
 
Other than OpenGL, GLAD, and GLFW, I incorporated some third-party libraries to ease the implementation of the editor, since building systems robust enough to support the features I wanted was out of the scope of this work. I also made sure to only use libraries that are well regarded and widely used across the industry, so that my experience building the editor would be consistent with that of the developers who created engines and editors before me. These libraries are Assimp, GLM, dear-imgui, yaml-cpp, and stb, and each serves a utility role within the editor. Assimp stands for Open Asset Import Library and is a library that "Loads 40+ 3D-file-formats into one unified and clean data structure." according to its documentation [3]. I use it to handle loading models into the editor for manipulation, so I did not have to write an importer for each file type or design a unified format of my own. GLM stands for OpenGL Mathematics and is a header-only math library [4] that I use for the matrix and vector calculations necessary for rendering in the editor. Dear-imgui [5] is the library I used to build the UI. It is an immediate-mode GUI library that is used widely across the industry; immediate mode means the UI is redrawn each frame. Dear-imgui handles the logic for the draggable windows as well as the rendering of all the UI elements. Yaml-cpp [6] is a library that reads and writes files in the YAML format [7]; I use YAML as the format for my scene/configuration files to store and load data. Lastly, I use two header files from the stb project, a set of header-only utility files [8]. Specifically, I use stb_image to read in images for things like UI image elements and imported model textures, while stb_image_write is used as part of a screenshot utility built into the editor.
 
OpenGL/GLAD/GLFW. For the core graphics libraries needed to draw to the screen, I am using OpenGL, GLAD, and GLFW. OpenGL is a graphics API developed by the Khronos Group [9]. This API, or application programming interface, is a set of functions that communicate with the graphics hardware; OpenGL translates data on the CPU into commands for the GPU through a series of buffers and shaders. I decided to go with OpenGL and GLFW because of the familiarity I had with them from the Introduction to Computer Graphics course. There has been a push in recent years toward newer graphics APIs such as Vulkan, Metal, and DirectX 12, but at the start of the project they were outside the scope of my knowledge, and OpenGL was suitable to fulfill my goals. In addition, APIs such as Vulkan remove the abstractions over the hardware, so concepts that were previously handled by the API are now handled by the developer. This makes the code more verbose and more difficult to set up, but allows for more configuration and the potential for performance increases. According to the OpenGL wiki, GLAD is "a library that loads pointers to OpenGL functions at runtime, core as well as extensions. This is required to access functions from OpenGL versions above 1.1 on most platforms." [10]. I needed this as I was using OpenGL version 3.3. GLFW is an OpenGL library that provides windowing [11], which is needed to display anything on the screen. The library handles window creation as well as keyboard and even controller input. OpenGL uses the created window as a target, or context, to draw to, and that window is what houses the editor runtime.
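To make the roles of the three libraries concrete, below is a minimal, self-contained sketch of the startup sequence they imply: GLFW creates the window and the OpenGL 3.3 context, GLAD loads the OpenGL function pointers, and OpenGL commands then draw into the window. This is illustrative boilerplate, not Forge's actual window-manager code.
// Minimal sketch: GLFW window + GLAD loader for an OpenGL 3.3 core context.
// Forge wraps this inside its manager classes; this standalone version is
// for illustration only.
#include <glad/glad.h> // must be included before GLFW
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
  if (!glfwInit()) return -1;
  // Request the OpenGL 3.3 core profile used by the editor.
  glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
  glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
  glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

  GLFWwindow* window = glfwCreateWindow(1280, 720, "Forge", nullptr, nullptr);
  if (!window) { glfwTerminate(); return -1; }
  glfwMakeContextCurrent(window);

  // GLAD resolves OpenGL function pointers through GLFW's loader.
  if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) {
    std::fprintf(stderr, "Failed to initialize GLAD\n");
    return -1;
  }

  while (!glfwWindowShouldClose(window)) {
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glfwSwapBuffers(window); // present the frame to the window
    glfwPollEvents();        // pump keyboard/mouse/controller input
  }
  glfwTerminate();
  return 0;
}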
Figure 1. Architecture of the Forge Editor
 
Now that I have discussed some of the technology behind the project, I will outline the structure of the components that make it up. As shown in Figure 1, there are a few main classes that the editor leverages. I was aiming to have a simple driver program that could instantiate one class to get the editor running. To do this, I created a set of manager classes that handle the state of various resources, which are then modified through the UI. The Editor class relies on these manager classes to instantiate the objects it needs for the runtime of the engine. The manager classes follow a façade pattern [12], a system design pattern that hides implementation details from the caller. This is especially helpful when working with OpenGL and other graphics APIs, as it is not the editor's responsibility to manage the resources needed to instantiate the window or the rendering context. From the editor's perspective, the only thing it needs to do is create a window; the window manager then knows how to create the window with the necessary resources. The main classes that I want to touch on in this section are the Editor, UI Layer (ImGui Layer), Shaders, Renderer, Scene, Serializer, and Model. These classes do most of the heavy lifting. Outside of them, there is a set of smaller classes that model the data manipulated in the larger classes.
 
Editor
This is the main driver of the program and all that the main function needs to run it. The editor's responsibility is to instantiate the runtime of the application and to create the resources necessary for scene creation. As stated above in the "Project Structure" section, the Editor does not create all the resources itself, but rather delegates the work to a set of managers that have the details necessary for creating them. The managers are also stateful, so they can maintain the state of the resources they have allocated. This means the editor does not own a specific resource, but owns the ability to delegate work to the manager responsible for creating and maintaining it.
#ifndef FORGE_EDITOR_H
#define FORGE_EDITOR_H
// includes go here
class Editor {
 public:
  UIManager uiManager;
  Shader meshShader = Shader("../shaders/vert.glsl", "../shaders/frag.glsl");
  Shader lightShader = Shader("../shaders/light/vert.glsl", "../shaders/light/frag.glsl");
  Shader screenShader = Shader("../shaders/framebuffer/vert.glsl", "../shaders/framebuffer/frag.glsl");
  Skybox skybox;
  ScreenTexture screenTexture;
  Scene scene;
  Framebuffer framebuffer;

  Editor();
  void calculateFrame();
  void renderScene();
  void renderSceneDepth(std::shared_ptr<Light> light);
  void drawToQuad();
  void setupUI();
  void updateUI();
  void run();
  void destroy();
};
#endif // FORGE_EDITOR_H
Figure 2: The header file for the Editor class 
This is useful as it keeps the job of the editor to three very succinct tasks: initialize resources, use resources, and destroy resources. In Figure 2, I show the setup of these manager classes in the editor. The UIManager does the work of setting up the window, but the editor is not responsible for the instantiation, as can be seen in the UIManager constructor shown in Figure 3. Specifically, the first line of the constructor creates the context from the window property attached to the UIManager. I also want to note that this context is not the UIManager's responsibility, but that of the GuiLayer, where dear-imgui creates the context that the tool windows are drawn into.
class UIManager {
 public:
  Window window;
  ModalManager modalManager;
  Settings settings;
  std::map<std::string, UITexture> uiTextures;

  UIManager();

 private:
  void loadDefaultTextures();
};

UIManager::UIManager() {
  GuiLayer::createContext(this->window.windowInstance);
  UIManager::loadDefaultTextures();
}

void UIManager::loadDefaultTextures() {
  uiTextures.insert(std::pair<std::string, UITexture>("folder",
      UITexture("../assets/editor/folder.png")));
  uiTextures.insert(std::pair<std::string, UITexture>("file",
      UITexture("../assets/editor/file.png")));
  uiTextures.insert(std::pair<std::string, UITexture>("backArrow",
      UITexture("../assets/editor/back_arrow.png")));
  uiTextures.insert(std::pair<std::string, UITexture>("yml",
      UITexture("../assets/editor/yml.png")));
  uiTextures.insert(std::pair<std::string, UITexture>("default",
      UITexture("../assets/editor/no_texture.jpg")));
}
Figure 3: An example of how the UIManager works
With the editor not worried about the details of the implementation, the underlying implementation can also change easily without the Editor needing to be rewritten. This means that I could add support for more graphics APIs in the future by changing the implementation of the managers, while the editor stays consistent throughout.
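As a concrete illustration of the "simple driver program" goal, the entire entry point can reduce to something like the sketch below. The exact contents of Forge's main function are not shown in this post, so treat this as an assumption about its shape rather than the actual code.
// Hypothetical driver: the whole program boils down to constructing the
// Editor (whose managers allocate the window, UI context, shaders, etc.)
// and handing control to its run loop.
#include "Editor.h"

int main() {
  Editor editor;    // managers set up all resources via their facades
  editor.run();     // per-frame loop: calculateFrame, renderScene, updateUI
  editor.destroy(); // release GPU and window resources
  return 0;
}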
 
UI Layer  For the UI I decided to use dear-imgui after consulting a few folks in the industry and finding out that it is widely used across multiple products, from Nvidia to Ubisoft, Rockstar Games, and many more. Dear-imgui is an immediate-mode graphical user interface (GUI) library; the immediate-mode paradigm means that a set of draw calls is sent every frame to render the UI. As for the implementation, with immediate mode the UI is rendered every frame with the current data. Ideally, I wanted to keep the UI separate from the data so that it is not the UI's responsibility to manage the lifecycle of the components. It was challenging to separate the data with an immediate-mode GUI, as there is no concept of state as there would be in a retained-mode GUI. I ended up with more of a two-way binding between the data in the manager and the data being edited in the UI. With the UI being updated each frame under the immediate-mode paradigm, there was no worry about the data being out of sync: the UI lets the user edit the properties of the data, and the updated data is passed back to the UI for the process to repeat itself. Another notable implementation detail of the UI Layer is that it is a namespace of functions instead of a class. This allowed me to just create a new function per UI element that I needed. From the editor I am then able to send the requisite data to the UI function that needs it for rendering. This meant that the editor only needed to initialize the UI context before it could start issuing draw commands, as sketched below.
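The function name and widget layout below are hypothetical, but they sketch the pattern just described: a free function in a UI namespace receives a reference to manager-owned data and both displays and mutates it each frame, which is where the two-way binding comes from.
// Sketch of the namespace-of-functions UI pattern with dear-imgui.
// UI::drawModelProperties is hypothetical; Model stands in for whatever
// struct the manager owns. The ImGui widgets write straight back into
// the referenced data, giving the two-way binding described above.
#include "imgui.h"
#include <glm/glm.hpp>

struct Model {
  glm::vec3 position{0.0f};
  float uniformScale = 1.0f;
};

namespace UI {
void drawModelProperties(Model& model) {
  ImGui::Begin("Properties");                        // one tool window
  ImGui::DragFloat3("Position", &model.position.x);  // edits manager data in place
  ImGui::DragFloat("Scale", &model.uniformScale, 0.01f, 0.01f, 100.0f);
  ImGui::End();
}
} // namespace UI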
 
Model Representation  With the consumers of the data discussed, I want to spend some time on the structure of my objects. This is where I rely on the Assimp representation and implement a wrapper around the library to consume and produce data in its scene representation format along with any export formats. There are a few concepts that Assimp relies on that I want to outline before talking about the scene representation, as they will make the rest of it make sense: meshes and materials. Meshes are 3D objects represented by vertices, normals, and material data. A normal is the direction a surface points in, represented by a unit vector. Meshes are stored as files containing the vertices, sometimes normals, and references to material data if the mesh has a material. Materials are representations of what the mesh looks like and contain data ranging from colors to textures and depth maps. They are stored as files that reference the local image files for the texture and depth maps, and are then linked in the mesh file that holds the vertex data. The scene representation in Assimp is known as an aiScene object and is an implementation of a scene graph. A scene graph is a tree data structure that can be used to represent a scene, where a scene can be thought of as a collection of objects positioned in 3D space. These objects have properties necessary for rendering, such as position, normals, textures, vertices, and many other data points depending on the features of the program consuming the data. The scene graph is a simple structure that is quick to traverse to find a piece of data, as well as that data's relation to other pieces of data. The tree starts with a root node that represents the entry point of the scene; as objects are added to the scene, they become children of this root node. There is a great visualization of this tree in the LearnOpenGL documentation [13] that can be seen below:
Figure 4: Example of the Assimp Scene Object from LearnOpenGL
You can see from Figure 4 how there is a root node with its children connected beneath it. The diagram also shows what the nodes contain: each node holds its children and the set of meshes it is made from. There is also a smart storage concept for the mesh and material data on the scene object. To avoid storing mesh data multiple times at each node, the scene stores the meshes and materials for the entire scene in one array, and the nodes reference into that array to prevent needless copies. This is a nice optimization to have, especially as the scene grows larger and reuses elements, such as a city scene where the same lamppost or car model appears multiple times.
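This index-into-the-scene-array layout is easiest to see in code. The following is a minimal traversal sketch along the lines of the standard Assimp usage shown in the LearnOpenGL tutorial [13]; processMesh is a placeholder for whatever the wrapper does with each mesh, not Forge's actual function.
// Recursive traversal of Assimp's scene graph. Each node stores indices
// into scene->mMeshes, the single shared array described above.
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

void processMesh(aiMesh* mesh, const aiScene* scene); // placeholder

void processNode(aiNode* node, const aiScene* scene) {
  // Resolve this node's mesh indices against the scene-level array.
  for (unsigned int i = 0; i < node->mNumMeshes; i++) {
    processMesh(scene->mMeshes[node->mMeshes[i]], scene);
  }
  // Then recurse into the children.
  for (unsigned int i = 0; i < node->mNumChildren; i++) {
    processNode(node->mChildren[i], scene);
  }
}

void loadModel(const char* path) {
  Assimp::Importer importer;
  const aiScene* scene = importer.ReadFile(path, aiProcess_Triangulate);
  if (scene && scene->mRootNode) processNode(scene->mRootNode, scene);
}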
 
Serialization  A requirement of the workflow I set for the editor was the ability to save and load scene data. To do this, I needed to serialize the scene data into a format suitable for storing in a file. Serialization is the process of transforming the in-memory scene objects into a representation that can be written to disk and later loaded back into the editor for display. For my implementation, I decided to use YAML as the file format to serialize into. I chose YAML after doing research and finding that it is used for scene serialization in the Unity engine [14]. It is similar to JSON, with which I have a large amount of experience from being a full-stack web developer, so I felt comfortable incorporating it into the editor. As for the process of serializing scene data, I built a Serializer class that takes in the entire scene and iterates through all the objects, writing them to a file through the yaml-cpp library. Upon selecting a scene file to load, the scene is read into a yaml-cpp node; I then iterate through the file and translate its entries into their corresponding scene components. I iterated on this initial method by creating a class of virtual functions to act as an interface that any class needing to serialize data can implement. This cannot be included until I decouple the way that scene objects are represented, something I will touch on in the "System Implementation" and "Future Work" sections.
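To ground this, here is a small sketch of what writing and reading a scene with yaml-cpp can look like. The "models" key and the SceneObject fields are assumptions for illustration, not Forge's real schema.
// Hedged sketch of scene serialization with yaml-cpp. The keys and struct
// below are illustrative placeholders, not the editor's actual format.
#include <yaml-cpp/yaml.h>
#include <fstream>
#include <string>
#include <vector>

struct SceneObject {
  std::string name;
  std::string meshPath;
};

void saveScene(const std::vector<SceneObject>& objects, const std::string& path) {
  YAML::Emitter out;
  out << YAML::BeginMap << YAML::Key << "models" << YAML::Value << YAML::BeginSeq;
  for (const auto& obj : objects) {
    out << YAML::BeginMap;
    out << YAML::Key << "name" << YAML::Value << obj.name;
    out << YAML::Key << "mesh" << YAML::Value << obj.meshPath;
    out << YAML::EndMap;
  }
  out << YAML::EndSeq << YAML::EndMap;
  std::ofstream(path) << out.c_str(); // write the emitted YAML to disk
}

std::vector<SceneObject> loadScene(const std::string& path) {
  std::vector<SceneObject> objects;
  YAML::Node root = YAML::LoadFile(path); // parse the whole file into a node
  for (const auto& node : root["models"]) {
    objects.push_back({node["name"].as<std::string>(),
                       node["mesh"].as<std::string>()});
  }
  return objects;
}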
 
Shaders  For any geometry to be rendered on the screen, it must be passed through a shader. A shader is a small program that runs on the GPU at a given stage of the pipeline; a fragment shader, for example, processes a single pixel on screen. In the traditional forward rendering pipeline there are two shader stages, vertex and fragment, that input passes through before the final image is rendered to the screen. These shaders are written in a shading language, most of the time GLSL or HLSL, depending on the graphics library you use. The programs may also have to be compiled ahead of time before they can be used; this is not the case for OpenGL, which compiles GLSL at runtime, but it is the case for the Vulkan API. Below is an example of the vertex and fragment shaders for rendering the light as a basic cube model.
 
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
  vec3 FragPos = vec3(model * vec4(aPos, 1.0));
  gl_Position = projection * view * vec4(FragPos, 1.0);
}
Figure 5. Vertex Shader from the light renderer in Forge
 
#version 330 core
out vec4 FragColor;
uniform vec3 lightColor;

void main() {
  FragColor = vec4(lightColor, 1.0);
}
Figure 6. Fragment Shader from the light renderer in Forge
 
As shown in Figure 2, I use a few shaders as part of the render pass: the mesh, light, and screen shaders. Data is bound to a shader as uniforms during the render loop. A uniform is simply a shader variable that is not part of the vertex attributes of the buffer being rendered. Figure 7 shows model properties being bound to the mesh shader through uniforms during the render loop.
 
meshShader.setMat4("model", scene.models[i]->modelMatrix);
meshShader.setMat4("projection", scene.camera.projection);
meshShader.setMat4("view", scene.camera.view);
meshShader.set3DFloat("scaleAxes", scene.models[i]->scaleAxes.x,
    scene.models[i]->scaleAxes.y, scene.models[i]->scaleAxes.z);
meshShader.set4DFloat("objectColor", scene.models[i]->color.x,
    scene.models[i]->color.y, scene.models[i]->color.z, scene.models[i]->color.w);
meshShader.set1DFloat("scale", scene.models[i]->uniformScale);
meshShader.setInt("objectId", i);
Figure 7. Example of binding shader data through uniforms.
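Under the hood, helpers like setMat4 are thin wrappers over the OpenGL uniform API. Forge's Shader class is not shown in full in this post, so the following is a plausible sketch of that wrapper, assuming the class stores its GL program handle in a member called id.
// Plausible sketch of the uniform helpers: look up the uniform's location
// in the linked program, then upload the value. The `id` member and the
// exact method set are assumptions, not Forge's verified implementation.
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <string>

class Shader {
 public:
  unsigned int id = 0; // OpenGL program object handle

  // Both helpers assume glUseProgram(id) was called before binding uniforms.
  void setMat4(const std::string& name, const glm::mat4& value) const {
    GLint location = glGetUniformLocation(id, name.c_str());
    glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(value));
  }

  void setInt(const std::string& name, int value) const {
    glUniform1i(glGetUniformLocation(id, name.c_str()), value);
  }
};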
 
Renderer  Nested within the editor architecture is the rasterization pipeline that is responsible for drawing the geometry on screen. Commands are passed to the rasterizer from the main editor loop; from there the rasterizer processes the commands through the graphics pipeline, passes them through the shaders, and the result is shown on screen. I will delve into the specifics of how the data gets from data buffers to the rendered image in the "System Implementation" post, specifically the section dealing with framebuffers. When it comes to rasterization techniques, there are a few options of increasing complexity with various performance drawbacks and benefits depending on the application. I decided to use forward rendering, as it is the most common implementation and the focus of the project was the sum of features rather than the specific rendering technique. In the case of rasterization, the image is generated each frame as data is manipulated by the end user. The core difference between the techniques is the order in which things are rendered; some of them change the order entities are rendered in to save calculations and increase overall performance in most scenarios. My method differs slightly from the traditional forward rendering pipeline in that I render to an image texture instead of directly to the screen. This was needed for the scene window to be a self-contained window within the editor runtime. Without it, the scene would be rendered both to the scene window and to the background behind all the other tool windows, which would waste resources by rendering the scene twice.
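That render-to-texture step can be sketched as follows: draw the scene into an offscreen framebuffer, then hand the resulting color texture to dear-imgui as the contents of the scene window. The helper name and parameters here are illustrative; Forge's actual Framebuffer class is covered in the "System Implementation" post.
// Illustrative render-to-texture pass: the scene is drawn into an offscreen
// framebuffer, and its color attachment is displayed inside an ImGui window.
// The fbo/colorTexture setup is elided; names are hypothetical, not Forge's API.
#include <glad/glad.h>
#include "imgui.h"
#include <cstdint>

void renderSceneToSceneWindow(unsigned int fbo, unsigned int colorTexture,
                              int width, int height) {
  // 1. Redirect draw calls to the offscreen framebuffer.
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);
  glViewport(0, 0, width, height);
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  // ... issue the normal forward-rendering draw calls here ...
  glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default target

  // 2. Show the result as an image in the dear-imgui scene window.
  ImGui::Begin("Scene");
  // The UV coordinates flip the texture vertically, since OpenGL's
  // texture origin is the bottom-left corner.
  ImGui::Image((ImTextureID)(intptr_t)colorTexture,
               ImVec2((float)width, (float)height),
               ImVec2(0, 1), ImVec2(1, 0));
  ImGui::End();
}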
 
Sources
[1] Microsoft. "What Is Windows Subsystem for Linux." Microsoft Docs, docs.microsoft.com/en-us/windows/wsl/about.
[2] Karlsson, B. "RenderDoc." Retrieved March 13, 2022, from https://renderdoc.org.
[3] Assimp. "The Official Open-Asset-Importer-Library Repository. Loads 40+ 3D-File-Formats into One Unified and Clean Data Structure." GitHub, github.com/assimp/assimp.
[4] G-Truc. "OpenGL Mathematics (GLM)." GitHub, github.com/g-truc/glm.
[5] Ocornut. "Dear ImGui: Bloat-Free Graphical User Interface for C++ with Minimal Dependencies." GitHub, retrieved March 13, 2022, from https://github.com/ocornut/imgui.
[6] Jbeder. "Yaml-cpp: A YAML Parser and Emitter in C++." GitHub, github.com/jbeder/yaml-cpp.
[7] "The Official YAML Web Site." yaml.org.
[8] Nothings. "Stb: Single-File Public Domain Libraries for C/C++." GitHub, github.com/nothings/stb.
[9] Khronos Group. "The Industry's Foundation for High Performance Graphics." OpenGL.org, www.opengl.org.
[10] "OpenGL Loading Library." OpenGL Wiki, www.khronos.org/opengl/wiki/OpenGL_Loading_Library.
[11] "An OpenGL Library." GLFW, www.glfw.org.
[12] "Facade." Refactoring.Guru, retrieved March 13, 2022, from https://refactoring.guru/design-patterns/facade.
[13] "Assimp." LearnOpenGL, retrieved March 13, 2022, from https://learnopengl.com/?p=Model-Loading%2FAssimp#.
[14] Unity Technologies. "An Example of a YAML Scene File." Unity Manual, docs.unity3d.com/Manual/YAMLSceneExample.html.
[15] Vries, Joey de. "About." LearnOpenGL, learnopengl.com/About.