Migrating from Fixed-Function OpenGL 1.1 to OpenGL 3.3 Core

OpenGL 1.1 is a great API for teaching basic graphics, such as rendering meshes, coloring and shading, and texturing. With display lists it can even scale to relatively large meshes. However, the API is far removed from the GPU hardware architecture, and the industry has been moving to lower-level, programmable APIs. This tutorial aims to fill the gap between the fixed-function world and the world of buffers and shaders. As a target, I chose the OpenGL 3.3 core profile instead of Direct3D 12, Metal, or Vulkan to avoid making the step too large. Choosing the core profile (without the legacy cruft) not only helps avoid land mines, but also allows the use of a graphics debugger such as RenderDoc, which can inspect the OpenGL state, report errors, show textures and the depth buffer, visualize meshes, and more.

The example renders a tetrahedron with three single-color faces and one face whose colors are interpolated from its corners. A perspective camera orbits the tetrahedron. The file src/main_gl11.cpp uses OpenGL 1.1 and src/main_gl33.cpp uses the OpenGL 3.3 core profile. Both files have associated projects in the Visual Studio solution file. The debug build has all paths set and should build without issues. The shaders for the second part of the tutorial are located in the res folder.

The code for the tutorial can be downloaded here.

OpenGL 1.1

In the code, GLFW creates the window with an OpenGL 1.1 context; note that the version is specified before the call to glfwCreateWindow. Then the clear color is set, and culling and the depth test are enabled.
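
A minimal sketch of that setup could look like the following (the window size, title, and clear color are illustrative, not necessarily those used in src/main_gl11.cpp):

    #include <GLFW/glfw3.h>

    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 1);  // request OpenGL 1.1...
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);  // ...before creating the window
    GLFWwindow* window = glfwCreateWindow(1280, 720, "OpenGL 1.1", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);                            // flip on VSync

    glClearColor(0.2f, 0.2f, 0.2f, 1.0f);           // color used by glClear
    glEnable(GL_CULL_FACE);                         // discard back-facing triangles
    glEnable(GL_DEPTH_TEST);                        // hidden-surface removal via depth buffer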

I used a display list to better mirror the more modern OpenGL approach we discuss later. Moreover, display lists are commonly used as an optimization: they avoid the CPU overhead of a large number of function calls and keep the geometry in fast GPU memory.
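
Recording the tetrahedron into a display list follows this pattern (the coordinates below are placeholders; the real vertex data is in src/main_gl11.cpp):

    // Compile the geometry once; it can then be replayed every frame.
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
        // one single-color face (illustrative coordinates)
        glColor3f(1.0f, 0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f);
        glVertex3f( 1.0f, -1.0f, -1.0f);
        glVertex3f( 0.0f,  1.0f, -1.0f);
        // ...the remaining three faces follow the same pattern
    glEnd();
    glEndList();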

The render loop is barebones: it clears color and depth, sets up the transformation matrices, executes the display list, and flips the frame on VSync. Beware that the transformations are applied in bottom-to-top order, which can be confusing. The library we use in the next part for handling transformations uses operator overloading and thus reads more like math.
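
In sketch form, assuming the window and display list from above (the frustum and orbit values are illustrative):

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-0.1, 0.1, -0.075, 0.075, 0.1, 100.0);   // perspective projection

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        // Reads bottom to top: vertices are rotated around the Y axis first,
        // then pushed away from the viewer, so the camera appears to orbit.
        glTranslatef(0.0f, 0.0f, -5.0f);
        glRotatef((float)glfwGetTime() * 45.0f, 0.0f, 1.0f, 0.0f);

        glCallList(list);

        glfwSwapBuffers(window);   // waits for VSync with swap interval 1
        glfwPollEvents();
    }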

OpenGL 3.3

Again, the window is created by GLFW, but this time the requested version is 3.3 with the core profile, marked as forward compatible. Since OpenGL on Windows ships only as version 1.1, the newer functions must be loaded at runtime from the vendor's dynamic-link library. I used glad as the loader, but there are many alternatives available. Note that the gladLoaderLoadGL function must be called after we have an OpenGL context, in our case after the call to glfwMakeContextCurrent.
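
The setup could look roughly like this (window size and title are again illustrative):

    #include <glad/gl.h>
    #include <GLFW/glfw3.h>

    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);
    GLFWwindow* window = glfwCreateWindow(1280, 720, "OpenGL 3.3", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);

    // glad can only resolve the function pointers once a context is current.
    if (!gladLoaderLoadGL()) { /* report the error and exit */ }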

The initial state setup is slightly different; in particular, no clear color is set because we will pass the color directly to the clear buffer function. This explicit-state approach will be used throughout the tutorial. Culling and the depth test are enabled as before, but we also enable multisampling (in the OpenGL 1.1 part we did not use it because the symbol is not exported on Windows) and sRGB rendering.
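
The corresponding state setup boils down to a few enables:

    glEnable(GL_CULL_FACE);          // same as in the 1.1 version
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_MULTISAMPLE);        // use the samples of the multisampled framebuffer
    glEnable(GL_FRAMEBUFFER_SRGB);   // convert shader output from linear to sRGB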

The sRGB color space is what screens commonly use, i.e., a color picked in GIMP is in sRGB space, but shading calculations assume linear space. The solution is to convert colors to linear space before using them (either manually or by using an sRGB texture) and to enable the sRGB framebuffer in OpenGL so that the output color from the shaders is converted back from linear space to sRGB. Using linear colors in shaders is important when a shading model like Phong is applied; otherwise the image will come out darker. For example, doing distance-based attenuation in sRGB space results in roughly 0.7 times the color of the linear-space calculation.
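
For the manual direction, the standard sRGB-to-linear conversion of a channel value in [0, 1] looks like this (a common rough approximation is simply pow(c, 2.2)):

    #include <cmath>

    // Exact sRGB-to-linear conversion for one color channel.
    float srgbToLinear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }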

Since the framebuffer provided by GLFW can be limited, e.g., lacking multisampling or sRGB support, we create an offscreen framebuffer and render into it. At the end of each frame, this framebuffer is copied to the GLFW window. This solution provides great flexibility and control over the rendering formats, i.e., we can do HDR rendering and then tone map if necessary, or supersample the image for higher-quality output.
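
A sketch of such a framebuffer, backed by multisampled renderbuffers (the resolution and sample count are illustrative; the copy to the window is shown in the render loop at the end):

    GLuint fbo, colorRbo, depthRbo;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &colorRbo);
    glGenRenderbuffers(1, &depthRbo);

    glBindRenderbuffer(GL_RENDERBUFFER, colorRbo);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_SRGB8_ALPHA8, 1280, 720);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, 1280, 720);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);
    // glCheckFramebufferStatus(GL_FRAMEBUFFER) should now report GL_FRAMEBUFFER_COMPLETE.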

Modern GPUs use general-purpose cores to run both the vertex transformations and the fragment shading. These cores are programmed via shading languages such as HLSL or, in OpenGL's case, GLSL. GLSL is a C-style programming language whose execution starts at the main function. The vertex shader is executed for each vertex and usually performs transformations, such as from model space to projection space. The fragment shader is then used to shade the fragments (roughly, candidate pixels) that come out of the rasterizer.
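
To give a flavor of it, a minimal pair of GLSL 3.30 shaders and the calls that compile and link them could look like this (the tutorial's actual shaders live in the res folder; the attribute and uniform names here are my own):

    const char* vsSource = R"glsl(#version 330 core
        layout(location = 0) in vec3 inPosition;
        layout(location = 1) in vec3 inColor;
        uniform mat4 uMvp;
        out vec3 vColor;
        void main() {
            vColor = inColor;
            gl_Position = uMvp * vec4(inPosition, 1.0);
        }
    )glsl";

    const char* fsSource = R"glsl(#version 330 core
        in vec3 vColor;
        out vec4 outColor;
        void main() {
            outColor = vec4(vColor, 1.0);
        }
    )glsl";

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSource, nullptr);
    glCompileShader(vs);                 // check GL_COMPILE_STATUS in real code
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSource, nullptr);
    glCompileShader(fs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);              // check GL_LINK_STATUS in real code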

The geometry, such as vertex positions and colors, is copied from an array to the GPU's memory (VRAM) in the form of a vertex buffer object (VBO). These buffers are very similar to the idea of a display list, as they cache the vertices on the GPU and speed up rendering considerably.
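
The upload itself is a bind plus a copy; here with interleaved position and color, showing only one face of the tetrahedron as an illustration:

    const float vertices[] = {
        // x,     y,     z,     r,    g,    b
        -1.0f, -1.0f, -1.0f,  1.0f, 0.0f, 0.0f,
         1.0f, -1.0f, -1.0f,  1.0f, 0.0f, 0.0f,
         0.0f,  1.0f, -1.0f,  1.0f, 0.0f, 0.0f,
        // ...the remaining faces follow
    };

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);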

Vertex array objects (VAOs) do nothing useful for us here, but since OpenGL core profile mandates at least one, we will create it and ignore it as we proceed.
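
Creating it, and describing the vertex layout for the shader inputs (locations 0 and 1 from the vertex shader sketch above), takes only a handful of calls; in the core profile the VAO must be bound before the attribute pointers are specified and before drawing:

    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);   // bound once and left alone afterwards

    // With the VBO from above still bound, describe the interleaved layout.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);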

The render loop binds the framebuffer we want to render into, clears it to the specified color and depth, updates the transformation matrices (using the GLM library) and uploads them to the GPU, and renders the geometry. At the end, the framebuffer is copied (blitted) to the GLFW window's backbuffer, which is then flipped to the screen on VSync.
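
Putting the pieces together, the loop could look like this sketch (it reuses the names from the sketches above, which are mine rather than those in src/main_gl33.cpp, and assumes the GLM headers <glm/glm.hpp>, <glm/gtc/matrix_transform.hpp>, and <glm/gtc/type_ptr.hpp> plus <cmath> are included):

    while (!glfwWindowShouldClose(window)) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        const float clearColor[] = { 0.2f, 0.2f, 0.2f, 1.0f };
        const float clearDepth   = 1.0f;
        glClearBufferfv(GL_COLOR, 0, clearColor);   // explicit clear values,
        glClearBufferfv(GL_DEPTH, 0, &clearDepth);  // no global clear-color state

        // Build the view-projection matrix; with GLM it reads like math.
        float t = (float)glfwGetTime();
        glm::mat4 projection = glm::perspective(glm::radians(60.0f), 1280.0f / 720.0f, 0.1f, 100.0f);
        glm::mat4 view = glm::lookAt(glm::vec3(std::cos(t) * 5.0f, 2.0f, std::sin(t) * 5.0f),
                                     glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
        glm::mat4 mvp = projection * view;

        glUseProgram(program);
        glUniformMatrix4fv(glGetUniformLocation(program, "uMvp"), 1, GL_FALSE, glm::value_ptr(mvp));
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, 12);   // 4 faces * 3 vertices

        // Resolve and copy the offscreen framebuffer into the window's backbuffer.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, 1280, 720, 0, 0, 1280, 720, GL_COLOR_BUFFER_BIT, GL_NEAREST);

        glfwSwapBuffers(window);
        glfwPollEvents();
    }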