At this point, we have a draw loop and a way to draw into an image using a compute shader, but we can't render geometry yet. To render geometry, we need to set up a graphics pipeline that will contain all of the configuration options needed to draw geometry using the specialized hardware inside a GPU. To render things, we have to understand how the Vulkan rendering pipeline works.
GPUs have a lot of functionality dedicated to rendering geometry: a lot of extra hardware apart from the generic compute units. This extra hardware lets them draw triangles and other primitives at great speed and efficiency.
Think of a GPU as an assembly line. It has a lot of different parts doing different things, and the output is pixels rendered onto an image. This “assembly line” is what we call the graphics pipeline.
In the graphics pipeline, data and programs go in, and pixels come out. The job of a graphics programmer is to customize this pipeline to get the desired result.
The full-scale Vulkan graphics pipeline is very complex, so we are going to look at a simplified version of it. As we write the configuration for building our rendering pipeline, we will go into more detail about each stage.
Data -> Vertex Shader -> Rasterization -> Fragment Shader -> Render Output.
The two shader stages run custom programs that can do anything we want. Rasterization and Render Output are fixed stages: we can only tweak their parameters and configuration.
To begin, we need Data. The Data can be anything we want: our textures, 3D models, material parameters, anything. Everything revolves around data being transformed by the stages until it becomes pixels on the screen.
The vertex shader is a shader that runs once for every vertex we draw. Its job is to output vertex positions and vertex parameters in a format the Rasterization stage understands.
The rasterization stage grabs the vertices generated by the vertex shader and finds which pixels are covered by each triangle. Those pixels are sent to the fragment shader. This stage has a lot of configuration options we will need to fill in while building our graphics pipeline.
The fragment shader computes the final color of each of those pixels, running once per pixel (fragment) that the rasterization stage sends to it.
With the final colored pixels, the Render Output stage writes them to the output framebuffer. There are multiple fixed-function hardware configuration steps here that let us do things like blending pixels or depth-testing.
In the next article, we will draw a triangle using all of this.