Gfx-hal Tutorial part 0: Drawing a triangle

2018-08-16

OpenGL isn’t perfect. I used it for a long time, and it was damn good while it lasted, but it has a lot of drawbacks. It’s stateful, with a large number of invalid states (hello, blank screen!) - it’s bogged down with legacy, fixed-function API design, preventing you from making full use of your modern graphics card - and since it doesn’t capture your intent with its usage, it’s also harder for your graphics driver to optimize it.

(It doesn’t help that Apple have deprecated its use on macOS and iOS in favour of their Metal API.)

So what’s a Rustacean to do then? Well it just so happens that gfx-rs have a shiny new low-level graphics API, gfx-hal, that’s close to stable and ready to use.

What’s good about it? Well:

  1. It’s low-level and versatile (although that does make it more verbose and harder to learn).
  2. Its API is very close to Vulkan, so skills and documentation are roughly applicable to both.
  3. It abstracts over multiple backends, including Metal, DX12, and Vulkan, making it cross-platform.
  4. It’s very explicit, arguably making it easier to understand and debug.
  5. It’s written in Rust! Perfect for projects that are, also, written in Rust.

Now any of those could be considered a disadvantage depending on your requirements, but if it all sounds good to you, then read on!


The code

Honestly, I’ll try to explain as much as I can here, but the full example code is going to be more useful. You can find it here:

This tutorial does assume that you’re familiar with Rust code in general, and it probably helps if you have at least some experience with graphics programming.

It’s also worth noting that I’m still learning, and it’s entirely possible I’ll get some things wrong. Feel free to let me know (via Twitter, or GitHub issue) if I make any mistakes.

That said, let’s get started!


First of all, you’ll need to set up the dependencies in your Cargo.toml file.

    [dependencies]
    winit = "=0.16.2"

    [dependencies.gfx-hal]
    git = ""
    rev = "d428a5d5"

    [dependencies.gfx-backend-metal]
    git = ""
    rev = "d428a5d5"

    [build-dependencies]
    glsl-to-spirv = "=0.1.6"

You’ll notice that gfx-hal isn’t currently published to crates.io, so we’re picking a specific commit to lock to. This is just in case gfx introduce some breaking changes before release - I don’t want my tutorials to stop compiling. (I’ll try to keep the version up to date in the tutorials repo.)

EDIT 2018-08-20: As was inevitable, there have indeed been some small breaking changes. See the full code for a more up-to-date example.

I’ve also chosen the Metal backend because I’m working on macOS - but you should be able to swap this out with another backend trivially.

As for that build dependency - the shaders in that repo are compiled using the method I wrote about here. I’d encourage you to copy that, but otherwise, you’ll have to compile the GLSL shaders in that repo to SPIR-V yourself.

Now that that’s all ready, we can start working towards our modest goal.

How do we draw a triangle?

I’m not gonna lie, it takes quite a lot. As mentioned previously, gfx-hal is pretty verbose. Once you’ve got a triangle though, building on top of that is relatively easy. And if you’re like me, you’ll learn to love the fine details you’re forced to think about. It’s very educational.

To begin with, we have to set up a bunch of render state. We’ll need:

  1. A window.
  2. An instance, device, adapter, and assorted extras. (I’ll explain each of these as we get to them.)
  3. A render pass, which defines how different images are used.
  4. A pipeline definition, including our shaders. This defines how we should render things.
  5. A swapchain, which is a chain of images for rendering to, then displaying on screen.
  6. An image view and a framebuffer for each image in the swapchain. These allow us to bind specific swapchain images to our render pass.

We do all that once, and then on each frame we can render our triangle fairly simply:

  1. First, we create a command buffer representing what we want to render.
  2. We submit the command buffer to a command queue, which renders it to a swapchain image.
  3. Then we “present” the swapchain image, freeing up the old one for rendering.

If none of that makes sense right now, don’t worry. The code will hopefully make it a little clearer, but it’ll also take time for it to sink in. I didn’t understand this when I wrote it either.


Before we look at the code, it’s worth noting that the full code for this tutorial is commented, and might shed some extra light on each concept.

Not to mention there’s a little too much to go through it all here, so I’ll be skipping over pieces that don’t warrant much explanation. I’ll simplify code so that it doesn’t take up too much space here, so be sure to look at the full version if you want to copy-paste anything.

So, first let’s initialize a window:

    let mut events_loop = EventsLoop::new();
    let window = WindowBuilder::new().build(&events_loop).unwrap();

    let instance = backend::Instance::create("Part 00: Triangle", 1);
    let mut surface = instance.create_surface(&window);
    let mut adapter = instance.enumerate_adapters().remove(0);

    let (device, mut queue_group) = adapter
        .open_with::<_, Graphics>(1, |family| surface.supports_queue_family(family))
        .unwrap();

    let max_buffers = 16;
    let mut command_pool = device.create_command_pool_typed(
        &queue_group,
        CommandPoolCreateFlags::empty(),
        max_buffers,
    );

These are mostly straightforward creation functions. Let’s go through each item:

The window and events_loop are both part of the winit crate - they’re nothing to do with gfx specifically, but we need them to have somewhere to render to. Other windowing crates are supported I believe, but I went with winit since I’m familiar with it.

The instance is used to initialize the API and give us access to everything else we need, including the surface which is a representation of the window we’re going to draw into.

An adapter represents a physical device. For example, one of the graphics cards in your machine. In the code above, we just use whichever one is first in the list.

Next we acquire a device and a queue group. The device here is a logical device rather than a physical one. It’s an abstraction responsible for allocating and freeing resources, which we’ll see later.

The queue_group is a collection of command queues, which are queues that you submit command buffers to in order to render. Again, we’ll go into more detail later. The kind of cryptic open_with function there is saying: “give me a queue group that supports the Graphics capability, contains at least 1 queue, and is supported by my surface”.

The command_pool is where we get command buffers from in the first place, which we can then submit to a queue.

So that’s all boilerplate. Next we need to tell gfx how we actually want to render things.

Defining a rendering pipeline

A pipeline state object contains almost all of the state you need in order to draw something. This includes shaders, primitive type, blending type, etc.

It also contains a render_pass, so let’s make that first:

    let render_pass = {
        let color_attachment = Attachment {
            format: Some(surface_color_format),
            samples: 1,
            ops: AttachmentOps::new(AttachmentLoadOp::Clear, AttachmentStoreOp::Store),
            stencil_ops: AttachmentOps::DONT_CARE,
            layouts: Layout::Undefined..Layout::Present,
        };

        let subpass = SubpassDesc {
            colors: &[(0, Layout::ColorAttachmentOptimal)],
            depth_stencil: None,
            inputs: &[],
            resolves: &[],
            preserves: &[],
        };

        let dependency = SubpassDependency {
            passes: SubpassRef::External..SubpassRef::Pass(0),
            stages: PipelineStage::COLOR_ATTACHMENT_OUTPUT..PipelineStage::COLOR_ATTACHMENT_OUTPUT,
            accesses: Access::empty()
                ..(Access::COLOR_ATTACHMENT_READ | Access::COLOR_ATTACHMENT_WRITE),
        };

        device.create_render_pass(&[color_attachment], &[subpass], &[dependency])
    };

A render pass defines how many images (“attachments”) we will need for rendering, and what they’ll be used for. In this case, we only care about one image, which is the one we’re rendering to. Each render pass has at least one subpass - you can see above we’re using a color attachment, but no depth attachment. We’ll come back to this in a future tutorial.

Next we need to create some shader modules to pass to our pipeline:

    // Shader paths here are illustrative - substitute your own compiled SPIR-V files.
    let vertex_shader_module = device
        .create_shader_module(include_bytes!("shaders/triangle.vert.spv"))
        .unwrap();

    let fragment_shader_module = device
        .create_shader_module(include_bytes!("shaders/triangle.frag.spv"))
        .unwrap();

You can see the original vertex and fragment shader code in the source repo. Note that we’re setting the vertex positions inside the vertex shader. This is a neat trick that allows us to avoid making a vertex buffer just yet.1
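If you haven’t seen the trick before, here’s roughly what such a vertex shader looks like - a sketch of the idea, not necessarily the exact shader from the repo. It uses the gl_VertexIndex built-in to select a hard-coded position for each vertex:

```glsl
#version 450

// Hard-coded positions for the three vertices of our triangle.
vec2 positions[3] = vec2[](
    vec2(0.0, -0.5),
    vec2(0.5, 0.5),
    vec2(-0.5, 0.5)
);

void main() {
    // gl_VertexIndex tells us which vertex (0, 1, or 2) we're processing.
    gl_Position = vec4(positions[gl_VertexIndex], 0.0, 1.0);
}
```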

Finally, we can make the pipeline itself:

    let pipeline_layout = device.create_pipeline_layout(&[], &[]);

    let pipeline = {
        let vs_entry = EntryPoint::<backend::Backend> {
            entry: "main",
            module: &vertex_shader_module,
            specialization: &[],
        };

        let fs_entry = EntryPoint::<backend::Backend> {
            entry: "main",
            module: &fragment_shader_module,
            specialization: &[],
        };

        let shader_entries = GraphicsShaderSet {
            vertex: vs_entry,
            hull: None,
            domain: None,
            geometry: None,
            fragment: Some(fs_entry),
        };

        let subpass = Subpass { index: 0, main_pass: &render_pass };

        let mut pipeline_desc = GraphicsPipelineDesc::new(
            shader_entries,
            Primitive::TriangleList,
            Rasterizer::FILL,
            &pipeline_layout,
            subpass,
        );

        pipeline_desc.blender.targets.push(ColorBlendDesc(ColorMask::ALL, BlendState::ALPHA));

        device.create_graphics_pipeline(&pipeline_desc, None).unwrap()
    };

The important part here is the pipeline_desc struct. As you can see, it contains our shaders, the primitive type, the rasterization type, a pipeline layout (which we can ignore for a while), and a render pass. We also set the blend mode on it after construction, before creating the pipeline.

We’ll end up adding a lot more to here in future tutorials.

Now we’ve defined our rendering, the last thing we need is somewhere to render to.

Swapchains and framebuffers

Typically, we want to render to one image while displaying another on screen. When we’re done rendering, we swap them over and start again. (Things can get more complicated, but we’ll stick with that for now.)
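To make that ping-ponging concrete, here’s a tiny standard-library-only sketch of the idea (no gfx types involved - the names are made up for illustration):

```rust
// Toy model of double buffering: with two images, we always render to the
// one that isn't currently being displayed, then swap their roles.
fn simulate_frames(image_count: usize, frames: usize) -> Vec<usize> {
    let mut displayed = 0;
    let mut render_targets = Vec::new();
    for _ in 0..frames {
        // Acquire the off-screen image and "render" to it...
        let target = (displayed + 1) % image_count;
        render_targets.push(target);
        // ...then "present" it, making it the displayed image.
        displayed = target;
    }
    render_targets
}

fn main() {
    // With two images, rendering alternates between them every frame.
    println!("{:?}", simulate_frames(2, 4));
}
```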

These two images form a swapchain. So let’s make one of those:

    let (mut swapchain, backbuffer) = {
        let extent = {
            let (width, height) = window_size;
            Extent2D { width, height }
        };

        let swap_config = SwapchainConfig::new()
            .with_color(surface_color_format)
            .with_image_usage(Usage::COLOR_ATTACHMENT);

        device.create_swapchain(&mut surface, swap_config, None, &extent)
    };

We tell it what image format to use, that we’re going to use those images as color images (though I’m not sure when you wouldn’t), and of course we give it the extents of our window. This returns the swapchain, and also the backbuffer which is the actual list of images used by the swapchain.

Now you might think we can just stop at images, but in order to access the contents of the image, we also need an image_view for each. You can mostly ignore this detail - an image view can refer to a smaller slice of a full image, but here we’re going to use a view of the entire image anyway.

We also need to create framebuffer objects. Remember we defined a render pass which described how many images we would use to render, and what purpose each would serve? Well a framebuffer binds a specific image view to a specific attachment of your render pass:

    let (frame_views, framebuffers) = match backbuffer {
        Backbuffer::Images(images) => {
            let (width, height) = window_size;
            let extent = Extent { width, height, depth: 1 };

            let color_range =
                SubresourceRange { aspects: Aspects::COLOR, levels: 0..1, layers: 0..1 };

            let image_views = images.iter()
                .map(|image| {
                    device.create_image_view(
                        image,
                        ViewKind::D2,
                        surface_color_format,
                        Swizzle::NO,
                        color_range.clone(),
                    ).unwrap()
                })
                .collect::<Vec<_>>();

            let fbos = image_views.iter()
                .map(|image_view| {
                    device.create_framebuffer(&render_pass, vec![image_view], extent).unwrap()
                })
                .collect();

            (image_views, fbos)
        }
        // Some backends hand us ready-made framebuffers instead of images.
        Backbuffer::Framebuffer(fbo) => (vec![], vec![fbo]),
    };

Here all we’re doing is looping through the images in our backbuffer to create image views, then looping through the image views to create framebuffers.

Note that when we create the framebuffer, we specify a render pass, and a vec of image views to bind to it.

The very last thing we need right now is a couple of synchronization primitives. I won’t go into too much detail here, but basically they allow us to ensure we’re always rendering to a different image than the one currently on screen:

    let frame_semaphore = device.create_semaphore();
    let frame_fence = device.create_fence(false);
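If fences feel abstract, here’s a toy CPU-side analogy using only the standard library (nothing to do with gfx - the Fence type here is made up for illustration): one thread signals when its work is done, and another blocks until that signal arrives.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// A toy "fence": one side signals completion, the other blocks until signaled.
struct Fence {
    signaled: Mutex<bool>,
    condvar: Condvar,
}

impl Fence {
    fn new() -> Self {
        Fence { signaled: Mutex::new(false), condvar: Condvar::new() }
    }

    fn signal(&self) {
        *self.signaled.lock().unwrap() = true;
        self.condvar.notify_all();
    }

    fn wait(&self) {
        let mut signaled = self.signaled.lock().unwrap();
        while !*signaled {
            signaled = self.condvar.wait(signaled).unwrap();
        }
    }
}

fn main() {
    let fence = Arc::new(Fence::new());
    let worker_fence = Arc::clone(&fence);

    // The "GPU": finishes some work, then signals the fence.
    thread::spawn(move || {
        worker_fence.signal();
    });

    // The "CPU": blocks until the work is done.
    fence.wait();
    println!("fence signaled");
}
```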

And that’s all of the setup we need! Everything’s in place now, and we can begin our rendering loop. Next we’ll look at how to actually render the triangle.

Rendering a frame

The good news is that this is the simplest part. We already described almost all of our rendering process up-front, so all that’s left is to build a command buffer and submit it for rendering.

So here’s how we build our command buffer:

        let frame_index = swapchain.acquire_image(FrameSync::Semaphore(&frame_semaphore)).unwrap();

        let finished_command_buffer = {
            let viewport = Viewport {
                rect: Rect { x: 0, y: 0, w: window_width, h: window_height },
                depth: 0.0..1.0,
            };

            let mut command_buffer = command_pool.acquire_command_buffer(false);
            command_buffer.set_viewports(0, &[viewport.clone()]);
            command_buffer.set_scissors(0, &[viewport.rect]);
            command_buffer.bind_graphics_pipeline(&pipeline);

            {
                let mut encoder = command_buffer.begin_render_pass_inline(
                    &render_pass,
                    &framebuffers[frame_index as usize],
                    viewport.rect,
                    &[ClearValue::Color(ClearColor::Float([0.0, 0.0, 0.0, 1.0]))],
                );

                encoder.draw(0..3, 0..1);
            }

            command_buffer.finish()
        };

First we choose which image in the swapchain to render to. We also tell it to signal frame_semaphore when the image is ready.

After that, we’re acquiring a new buffer from the command pool. We also set the viewport and scissor rect to be the size of the entire screen. (We could have chosen to render to a smaller sub-region of it.) Then we choose which pipeline to use - the only one we have, as it happens.

Next, we begin our render pass. We can now start recording render commands into the command buffer. We pass in our current framebuffer, a rect to draw into, and the instruction to clear our frame to black.

Now for the triangle itself. That draw command says “draw the first 3 vertices of the first 1 instances”. (Ignore that last part - we’re not using instanced rendering for this tutorial.) The vertex data itself, as mentioned, comes from our vertex shader this time, so this is all we need.
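To unpack what those two ranges mean, here’s an illustrative model (plain Rust, not gfx-hal) of how a draw call expands into vertex shader invocations:

```rust
use std::ops::Range;

// Illustrative model: draw(vertices, instances) invokes the vertex shader once
// per (instance, vertex) pair, returning the (instance_index, vertex_index) pairs.
fn draw(vertices: Range<u32>, instances: Range<u32>) -> Vec<(u32, u32)> {
    let mut invocations = Vec::new();
    for instance in instances {
        for vertex in vertices.clone() {
            invocations.push((instance, vertex));
        }
    }
    invocations
}

fn main() {
    // draw(0..3, 0..1): one instance, three vertices - our single triangle.
    println!("{:?}", draw(0..3, 0..1));
}
```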

Finally, we finish recording our command buffer and we’re ready to submit it.

To do that, we first wait on frame_semaphore so that our target image is ready, then we build a submission from our command buffer:

        let submission = Submission::new()
            .wait_on(&[(&frame_semaphore, PipelineStage::BOTTOM_OF_PIPE)])
            .submit(vec![finished_command_buffer]);

Then we can submit it to our command queue, ask it to signal frame_fence once the rendering is done, and wait (the !0 passed to wait_for_fence is a timeout in nanoseconds - effectively “wait forever”):

        queue_group.queues[0].submit(submission, Some(&frame_fence));

        device.wait_for_fence(&frame_fence, !0);

Then finally… finally… after all that… we can present our complete, rendered image on screen:

        swapchain.present(&mut queue_group.queues[0], frame_index, &[]).unwrap();

Ready for it?

A single unimpressive triangle.

Yep, that’s it.


So wow, that was a ton of work right!?

But don’t despair - this will probably be the longest, most complicated tutorial in the whole series. It turns out the amount of effort it takes to get something on screen is a lot more than the amount of effort it takes to add more interesting stuff in.

Look back over the code, copy-paste it as much as you need, mull it over a bit - and in the next part, we’ll build on this. I found after I wrote this, I didn’t understand a whole lot - but seeing how the code changes when you add new features makes it a lot clearer. Even if you struggled with this first tutorial, press on anyway, and hopefully you’ll find it easier later.

In the meantime, here are some other resources I found useful. They’re mostly Vulkan-specific (and not in Rust), but the concepts are the same so you might still get some value out of them:

  1. The gfx-hal quad example: It’s more complex than what we’ve done here, but it was the best example I had for writing this.
  2. The Vulkan Tutorial: Exactly what it sounds like.
  3. Sascha Willems’ Vulkan examples: There are small examples of all kinds of different features. It helped me a lot, especially for the later parts of this tutorial.
  4. Khronos’ “Getting Started with Vulkan” video: This doesn’t teach Vulkan directly, but gives some context and information about it that is also applicable to gfx-hal.

If you made it through all this, well done, and I hope it was helpful to you. I have no specific schedule at the moment, but you can look forward to Part 1 fairly soon.

  1. I first saw this in an extremely useful tutorial that is mostly applicable to gfx-hal as well.