Welcome to the wgpu-py docs!

The wgpu library is a Python implementation of WebGPU.

Installation

Note

Since the API changes with each release, you may want to check the CHANGELOG.md when you upgrade to a newer version of wgpu.

Install with pip

You can install wgpu-py via pip. Python 3.7 or higher is required; PyPy is supported. The only dependency is cffi (installed automatically by pip).

pip install wgpu

Since most users will want to render something to screen, we recommend installing glfw as well:

pip install wgpu glfw

GUI libraries

Multiple GUI backends are supported, see the GUI API for details:

  • glfw: a lightweight GUI for the desktop
  • jupyter_rfb: only needed if you plan on using wgpu in Jupyter
  • qt (PySide6, PyQt6, PySide2, PyQt5)
  • wx

The wgpu-native library

The wheels that pip installs include the prebuilt binaries of wgpu-native, so on most systems everything Just Works.

On Linux you need at least pip >= 20.3, and a recent Linux distribution, otherwise the binaries will not be available. See below for details.

If you need or want to, you can also build wgpu-native yourself. You will then need to set the environment variable WGPU_LIB_PATH to let wgpu-py know where the DLL is located.
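
As a minimal sketch (assuming you built the library to some local path; the path below is hypothetical), set the variable before importing the backend:

import os

# Hypothetical path to your own wgpu-native build
os.environ["WGPU_LIB_PATH"] = "/path/to/libwgpu_native.so"

import wgpu.backends.rs  # import the backend after setting the variable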

Platform requirements

Under the hood, wgpu runs on Vulkan, Metal, or DX12. The wgpu-backend is selected automatically, but can be overridden by setting the WGPU_BACKEND_TYPE environment variable to “Vulkan”, “Metal”, “D3D12”, “D3D11”, or “OpenGL”.
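
For example, a minimal sketch that forces a specific backend:

import os

os.environ["WGPU_BACKEND_TYPE"] = "Vulkan"  # set before requesting an adapter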

Windows

On Windows 10+, things should just work. On older Windows versions you may need to install the Vulkan drivers.

MacOS

On MacOS you need at least 10.13 (High Sierra) to have Metal/Vulkan support.

Linux

On Linux, it’s advisable to install the proprietary drivers of your GPU (if you have a dedicated GPU). You may need to apt install mesa-vulkan-drivers. Wayland support is currently broken (we could use a hand to fix this).

Binary wheels for Linux are only available for manylinux_2_24. This means that the installation requires pip >= 20.3, and you need a recent Linux distribution, listed here.

If you wish to work with an older distribution, you will have to build wgpu-native yourself, see “dependencies” above. Note that wgpu-native still needs Vulkan support and may not compile / work on older distributions.

Installing LavaPipe on Linux

To run wgpu on systems that do not have a GPU (e.g. CI) you need a software renderer. On Windows this (probably) just works via DX12. On Linux you can use LavaPipe:

sudo apt update -y -qq
sudo apt install --no-install-recommends -y libegl1-mesa libgl1-mesa-dri libxcb-xfixes0-dev mesa-vulkan-drivers

The distribution’s version of Lavapipe may be a bit outdated. To get a more recent version, you can use this PPA:

sudo add-apt-repository ppa:oibaf/graphics-drivers -y

Note

The precise visual output may differ between different implementations of Vulkan/Metal/DX12. Therefore you should probably avoid per-pixel comparisons when multiple different systems are involved. In wgpu-py and pygfx we have solved this by generating all reference images on CI (with Lavapipe).

Guide

This library (wgpu) presents a Pythonic API for the WebGPU spec. It is an API to control graphics hardware. Like OpenGL but modern. Or like Vulkan but higher level. GPU programming is a craft that requires knowledge of how GPUs work.

Getting started

Selecting the backend

To use wgpu, you must select a backend. Eventually there may be multiple backends, but at the moment there is only one backend, which is based on the Rust library wgpu-native. You select the backend by importing it:

import wgpu.backends.rs

Creating a canvas

If you want to render to the screen, you need a canvas. Multiple GUI toolkits are supported, see the GUI API. In general, it’s easiest to let wgpu select a GUI automatically:

from wgpu.gui.auto import WgpuCanvas, run

canvas = WgpuCanvas(title="a wgpu example")

Next, we can set up the render context, which we will need later on. (Note that this uses the device, which we will create in the next step.)

present_context = canvas.get_context()
render_texture_format = present_context.get_preferred_format(device.adapter)
present_context.configure(device=device, format=render_texture_format)

Obtaining a device

The next step is to obtain an adapter, which represents an abstract render device. You can pass it the canvas that you just created, or pass None for the canvas if you have none (e.g. for compute or offscreen rendering). From the adapter, you can obtain a device. This will be the root object from which most GPU objects will be created.

adapter = wgpu.request_adapter(canvas=canvas, power_preference="high-performance")
device = adapter.request_device()

Creating buffers, textures, shaders, etc.

Using the device, you can create buffers, textures, write shader code, and put these together into pipeline objects. How to do this depends a lot on what you want to achieve, and is therefore out of scope for this guide. Have a look at the examples or some of the tutorials that we link to below.
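
As a minimal sketch (assuming the device from the previous step, and numpy for the data), uploading some vertex data into a buffer could look like this:

import numpy as np

vertex_data = np.array([[0.0, 0.5], [-0.5, -0.5], [0.5, -0.5]], dtype=np.float32)
vertex_buffer = device.create_buffer_with_data(
    data=vertex_data, usage=wgpu.BufferUsage.VERTEX
)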

Setting up a draw function

Let’s now define a function that will actually draw the stuff we put together in the previous step.

def draw_frame():

    # We'll record commands that we do on a render pass object
    command_encoder = device.create_command_encoder()
    current_texture_view = present_context.get_current_texture()
    render_pass = command_encoder.begin_render_pass(
        color_attachments=[
            {
                "view": current_texture_view,
                "resolve_target": None,
                "clear_value": (1, 1, 1, 1),
                "load_op": wgpu.LoadOp.clear,
                "store_op": wgpu.StoreOp.store,
            }
        ],
    )

    # Perform commands, something like ...
    render_pass.set_pipeline(...)
    render_pass.set_index_buffer(...)
    render_pass.set_vertex_buffer(...)
    render_pass.set_bind_group(...)
    render_pass.draw_indexed(...)

    # When done, submit the commands to the device queue.
    render_pass.end()
    device.queue.submit([command_encoder.finish()])

    # If you want to draw continuously, request a new draw right now
    canvas.request_draw()

Starting the event loop

We can now pass the above render function to the canvas. The canvas will then call the function whenever it (re)draws the window. And finally, we call run() to enter the mainloop.

canvas.request_draw(draw_frame)
run()

Offscreen

If you render offscreen, or only do compute, you do not need a canvas. You also won’t need a GUI toolkit, a draw function, or the event loop. Instead, you obtain a command encoder and submit its commands to the queue directly.
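
A minimal sketch of that flow (no canvas, no event loop) could look like this:

import wgpu
import wgpu.backends.rs  # select the wgpu-native backend

adapter = wgpu.request_adapter(canvas=None, power_preference="high-performance")
device = adapter.request_device()

command_encoder = device.create_command_encoder()
# ... encode compute passes and/or copy commands here ...
device.queue.submit([command_encoder.finish()])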

Examples and external resources

Examples that show wgpu-py in action:

Note

The examples in the main branch of the repository may not match the pip installable version. Be sure to refer to the examples from the git tag that matches the version of wgpu you have installed.

External resources:

A brief history of WebGPU

For years, OpenGL has been the only cross-platform API to talk to the GPU. But over time OpenGL has grown into an inconsistent and complex API …

OpenGL is dying — Dzmitry Malyshau at Fosdem 2020

In recent years, modern APIs have emerged that solve many of OpenGL’s problems. You may have heard of Vulkan, Metal, and DX12. These APIs are much closer to the hardware, which makes the drivers more consistent and reliable. Unfortunately, the huge number of “knobs to turn” also makes them quite hard to work with for developers.

Therefore, higher-level APIs are needed, which use the same concepts, but are much easier to work with. The most notable one is the WebGPU specification. This is what future devs will be using to write GPU code for the browser, and for desktop and mobile as well.

As the WebGPU spec is being developed, a reference implementation is also being built. It’s written in Rust and powers the WebGPU implementation in Firefox. This reference implementation, called wgpu, also exposes a C API (via wgpu-native), so that it can be wrapped in Python. And this is precisely what wgpu-py does.

So in short, wgpu-py is a Python wrapper of wgpu, which is a desktop implementation of WebGPU, an API that wraps Vulkan, Metal and DX12, which talk to the GPU hardware.

Coordinate system

In wgpu, the Y-axis is up in normalized device coordinates (NDC): point(-1.0, -1.0) in NDC is located at the bottom-left corner of NDC. In addition, x and y in NDC should be between -1.0 and 1.0 inclusive, while z in NDC should be between 0.0 and 1.0 inclusive. Vertices outside this range will not cause errors; they will simply be clipped.

Array data

The wgpu library makes no assumptions about how you store your data. In places where you provide data to the API, it can consume any data that supports the buffer protocol, which includes bytes, bytearray, memoryview, ctypes arrays, and numpy arrays.

In places where data is returned, the API returns a memoryview object. These objects provide a quite versatile view on the underlying data:

# One could, for instance, read the contents of a buffer
m = buffer.read_data()
# Cast it to float32
m = m.cast("f")
# Index it
m[0]
# Show the content
print(m.tolist())

Chances are that you prefer Numpy. Converting the memoryview to a numpy array (without copying the data) is easy:

import numpy as np
array = np.frombuffer(m, np.float32)

Debugging

If the default wgpu-backend causes issues, or if you want to run on a different backend for another reason, you can set the WGPU_BACKEND_TYPE environment variable to “Vulkan”, “Metal”, “D3D12”, “D3D11”, or “OpenGL”.

The log messages produced (by Rust) in wgpu-native are captured and injected into Python’s “wgpu” logger. One can set the log level to “INFO” or even “DEBUG” to get detailed logging information.
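
For example, using Python’s standard logging module:

import logging

logging.basicConfig()
logging.getLogger("wgpu").setLevel(logging.DEBUG)  # or logging.INFO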

Many GPU objects can be given a string label. This label will be used in Rust validation errors, and is also used in e.g. RenderDoc to identify objects. Additionally, you can insert debug markers at the render/compute pass object, which will then show up in RenderDoc.
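
A small sketch (assuming a render pass encoder as in the guide above) of grouping commands and adding a marker so they are easy to find in RenderDoc:

render_pass.push_debug_group("draw the mesh")
render_pass.insert_debug_marker("about to issue the draw call")
render_pass.draw_indexed(...)
render_pass.pop_debug_group()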

Eventually, wgpu-native will fully validate API input. Until then, it may be worthwhile to enable the Vulkan validation layers. To do so, run a debug build of wgpu-native and make sure that the Lunar Vulkan SDK is installed.

You can run your application via RenderDoc, which is able to capture a frame, including all API calls, objects and the complete pipeline state, and display all of that information within a nice UI.

You can use adapter.request_device_tracing() to provide a directory path where a trace of all API calls will be written. This trace can then be used to re-play your use-case elsewhere (it’s cross-platform).
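
A minimal sketch (the directory path here is hypothetical):

adapter = wgpu.request_adapter(canvas=None, power_preference="high-performance")
device = adapter.request_device_tracing("./wgpu_trace")  # directory to write the trace to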

Also see wgpu-core’s section on debugging: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications

Freezing apps

wgpu provides a PyInstaller hook to help simplify the freezing process (e.g. it ensures that the wgpu-native DLL is included). This hook requires PyInstaller version 4+.

WGPU

This document describes the wgpu API, which essentially is a Pythonic version of the WebGPU API. It exposes an API for performing operations, such as rendering and computation, on a Graphics Processing Unit.

Note

The WebGPU API is still being developed and occasionally there are backwards incompatible changes. Since we mostly follow the WebGPU API, there may be backwards incompatible changes to wgpu-py too. This will be so until the WebGPU API settles as a standard.

How to read this API

The classes in this API all have a name starting with “GPU”; this helps discern them from flags and enums. These classes are never instantiated directly; new objects are returned by certain methods.

Most methods in this API have no positional arguments; each argument must be referenced by name. Some argument values must be a dict; these can be thought of as “nested” arguments. Many arguments (and dict fields) must be a flag or enum. Flags are integer bitmasks that can be combined with the bitwise OR operator. Enum values are strings in this API. Some arguments have a default value; most do not.
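
A small sketch (assuming a device as in the guide above) showing OR’ed flags and string enum values; the color_attachments dicts in the guide’s draw function are an example of “nested” dict arguments:

# Flags are combined with bitwise OR; enum values are plain strings
buffer = device.create_buffer(
    size=1024,
    usage=wgpu.BufferUsage.VERTEX | wgpu.BufferUsage.COPY_DST,
)
sampler = device.create_sampler(
    mag_filter=wgpu.FilterMode.linear,  # equivalent to passing the string "linear"
)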

Differences from WebGPU

This API is derived from the WebGPU spec, but differs in a few ways. For example, methods that in WebGPU accept a descriptor/struct/dict, here accept the fields in that struct as keyword arguments.

wgpu.base.apidiff Differences of base API:
  • Adds GPU.print_report() - Useful
  • Adds GPUBuffer.map_read() - Alternative to mapping API
  • Adds GPUBuffer.map_write() - Alternative to mapping API
  • Adds GPUCanvasContext.get_preferred_format() - Better place to define the preferred format
  • Adds GPUCanvasContext.present() - Present method is exposed
  • Adds GPUDevice.adapter() - Too useful to not-have
  • Adds GPUDevice.create_buffer_with_data() - replaces WebGPU’s mapping API
  • Adds GPUQueue.read_buffer() - replaces WebGPU’s mapping API
  • Adds GPUQueue.read_texture() - For symmetry, and to help work around the bytes_per_row constraint
  • Adds GPUTexture.size() - Too useful to not-have
  • Adds GPUTextureView.size() - Need to know size e.g. for texture view provided by canvas.
  • Adds GPUTextureView.texture() - Too useful to not-have
  • Changes GPU.get_preferred_canvas_format() - Disabled because we put it on the canvas context.
  • Changes GPU.request_adapter() - arguments include a canvas object
  • Changes GPU.request_adapter_async() - arguments include a canvas object
  • Hides GPUBuffer.get_mapped_range()
  • Hides GPUBuffer.map_async()
  • Hides GPUBuffer.unmap()
  • Hides GPUDevice.import_external_texture() - Specific to browsers.
  • Hides GPUDevice.pop_error_scope()
  • Hides GPUDevice.push_error_scope()
  • Hides GPUQueue.copy_external_image_to_texture() - Specific to browsers.

Each backend may also implement minor differences (usually additions) from the base API. For the rs backend check print(wgpu.backends.rs.apidiff.__doc__).

Overview

This overview attempts to describe how all classes fit together. Scroll down for a list of all flags, enums, structs, and GPU classes.

Adapter, device and canvas

The GPU represents the root namespace that contains the entrypoint to request an adapter.

The GPUAdapter represents a hardware or software device, with specific features, limits and properties. To actually start using that hardware for computations or rendering, a GPUDevice object must be requested from the adapter. This is a logical unit to control your hardware (or software). The device is the central object; most other GPU objects are created from it. Also see the convenience function wgpu.utils.get_default_device(). Information on the adapter can be obtained using TODO in the form of a GPUAdapterInfo.

A device is controlled with a specific backend API. By default one is selected automatically. This can be overridden by setting the WGPU_BACKEND_TYPE environment variable to “Vulkan”, “Metal”, “D3D12”, “D3D11”, or “OpenGL”.

The device and all objects created from it inherit from GPUObjectBase - they represent something on the GPU.

In most render use-cases you want the result to be presented to a canvas on the screen. The GPUCanvasContext is the bridge between wgpu and the underlying GUI backend.

Buffers and textures

A GPUBuffer can be created from a device. It is used to hold data that can be uploaded using its API. From the shader’s point of view, the buffer can be accessed as a typed array.

A GPUTexture is similar to a buffer, but has some image-specific features. A texture can be 1D, 2D or 3D, and can have multiple levels of detail (i.e. lods or mipmaps). The texture itself represents the raw data; you can create one or more GPUTextureView objects for it, which can be attached to a shader.

To let a shader sample from a texture, you also need a GPUSampler that defines the filtering and sampling behavior beyond the edges.
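
A minimal sketch (assuming a device as above; the size and formats are arbitrary) that creates a texture, a view, and a sampler:

texture = device.create_texture(
    size=(256, 256, 1),
    format=wgpu.TextureFormat.rgba8unorm,
    usage=wgpu.TextureUsage.TEXTURE_BINDING | wgpu.TextureUsage.COPY_DST,
)
texture_view = texture.create_view()
sampler = device.create_sampler(mag_filter="linear", min_filter="linear")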

WebGPU also defines the GPUExternalTexture, but this is not (yet?) used in wgpu-py.

Bind groups

Shaders need access to resources like buffers, texture views, and samplers. Access to these resources occurs via so-called bindings. These are integer slots, which must be specified both via the API and in the shader.

Bindings are organized into GPUBindGroup objects, which are essentially a list of GPUBinding objects.

Further, in wgpu you need to specify a GPUBindGroupLayout, providing meta-information about the binding (type, texture dimension etc.).

Multiple bind group layouts are collected in a GPUPipelineLayout, which represents a complete layout description for a pipeline.
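
A rough sketch (assuming a device, a hypothetical uniform_buffer, and the texture_view and sampler from the sketch above) of how layouts and bind groups fit together:

bind_group_layout = device.create_bind_group_layout(entries=[
    {"binding": 0, "visibility": wgpu.ShaderStage.FRAGMENT,
     "buffer": {"type": wgpu.BufferBindingType.uniform}},
    {"binding": 1, "visibility": wgpu.ShaderStage.FRAGMENT,
     "sampler": {"type": wgpu.SamplerBindingType.filtering}},
    {"binding": 2, "visibility": wgpu.ShaderStage.FRAGMENT,
     "texture": {"sample_type": wgpu.TextureSampleType.float}},
])
bind_group = device.create_bind_group(layout=bind_group_layout, entries=[
    {"binding": 0, "resource": {"buffer": uniform_buffer, "offset": 0, "size": uniform_buffer.size}},
    {"binding": 1, "resource": sampler},
    {"binding": 2, "resource": texture_view},
])
pipeline_layout = device.create_pipeline_layout(bind_group_layouts=[bind_group_layout])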

Shaders and pipelines

The wgpu API knows three kinds of shaders: compute, vertex and fragment. Pipelines define how the shader is run, and with what resources.

Shaders are represented by a GPUShaderModule.

Compute shaders are combined with a pipeline layout into a GPUComputePipeline. Similarly, a vertex and (optional) fragment shader are combined with a pipeline layout into a GPURenderPipeline. Both of these inherit from GPUPipelineBase.
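
A small sketch (assuming a device and the pipeline layout from the previous section; the WGSL source is a placeholder) that creates a compute pipeline:

shader_module = device.create_shader_module(code="""
@compute @workgroup_size(1)
fn main() {
    // ... compute shader code ...
}
""")
compute_pipeline = device.create_compute_pipeline(
    layout=pipeline_layout,
    compute={"module": shader_module, "entry_point": "main"},
)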

Command buffers and encoders

The actual rendering occurs by recording a series of commands and then submitting these commands.

The root object to generate commands with is the GPUCommandEncoder. This class inherits from GPUCommandsMixin (because it generates commands), and GPUDebugCommandsMixin (because it supports debugging).

Commands specific to compute and rendering are generated with a GPUComputePassEncoder and GPURenderPassEncoder respectively. You get these from the command encoder by the corresponding begin_x_pass() method. These pass encoders inherit from GPUBindingCommandsMixin (because you associate a pipeline) and the latter also from GPURenderCommandsMixin.

When you’re done generating commands, you call finish() and get the list of commands as an opaque object: the GPUCommandBuffer. You don’t really use this object except for submitting it to the GPUQueue.

The command buffers are one-time use. The GPURenderBundle and GPURenderBundleEncoder can be used to record commands to be used multiple times, but this is not yet implemented in wgpu-py.

Error handling

Errors are caught and logged using the wgpu logger.

Todo: document the role of these classes: GPUUncapturedErrorEvent GPUError GPUValidationError GPUOutOfMemoryError GPUInternalError GPUPipelineError GPUDeviceLostInfo

TODO

These classes are not supported and/or documented yet. GPUCompilationMessage GPUCompilationInfo GPUQuerySet

List of flags, enums, and structs

Flags

Flags are bitmasks; zero or multiple fields can be set at the same time. These flags are also available in the root wgpu namespace.

wgpu.flags.BufferUsage =
  • “MAP_READ” (1)
  • “MAP_WRITE” (2)
  • “COPY_SRC” (4)
  • “COPY_DST” (8)
  • “INDEX” (16)
  • “VERTEX” (32)
  • “UNIFORM” (64)
  • “STORAGE” (128)
  • “INDIRECT” (256)
  • “QUERY_RESOLVE” (512)
wgpu.flags.ColorWrite =
  • “RED” (1)
  • “GREEN” (2)
  • “BLUE” (4)
  • “ALPHA” (8)
  • “ALL” (15)
class wgpu.flags.Flags(name, **kwargs)
wgpu.flags.MapMode =
  • “READ” (1)
  • “WRITE” (2)
wgpu.flags.ShaderStage =
  • “VERTEX” (1)
  • “FRAGMENT” (2)
  • “COMPUTE” (4)
wgpu.flags.TextureUsage =
  • “COPY_SRC” (1)
  • “COPY_DST” (2)
  • “TEXTURE_BINDING” (4)
  • “STORAGE_BINDING” (8)
  • “RENDER_ATTACHMENT” (16)

Enums

Enums are choices; exactly one field must be selected. These enums are also available in the root wgpu namespace.

wgpu.enums.AddressMode =
  • “clamp_to_edge”
  • “repeat”
  • “mirror_repeat”
wgpu.enums.AutoLayoutMode =
  • “auto”
wgpu.enums.BlendFactor =
  • “zero”
  • “one”
  • “src”
  • “one_minus_src”
  • “src_alpha”
  • “one_minus_src_alpha”
  • “dst”
  • “one_minus_dst”
  • “dst_alpha”
  • “one_minus_dst_alpha”
  • “src_alpha_saturated”
  • “constant”
  • “one_minus_constant”
wgpu.enums.BlendOperation =
  • “add”
  • “subtract”
  • “reverse_subtract”
  • “min”
  • “max”
wgpu.enums.BufferBindingType =
  • “uniform”
  • “storage”
  • “read_only_storage”
wgpu.enums.BufferMapState =
  • “unmapped”
  • “pending”
  • “mapped”
wgpu.enums.CanvasAlphaMode =
  • “opaque”
  • “premultiplied”
wgpu.enums.CompareFunction =
  • “never”
  • “less”
  • “equal”
  • “less_equal”
  • “greater”
  • “not_equal”
  • “greater_equal”
  • “always”
wgpu.enums.CompilationMessageType =
  • “error”
  • “warning”
  • “info”
wgpu.enums.ComputePassTimestampLocation =
  • “beginning”
  • “end”
wgpu.enums.CullMode =
  • “none”
  • “front”
  • “back”
wgpu.enums.DeviceLostReason =
  • “destroyed”
class wgpu.enums.Enum(name, **kwargs)
wgpu.enums.ErrorFilter =
  • “validation”
  • “out_of_memory”
  • “internal”
wgpu.enums.FeatureName =
  • “depth_clip_control”
  • “depth32float_stencil8”
  • “texture_compression_bc”
  • “texture_compression_etc2”
  • “texture_compression_astc”
  • “timestamp_query”
  • “indirect_first_instance”
  • “shader_f16”
  • “rg11b10ufloat_renderable”
wgpu.enums.FilterMode =
  • “nearest”
  • “linear”
wgpu.enums.FrontFace =
  • “ccw”
  • “cw”
wgpu.enums.IndexFormat =
  • “uint16”
  • “uint32”
wgpu.enums.LoadOp =
  • “load”
  • “clear”
wgpu.enums.MipmapFilterMode =
  • “nearest”
  • “linear”
wgpu.enums.PipelineErrorReason =
  • “validation”
  • “internal”
wgpu.enums.PowerPreference =
  • “low_power”
  • “high_performance”
wgpu.enums.PrimitiveTopology =
  • “point_list”
  • “line_list”
  • “line_strip”
  • “triangle_list”
  • “triangle_strip”
wgpu.enums.QueryType =
  • “occlusion”
  • “timestamp”
wgpu.enums.RenderPassTimestampLocation =
  • “beginning”
  • “end”
wgpu.enums.SamplerBindingType =
  • “filtering”
  • “non_filtering”
  • “comparison”
wgpu.enums.StencilOperation =
  • “keep”
  • “zero”
  • “replace”
  • “invert”
  • “increment_clamp”
  • “decrement_clamp”
  • “increment_wrap”
  • “decrement_wrap”
wgpu.enums.StorageTextureAccess =
  • “write_only”
wgpu.enums.StoreOp =
  • “store”
  • “discard”
wgpu.enums.TextureAspect =
  • “all”
  • “stencil_only”
  • “depth_only”
wgpu.enums.TextureDimension =
  • “d1”
  • “d2”
  • “d3”
wgpu.enums.TextureFormat =
  • “r8unorm”
  • “r8snorm”
  • “r8uint”
  • “r8sint”
  • “r16uint”
  • “r16sint”
  • “r16float”
  • “rg8unorm”
  • “rg8snorm”
  • “rg8uint”
  • “rg8sint”
  • “r32uint”
  • “r32sint”
  • “r32float”
  • “rg16uint”
  • “rg16sint”
  • “rg16float”
  • “rgba8unorm”
  • “rgba8unorm_srgb”
  • “rgba8snorm”
  • “rgba8uint”
  • “rgba8sint”
  • “bgra8unorm”
  • “bgra8unorm_srgb”
  • “rgb9e5ufloat”
  • “rgb10a2unorm”
  • “rg11b10ufloat”
  • “rg32uint”
  • “rg32sint”
  • “rg32float”
  • “rgba16uint”
  • “rgba16sint”
  • “rgba16float”
  • “rgba32uint”
  • “rgba32sint”
  • “rgba32float”
  • “stencil8”
  • “depth16unorm”
  • “depth24plus”
  • “depth24plus_stencil8”
  • “depth32float”
  • “depth32float_stencil8”
  • “bc1_rgba_unorm”
  • “bc1_rgba_unorm_srgb”
  • “bc2_rgba_unorm”
  • “bc2_rgba_unorm_srgb”
  • “bc3_rgba_unorm”
  • “bc3_rgba_unorm_srgb”
  • “bc4_r_unorm”
  • “bc4_r_snorm”
  • “bc5_rg_unorm”
  • “bc5_rg_snorm”
  • “bc6h_rgb_ufloat”
  • “bc6h_rgb_float”
  • “bc7_rgba_unorm”
  • “bc7_rgba_unorm_srgb”
  • “etc2_rgb8unorm”
  • “etc2_rgb8unorm_srgb”
  • “etc2_rgb8a1unorm”
  • “etc2_rgb8a1unorm_srgb”
  • “etc2_rgba8unorm”
  • “etc2_rgba8unorm_srgb”
  • “eac_r11unorm”
  • “eac_r11snorm”
  • “eac_rg11unorm”
  • “eac_rg11snorm”
  • “astc_4x4_unorm”
  • “astc_4x4_unorm_srgb”
  • “astc_5x4_unorm”
  • “astc_5x4_unorm_srgb”
  • “astc_5x5_unorm”
  • “astc_5x5_unorm_srgb”
  • “astc_6x5_unorm”
  • “astc_6x5_unorm_srgb”
  • “astc_6x6_unorm”
  • “astc_6x6_unorm_srgb”
  • “astc_8x5_unorm”
  • “astc_8x5_unorm_srgb”
  • “astc_8x6_unorm”
  • “astc_8x6_unorm_srgb”
  • “astc_8x8_unorm”
  • “astc_8x8_unorm_srgb”
  • “astc_10x5_unorm”
  • “astc_10x5_unorm_srgb”
  • “astc_10x6_unorm”
  • “astc_10x6_unorm_srgb”
  • “astc_10x8_unorm”
  • “astc_10x8_unorm_srgb”
  • “astc_10x10_unorm”
  • “astc_10x10_unorm_srgb”
  • “astc_12x10_unorm”
  • “astc_12x10_unorm_srgb”
  • “astc_12x12_unorm”
  • “astc_12x12_unorm_srgb”
wgpu.enums.TextureSampleType =
  • “float”
  • “unfilterable_float”
  • “depth”
  • “sint”
  • “uint”
wgpu.enums.TextureViewDimension =
  • “d1”
  • “d2”
  • “d2_array”
  • “cube”
  • “cube_array”
  • “d3”
wgpu.enums.VertexFormat =
  • “uint8x2”
  • “uint8x4”
  • “sint8x2”
  • “sint8x4”
  • “unorm8x2”
  • “unorm8x4”
  • “snorm8x2”
  • “snorm8x4”
  • “uint16x2”
  • “uint16x4”
  • “sint16x2”
  • “sint16x4”
  • “unorm16x2”
  • “unorm16x4”
  • “snorm16x2”
  • “snorm16x4”
  • “float16x2”
  • “float16x4”
  • “float32”
  • “float32x2”
  • “float32x3”
  • “float32x4”
  • “uint32”
  • “uint32x2”
  • “uint32x3”
  • “uint32x4”
  • “sint32”
  • “sint32x2”
  • “sint32x3”
  • “sint32x4”
wgpu.enums.VertexStepMode =
  • “vertex”
  • “instance”

Structs

The structs in wgpu-py are represented as Python dictionaries. Fields that have default values (as indicated below) may be omitted.

wgpu.structs.BindGroupDescriptor =
wgpu.structs.BindGroupEntry =
  • binding :: int
  • resource :: Union[GPUExternalTexture, GPUSampler, GPUTextureView, structs.BufferBinding]
wgpu.structs.BindGroupLayoutDescriptor =
wgpu.structs.BindGroupLayoutEntry =
wgpu.structs.BlendComponent =
wgpu.structs.BlendState =
wgpu.structs.BufferBinding =
  • buffer :: GPUBuffer
  • offset :: int = 0
  • size :: int = None
wgpu.structs.BufferBindingLayout =
wgpu.structs.BufferDescriptor =
  • label :: str = None
  • size :: int
  • usage :: flags.BufferUsage
  • mappedAtCreation :: bool = false
wgpu.structs.CanvasConfiguration =
wgpu.structs.Color =
  • r :: float
  • g :: float
  • b :: float
  • a :: float
wgpu.structs.ColorTargetState =
wgpu.structs.CommandBufferDescriptor =
  • label :: str = None
wgpu.structs.CommandEncoderDescriptor =
  • label :: str = None
wgpu.structs.ComputePassDescriptor =
wgpu.structs.ComputePassTimestampWrite =
wgpu.structs.ComputePipelineDescriptor =
wgpu.structs.DepthStencilState =
wgpu.structs.DeviceDescriptor =
wgpu.structs.Extent3D =
  • width :: int
  • height :: int = 1
  • depthOrArrayLayers :: int = 1
wgpu.structs.ExternalTextureDescriptor =
  • label :: str = None
  • source :: object
  • colorSpace :: str = “srgb”
wgpu.structs.FragmentState =
  • module :: GPUShaderModule
  • entryPoint :: str
  • constants :: Dict[str, float] = None
  • targets :: List[structs.ColorTargetState]
wgpu.structs.ImageCopyBuffer =
  • offset :: int = 0
  • bytesPerRow :: int = None
  • rowsPerImage :: int = None
  • buffer :: GPUBuffer
wgpu.structs.ImageCopyExternalImage =
  • source :: Union[memoryview, object]
  • origin :: Union[List[int], structs.Origin2D] = {}
  • flipY :: bool = false
wgpu.structs.ImageCopyTexture =
wgpu.structs.ImageDataLayout =
  • offset :: int = 0
  • bytesPerRow :: int = None
  • rowsPerImage :: int = None
wgpu.structs.MultisampleState =
  • count :: int = 1
  • mask :: int = 0xFFFFFFFF
  • alphaToCoverageEnabled :: bool = false
wgpu.structs.Origin2D =
  • x :: int = 0
  • y :: int = 0
wgpu.structs.Origin3D =
  • x :: int = 0
  • y :: int = 0
  • z :: int = 0
wgpu.structs.PipelineErrorInit =
wgpu.structs.PipelineLayoutDescriptor =
  • label :: str = None
  • bindGroupLayouts :: List[GPUBindGroupLayout]
wgpu.structs.PrimitiveState =
wgpu.structs.ProgrammableStage =
  • module :: GPUShaderModule
  • entryPoint :: str
  • constants :: Dict[str, float] = None
wgpu.structs.QuerySetDescriptor =
wgpu.structs.QueueDescriptor =
  • label :: str = None
wgpu.structs.RenderBundleDescriptor =
  • label :: str = None
wgpu.structs.RenderBundleEncoderDescriptor =
  • label :: str = None
  • colorFormats :: List[enums.TextureFormat]
  • depthStencilFormat :: enums.TextureFormat = None
  • sampleCount :: int = 1
  • depthReadOnly :: bool = false
  • stencilReadOnly :: bool = false
wgpu.structs.RenderPassColorAttachment =
wgpu.structs.RenderPassDepthStencilAttachment =
  • view :: GPUTextureView
  • depthClearValue :: float = 0
  • depthLoadOp :: enums.LoadOp = None
  • depthStoreOp :: enums.StoreOp = None
  • depthReadOnly :: bool = false
  • stencilClearValue :: int = 0
  • stencilLoadOp :: enums.LoadOp = None
  • stencilStoreOp :: enums.StoreOp = None
  • stencilReadOnly :: bool = false
wgpu.structs.RenderPassDescriptor =
wgpu.structs.RenderPassLayout =
wgpu.structs.RenderPassTimestampWrite =
wgpu.structs.RenderPipelineDescriptor =
wgpu.structs.RequestAdapterOptions =
wgpu.structs.SamplerBindingLayout =
wgpu.structs.SamplerDescriptor =
wgpu.structs.ShaderModuleCompilationHint =
wgpu.structs.ShaderModuleDescriptor =
wgpu.structs.StencilFaceState =
wgpu.structs.StorageTextureBindingLayout =
class wgpu.structs.Struct(name, **kwargs)
wgpu.structs.TextureBindingLayout =
wgpu.structs.TextureDescriptor =
wgpu.structs.TextureViewDescriptor =
wgpu.structs.UncapturedErrorEventInit =
  • error :: GPUError
wgpu.structs.VertexAttribute =
wgpu.structs.VertexBufferLayout =
wgpu.structs.VertexState =
  • module :: GPUShaderModule
  • entryPoint :: str
  • constants :: Dict[str, float] = None
  • buffers :: List[structs.VertexBufferLayout] = []

List of GPU classes

GPU The entrypoint to the wgpu API.
GPUAdapterInfo Represents information about an adapter.
GPUAdapter Represents an abstract wgpu implementation.
GPUBindGroup Represents a group of resource bindings (buffer, sampler, texture-view).
GPUBindGroupLayout Defines the interface between a set of resources bound in a GPUBindGroup.
GPUBindingCommandsMixin Mixin for classes that define bindings.
GPUBuffer Represents a block of memory that can be used in GPU operations.
GPUCanvasContext Represents a context to configure a canvas.
GPUCommandBuffer Stores a series of commands generated by a GPUCommandEncoder.
GPUCommandEncoder Object to record a series of commands.
GPUCommandsMixin Mixin for classes that encode commands.
GPUCompilationInfo TODO
GPUCompilationMessage An object that contains information about a problem with shader compilation.
GPUComputePassEncoder Object to record commands for a compute pass.
GPUComputePipeline Represents a single pipeline for computations (no rendering).
GPUDebugCommandsMixin Mixin for classes that support debug groups and markers.
GPUDevice The top-level interface through which GPU objects are created.
GPUDeviceLostInfo An object that contains information about the device being lost.
GPUError A generic GPU error.
GPUExternalTexture Ignore this - specific to browsers.
GPUInternalError An error raised for implementation-specific reasons.
GPUObjectBase The base class for all GPU objects.
GPUOutOfMemoryError An error raised when the GPU is out of memory.
GPUPipelineBase A mixin class for render and compute pipelines.
GPUPipelineError An error raised when a pipeline could not be created.
GPUPipelineLayout Describes the layout of a pipeline, as a list of GPUBindGroupLayout objects.
GPUQuerySet TODO
GPUQueue Object to submit command buffers to.
GPURenderBundle TODO: not yet wrapped.
GPURenderBundleEncoder TODO: not yet wrapped
GPURenderCommandsMixin Mixin for classes that provide rendering commands.
GPURenderPassEncoder Object to record commands for a render pass.
GPURenderPipeline Represents a single pipeline to draw something.
GPUSampler Defines how a texture (view) must be sampled by the shader.
GPUShaderModule Represents a programmable shader.
GPUTexture Represents a 1D, 2D or 3D color image object.
GPUTextureView Represents a view on a GPUTexture.
GPUUncapturedErrorEvent TODO
GPUValidationError An error raised when the pipeline could not be validated.

GUI API

You can use vanilla wgpu for compute tasks and to render offscreen. To render to a window on screen we need a canvas. Since the Python ecosystem provides many different GUI toolkits, wgpu implements a base canvas class, and has builtin support for a few GUI toolkits. At the moment these include GLFW, Jupyter, Qt, and wx.

The Canvas base classes

WgpuCanvasInterface The minimal interface to be a valid canvas.
WgpuCanvasBase A canvas class that provides a basis for all GUI toolkits.
WgpuAutoGui Mixin class for canvases implementing autogui.
WgpuOffscreenCanvas Base class for off-screen canvases.

For each supported GUI toolkit there is a module that implements a WgpuCanvas class, which inherits from WgpuCanvasBase, providing a common API. The GLFW, Qt, and Jupyter backends also inherit from WgpuAutoGui to include support for events (interactivity). In the next sections we demonstrate the different canvas classes that you can use.

The auto GUI backend

The default approach for examples and small applications is to use the automatically selected GUI backend. At the moment this selects either the GLFW, Qt, or Jupyter backend, depending on the environment.

To implement interaction, the canvas has a WgpuAutoGui.handle_event() method that can be overloaded. Alternatively you can use its WgpuAutoGui.add_event_handler() method. See the event spec for details about the event objects.
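
For example, a minimal sketch of registering a handler for pointer-down events (the handler name is arbitrary; the event type string follows the event spec):

def on_pointer_down(event):
    # The event is a dict; see the event spec for its fields
    print(event)

canvas.add_event_handler(on_pointer_down, "pointer_down")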

Also see the triangle auto example and cube example.

from wgpu.gui.auto import WgpuCanvas, run, call_later

canvas = WgpuCanvas(title="Example")
canvas.request_draw(your_draw_function)

run()

Support for GLFW

GLFW is a lightweight windowing toolkit. Install it with pip install glfw. The preferred approach is to use the auto backend, but you can replace from wgpu.gui.auto with from wgpu.gui.glfw to force using GLFW.

from wgpu.gui.glfw import WgpuCanvas, run, call_later

canvas = WgpuCanvas(title="Example")
canvas.request_draw(your_draw_function)

run()

Support for Qt

There is support for PyQt5, PyQt6, PySide2 and PySide6. The wgpu library detects what library you are using by looking what module has been imported. For a toplevel widget, the gui.qt.WgpuCanvas class can be imported. If you want to embed the canvas as a subwidget, use gui.qt.WgpuWidget instead.

Also see the Qt triangle example and Qt triangle embed example.

# Import any of the Qt libraries before importing the WgpuCanvas.
# This way wgpu knows which Qt library to use.
from PySide6 import QtWidgets
from wgpu.gui.qt import WgpuCanvas

app = QtWidgets.QApplication([])

# Instantiate the canvas
canvas = WgpuCanvas(title="Example")

# Tell the canvas what drawing function to call
canvas.request_draw(your_draw_function)

app.exec_()

Support for wx

There is support for embedding a wgpu visualization in wxPython. For a toplevel widget, the gui.wx.WgpuCanvas class can be imported. If you want to embed the canvas as a subwidget, use gui.wx.WgpuWidget instead.

Also see the wx triangle example and wx triangle embed example.

import wx
from wgpu.gui.wx import WgpuCanvas

app = wx.App()

# Instantiate the canvas
canvas = WgpuCanvas(title="Example")

# Tell the canvas what drawing function to call
canvas.request_draw(your_draw_function)

app.MainLoop()

Support for offscreen

You can also use a “fake” canvas to draw offscreen and get the result as a numpy array. Note that you can render to a texture without using any canvas object, but in some cases it’s convenient to do so with a canvas-like API.

from wgpu.gui.offscreen import WgpuCanvas

# Instantiate the canvas
canvas = WgpuCanvas(size=(500, 400), pixel_ratio=1)

# ...

# Tell the canvas what drawing function to call
canvas.request_draw(your_draw_function)

# Perform a draw
array = canvas.draw()  # numpy array with shape (400, 500, 4)

Support for Jupyter lab and notebook

WGPU can be used in Jupyter lab and the Jupyter notebook. This canvas is based on jupyter_rfb, an ipywidget subclass implementing a remote frame-buffer. There are also some wgpu examples.

# from wgpu.gui.jupyter import WgpuCanvas  # Direct approach
from wgpu.gui.auto import WgpuCanvas  # Approach compatible with desktop usage

canvas = WgpuCanvas()

# ... wgpu code

canvas  # Use as cell output

Utils

The wgpu library provides a few utilities. Note that the functions below need to be explicitly imported.

Get default device

wgpu.utils.get_default_device()

Get a wgpu device object. If this succeeds, it’s likely that the WGPU lib is usable on this system. If not, this call will probably exit (Rust panic). When called multiple times, returns the same global device object (useful for e.g. unit tests).
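
For example:

from wgpu.utils import get_default_device

device = get_default_device()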

Compute with buffers

wgpu.utils.compute_with_buffers(input_arrays, output_arrays, shader, n=None)

Apply the given compute shader to the given input_arrays and return output arrays. Both input and output arrays are represented on the GPU using storage buffer objects.

Parameters:
  • input_arrays (dict) – A dict mapping int bindings to arrays. The array can be anything that supports the buffer protocol, including bytes, memoryviews, ctypes arrays and numpy arrays. The type and shape of the array does not need to match the type with which the shader will interpret the buffer data (though it probably makes your code easier to follow).
  • output_arrays (dict) – A dict mapping int bindings to output shapes. If the value is int, it represents the size (in bytes) of the buffer. If the value is a tuple, its last element specifies the format (see below), and the preceding elements specify the shape. These are used to cast() the memoryview object before it is returned. If the value is a ctypes array type, the result will be cast to that instead of a memoryview. Note that any buffer that is NOT in the output arrays dict will be considered readonly in the shader.
  • shader (str or bytes) – The shader as a string of WGSL code or SpirV bytes.
  • n (int, tuple, optional) – The dispatch counts. Can be an int or a 3-tuple of ints to specify (x, y, z). If not given or None, the length of the first output array type is used.
Returns:

A dict mapping int bindings to memoryviews.

Return type:

output (dict)

The format characters to cast a memoryview are hard to remember, so here’s a refresher:

  • “b” and “B” are signed and unsigned 8-bit ints.
  • “h” and “H” are signed and unsigned 16-bit ints.
  • “i” and “I” are signed and unsigned 32-bit ints.
  • “e” and “f” are 16-bit and 32-bit floats.
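
A minimal sketch of a typical call (the shader simply doubles each value; names are illustrative):

import numpy as np
import wgpu.backends.rs  # select the wgpu-native backend
from wgpu.utils import compute_with_buffers

shader_source = """
@group(0) @binding(0) var<storage, read> data_in: array<i32>;
@group(0) @binding(1) var<storage, read_write> data_out: array<i32>;

@compute @workgroup_size(1)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let i = gid.x;
    data_out[i] = data_in[i] * 2;
}
"""

data = np.arange(16, dtype=np.int32)
out = compute_with_buffers({0: data}, {1: (16, "i")}, shader_source, n=16)
result = np.frombuffer(out[1], np.int32)  # array([0, 2, 4, ...])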

Shadertoy

from wgpu.utils.shadertoy import Shadertoy
