WGPU API

This document describes the wgpu API, a Pythonic version of the WebGPU API. It exposes an interface for performing operations, such as rendering and computation, on a Graphics Processing Unit.

Warning

The WebGPU API is still being developed and occasionally there are backwards-incompatible changes. Since we mostly follow the WebGPU API, there may be backwards-incompatible changes to wgpu-py too. This will remain the case until the WebGPU API settles as a standard.

How to read this API

The classes in this API all have names starting with “GPU”, which helps distinguish them from flags and enums. These classes are never instantiated directly; new objects are returned by certain methods.

Most methods in this API have no positional arguments; each argument must be referenced by name. Some argument values must be a dict; these can be thought of as “nested” arguments.

Many arguments (and dict fields) must be a flag or enum value. Flags are integer bitmasks that can be OR'ed together. Enum values are strings in this API.

Some arguments have a default value. Most do not.

Selecting the backend

Before you can use this API, you have to select a backend. Eventually there may be multiple backends, but at the moment there is only one, based on the Rust library wgpu-native. You select the backend by importing it:

import wgpu.backends.rs

The wgpu-py package comes with the wgpu-native library. If you want to use your own version of that library instead, set the WGPU_LIB_PATH environment variable.
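
If you set WGPU_LIB_PATH, do so before importing the backend. A minimal sketch (the path below is purely illustrative):

import os

os.environ["WGPU_LIB_PATH"] = "/path/to/your/libwgpu_native.so"  # illustrative path

import wgpu.backends.rs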

Adapter

To start using the GPU for computations or rendering, a device object is required. One first requests an adapter, which represents a GPU implementation on the current system. The device can then be requested from the adapter.

wgpu.request_adapter(**parameters)

Get a GPUAdapter, the object that represents an abstract wgpu implementation, from which one can request a GPUDevice.

Parameters:
  • canvas (WgpuCanvasInterface) – The canvas that the adapter should be able to render to (to create a swap chain for, to be precise). Can be None if you’re not rendering to screen (or if you’re confident that the returned adapter will work just fine).
  • powerPreference (PowerPreference) – “high-performance” or “low-power”
wgpu.request_adapter_async(**parameters)

Async version of request_adapter().
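
For illustration, a minimal sketch (argument values are examples only):

import wgpu
import wgpu.backends.rs  # select the backend

adapter = wgpu.request_adapter(canvas=None, powerPreference="high-performance")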

class wgpu.GPUAdapter

An adapter represents both an instance of a hardware accelerator (e.g. GPU or CPU) and an implementation of WGPU on top of that accelerator. If an adapter becomes unavailable, it becomes invalid. Once invalid, it never becomes valid again.

extensions

A tuple that represents the extensions supported by the adapter.

name

A human-readable name identifying the adapter.

request_device(**parameters)

Request a GPUDevice from the adapter.

Parameters:
  • label (str) – A human readable label. Optional.
  • extensions (list of str) – the extensions that you need. Default [].
  • limits (dict) – the various limits that you need. Default {}.
request_device_async(**parameters)

Async version of request_device().
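
For illustration, a minimal sketch continuing from an adapter obtained with request_adapter():

device = adapter.request_device(extensions=[], limits={})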

Device

The device is the central object; most other GPU objects are created from it. It is recommended to request a device object once (or perhaps twice), rather than for every operation (e.g. in unit tests). Also see wgpu.utils.get_default_device().
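
For quick experiments, a minimal sketch using that helper:

import wgpu.utils

# Requests an adapter and device once; reuse this device for all operations.
device = wgpu.utils.get_default_device()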

class wgpu.GPUObject

The base class for all GPU objects (the device and all objects belonging to a device).

label

A human-readable name identifying the GPU object.

class wgpu.GPUDevice

Subclass of GPUObject

A device is the logical instantiation of an adapter, through which internal objects are created. It can be shared across threads. A device is the exclusive owner of all internal objects created from it: when the device is lost, all objects created from it become invalid.

Create a device using GPUAdapter.request_device() or GPUAdapter.request_device_async().

configure_swap_chain(canvas, format, usage=None)

Get a GPUSwapChain object for the given canvas. In the WebGPU spec this is a method of the canvas. In wgpu-py it’s a method of the device.

Parameters:
  • canvas (WgpuCanvasInterface) – An object implementing the canvas interface.
  • format (TextureFormat) – The texture format, e.g. “bgra8unorm-srgb”.
  • usage (TextureUsage) – Default TextureUsage.OUTPUT_ATTACHMENT.
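
For illustration, a minimal sketch (assuming a canvas object implementing the canvas interface):

swap_chain = device.configure_swap_chain(
    canvas,
    device.get_swap_chain_preferred_format(canvas),
)
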
create_bind_group(**parameters)

Create a GPUBindGroup object, which can be used in pass.set_bind_group() to attach a group of resources.

Parameters:
  • label (str) – A human readable label. Optional.
  • layout (GPUBindGroupLayout) – The layout (abstract representation) for this bind group.
  • entries (list of dict) – A list of dicts, see below.

Example entry dicts:

# For a sampler
{
    "binding" : 0,  # slot
    "resource": a_sampler,
}
# For a texture view
{
    "binding" : 0,  # slot
    "resource": a_texture_view,
}
# For a buffer
{
    "binding" : 0,  # slot
    "resource": {
        "buffer": a_buffer,
        "offset": 0,
        "size": 812,
    }
}
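
For illustration, a sketch of the corresponding call (assuming bind_group_layout and a_buffer were created earlier):

bind_group = device.create_bind_group(
    layout=bind_group_layout,
    entries=[
        {
            "binding": 0,
            "resource": {"buffer": a_buffer, "offset": 0, "size": a_buffer.size},
        },
    ],
)
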
create_bind_group_layout(**parameters)

Create a GPUBindGroupLayout object. One or more such objects are passed to create_pipeline_layout() to specify the (abstract) pipeline layout for resources. See the docs on bind groups for details.

Parameters:
  • label (str) – A human readable label. Optional.
  • entries (list of dict) – A list of layout entry dicts.

Example entry dicts:

# Buffer
{
    "binding": 0,
    "visibility": wgpu.ShaderStage.COMPUTE,
    "type": wgpu.BindingType.storage_buffer,
    "has_dynamic_offset": False,  # optional
},
# Sampler
{
    "binding": 1,
    "visibility": wgpu.ShaderStage.COMPUTE,
    "type": wgpu.BindingType.sampler,
},
# Sampled texture
{
    "binding": 2,
    "visibility": wgpu.ShaderStage.FRAGMENT,
    "type": wgpu.BindingType.sampled_texture,
    "view_dimension": wgpu.TextureViewDimension.d2,
    "texture_component_type": wgpu.TextureComponentType.float,
    "multisampled": False,  # optional
},
# Storage texture
{
    "binding": 2,
    "visibility": wgpu.ShaderStage.FRAGMENT,
    "type": wgpu.BindingType.readonly_storage_texture,
    "view_dimension": wgpu.TextureViewDimension.d2,
    "texture_component_type": wgpu.TextureComponentType.float,
    "storage_texture_format": wgpu.TextureFormat.r32float,
    "multisampled": False,  # optional
},

About has_dynamic_offset: For uniform-buffer, storage-buffer, and readonly-storage-buffer bindings, it indicates whether the binding has a dynamic offset. One offset must be passed to set_bind_group for each dynamic binding in increasing order of binding number.
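
For illustration, a minimal sketch of a layout with a single storage-buffer binding:

bind_group_layout = device.create_bind_group_layout(entries=[
    {
        "binding": 0,
        "visibility": wgpu.ShaderStage.COMPUTE,
        "type": wgpu.BindingType.storage_buffer,
    },
])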

create_buffer(**parameters)

Create a GPUBuffer object.

Parameters:
  • label (str) – A human readable label. Optional.
  • size (int) – The size of the buffer in bytes.
  • usage (BufferUsageFlags) – The ways in which this buffer will be used.
create_buffer_mapped(**parameters)

Create a GPUBuffer object that is mapped from the start. It must be unmapped before using it in a pipeline.

Parameters:
  • label (str) – A human readable label. Optional.
  • size (int) – The size of the buffer in bytes.
  • usage (BufferUsageFlags) – The ways in which this buffer will be used.
create_buffer_mapped_async(**parameters)

Async version of create_buffer_mapped().
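
For illustration, a sketch that creates a mapped buffer, fills it via numpy, and unmaps it (the usage flag is an example):

import numpy as np

buffer = device.create_buffer_mapped(size=64, usage=wgpu.BufferUsage.STORAGE)

# View the mapped memory as float32 and write data into it.
np.frombuffer(buffer.mapping, np.float32)[:] = np.arange(16, dtype=np.float32)

# Unmap before using the buffer in a pipeline.
buffer.unmap()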

create_command_encoder(**parameters)

Create a GPUCommandEncoder object. A command encoder is used to record commands, which can then be submitted at once to the GPU.

Parameters:
  • label (str) – A human readable label. Optional.
create_compute_pipeline(**parameters)

Create a GPUComputePipeline object.

Parameters:
  • label (str) – A human readable label. Optional.
  • layout (GPUPipelineLayout) – A layout object created with create_pipeline_layout().
  • compute_stage (dict) – E.g. {"module": shader_module, "entry_point": "main"}.
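
For illustration, a minimal sketch (assuming pipeline_layout and shader_module were created earlier):

compute_pipeline = device.create_compute_pipeline(
    layout=pipeline_layout,
    compute_stage={"module": shader_module, "entry_point": "main"},
)
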
create_pipeline_layout(**parameters)

Create a GPUPipelineLayout object, which can be used in create_render_pipeline() or create_compute_pipeline().

Parameters:
  • label (str) – A human readable label. Optional.
  • bind_group_layouts (list) – A list of GPUBindGroupLayout objects.
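
For illustration, a minimal sketch combining a single bind group layout:

pipeline_layout = device.create_pipeline_layout(bind_group_layouts=[bind_group_layout])
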
create_render_bundle_encoder(**parameters)

Create a GPURenderBundleEncoder object.

TODO: not yet available in wgpu-native

create_render_pipeline(**parameters)

Create a GPURenderPipeline object.

Parameters:
  • label (str) – A human readable label. Optional.
  • layout (GPUPipelineLayout) – A layout created with create_pipeline_layout().
  • vertex_stage (dict) – E.g. {"module": shader_module, "entry_point": "main"}.
  • fragment_stage (dict) – E.g. {"module": shader_module, "entry_point": "main"}. Default None.
  • primitive_topology (PrimitiveTopology) – The topology, e.g. triangles or lines.
  • rasterization_state (dict) – Specify rasterization rules. See below. Default None.
  • color_states (list of dict) – Specify color blending rules. See below.
  • depth_stencil_state (dict) – Specify texture for depth and stencil. See below. Default None.
  • vertex_state (dict) – Specify index and vertex buffer info. See below.
  • sample_count (int) – Set higher than one for multisampling. Default 1.
  • sample_mask (int) – Sample bitmask. Default all ones.
  • alpha_to_coverage_enabled (bool) – Whether to enable alpha coverage. Default False.

In the example dicts below, for the fields marked as optional, the shown value is the default.

Example rasterization state dict:

{
    "front_face": wgpu.FrontFace.ccw,  # optional
    "cull_mode": wgpu.CullMode.none,  # optional
    "depth_bias": 0,  # optional
    "depth_bias_slope_scale": 0.0,  # optional
    "depth_bias_clamp": 0.0  # optional
}

Example color state dict:

{
    "format": wgpu.TextureFormat.bgra8unorm_srgb,
    "alpha_blend": (
        wgpu.BlendFactor.One,
        wgpu.BlendFactor.zero,
        wgpu.BlendOperation.add,
    ),
    "color_blend": (
        wgpu.BlendFactor.One,
        wgpu.BlendFactor.zero,
        gpu.BlendOperation.add,
    ),
    "write_mask": wgpu.ColorWrite.ALL  # optional
}

Example depth-stencil state dict:

{
    "format": wgpu.TextureFormat.depth24plus_stencil8,
    "depth_write_enabled": False,  # optional
    "depth_compare": wgpu.CompareFunction.always,  # optional
    "stencil_front": {  # optional
        "compare": wgpu.CompareFunction.equal,
        "fail_op": wgpu.StencilOperation.keep,
        "depth_fail_op": wgpu.StencilOperation.keep,
        "pass_op": wgpu.StencilOperation.keep,
    },
    "stencil_back": {  # optional
        "compare": wgpu.CompareFunction.equal,
        "fail_op": wgpu.StencilOperation.keep,
        "depth_fail_op": wgpu.StencilOperation.keep,
        "pass_op": wgpu.StencilOperation.keep,
    },
    "stencil_read_mask": 0xFFFFFFFF,  # optional
    "stencil_write_mask": 0xFFFFFFFF,  # optional
}

Example vertex state dict:

{
    "indexFormat": wgpu.IndexFormat.uint32,
    "vertexBuffers": [
        {
            "array_stride": 8,
            "step_mode": wgpu.InputStepMode.vertex,  # optional
            "attributes": [
                {
                    "format": wgpu.VertexFormat.float2,
                    "offset": 0,
                    "shader_location": 0,
                },
                ...
            ],
        },
        ...
    ]
}
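
For illustration, a minimal sketch of a call reusing the example dicts above (the shader modules, pipeline layout, and entry points are assumed; values are examples only):

render_pipeline = device.create_render_pipeline(
    layout=pipeline_layout,
    vertex_stage={"module": vertex_shader, "entry_point": "main"},
    fragment_stage={"module": fragment_shader, "entry_point": "main"},
    primitive_topology=wgpu.PrimitiveTopology.triangle_list,
    color_states=[
        {
            "format": wgpu.TextureFormat.bgra8unorm_srgb,
            "alpha_blend": (
                wgpu.BlendFactor.one,
                wgpu.BlendFactor.zero,
                wgpu.BlendOperation.add,
            ),
            "color_blend": (
                wgpu.BlendFactor.one,
                wgpu.BlendFactor.zero,
                wgpu.BlendOperation.add,
            ),
        }
    ],
    vertex_state={"index_format": wgpu.IndexFormat.uint32, "vertex_buffers": []},
)
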
create_sampler(**parameters)

Create a GPUSampler object. Samplers specify how a texture is sampled.

Parameters:
  • label (str) – A human readable label. Optional.
  • address_mode_u (AddressMode) – What happens when sampling beyond the x edge. Default “clamp-to-edge”.
  • address_mode_v (AddressMode) – What happens when sampling beyond the y edge. Default “clamp-to-edge”.
  • address_mode_w (AddressMode) – What happens when sampling beyond the z edge. Default “clamp-to-edge”.
  • mag_filter (FilterMode) – Interpolation when zoomed in. Default ‘nearest’.
  • min_filter (FilterMode) – Interpolation when zoomed out. Default ‘nearest’.
  • mipmap_filter (FilterMode) – Interpolation between mip levels. Default ‘nearest’.
  • lod_min_clamp (float) – The minimum level of detail. Default 0.
  • lod_max_clamp (float) – The maximum level of detail. Default inf.
  • compare (CompareFunction) – The sample compare operation for depth textures. Only specify this for depth textures. Default None.
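
For illustration, a minimal sketch of a linearly interpolating sampler:

sampler = device.create_sampler(
    mag_filter=wgpu.FilterMode.linear,
    min_filter=wgpu.FilterMode.linear,
)
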
create_shader_module(**parameters)

Create a GPUShaderModule object from shader source.

Currently, only SPIR-V is supported. One can compile GLSL shaders to SPIR-V ahead of time, or use the python-shader package to write shaders in Python.

Parameters:
  • label (str) – A human readable label. Optional.
  • code (bytes) – The shader code, as binary SPIR-V, or an object implementing to_spirv() or to_bytes().
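
For illustration, a sketch that loads precompiled SPIR-V from disk (the filename is an example):

with open("compute.spv", "rb") as f:
    shader_module = device.create_shader_module(code=f.read())
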
create_texture(**parameters)

Create a GPUTexture object.

Parameters:
  • label (str) – A human readable label. Optional.
  • size (tuple or dict) – The texture size with fields (width, height, depth).
  • mip_level_count (int) – The number of mip levels. Default 1.
  • sample_count (int) – The number of samples. Default 1.
  • dimension (TextureDimension) – The dimensionality of the texture. Default 2d.
  • format (TextureFormat) – What channels it stores and how.
  • usage (TextureUsageFlags) – The ways in which the texture will be used.
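
For illustration, a sketch of a small 2D render-target texture (enum and flag values are examples):

texture = device.create_texture(
    size=(256, 256, 1),
    dimension=wgpu.TextureDimension.d2,
    format=wgpu.TextureFormat.bgra8unorm_srgb,
    usage=wgpu.TextureUsage.OUTPUT_ATTACHMENT,
)
texture_view = texture.create_view()
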
default_queue

The default GPUQueue for this device.

extensions

A tuple of strings representing the extensions with which this device was created.

get_swap_chain_preferred_format(canvas)

Get the preferred swap chain format. In the WebGPU spec this is a method of the canvas. In wgpu-py it’s a method of the device.

limits

A dict exposing the limits with which this device was created.

Buffers and textures

Buffers and textures are used to provide your shaders with data.

class wgpu.GPUBuffer

Subclass of GPUObject

A GPUBuffer represents a block of memory that can be used in GPU operations. Data is stored in linear layout, meaning that each byte of the allocation can be addressed by its offset from the start of the buffer, subject to alignment restrictions depending on the operation.

Create a buffer using GPUDevice.create_buffer(), GPUDevice.create_buffer_mapped() or GPUDevice.create_buffer_mapped_async().

One can sync data in a buffer by mapping it (or by creating a mapped buffer) and then setting/getting the values in the mapped array. Alternatively, one can tell the GPU (via the command encoder) to copy data between buffers and textures.

destroy()

An application that no longer requires a buffer can choose to destroy it. Note that this is automatically called when the Python object is cleaned up by the garbage collector.

map_read()

Make the buffer memory accessible to the CPU for reading. Sets the mapping property and returns the mapped memory as a ctypes array.

map_read_async()

Async version of map_read().

map_write()

Make the buffer memory accessible to the CPU for writing. Sets the mapping property and returns the mapped memory as a ctypes array.

map_write_async()

Async version of map_write().

mapping

The mapped memory of the buffer, exposed as a ctypes array. It is None unless the buffer is mapped. It can be cast to a ctypes array of the appropriate type using your_array_type.from_buffer(b.mapping), or mapped to a numpy array of the appropriate dtype and shape with something like np.frombuffer(b.mapping, np.float32).
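
For illustration, a sketch that reads results back from a readable buffer via numpy (the dtype is an example):

import numpy as np

result = np.frombuffer(buffer.map_read(), np.float32).copy()  # copy while mapped
buffer.unmap()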

size

The length of the GPUBuffer allocation in bytes.

state

The current state of the GPUBuffer:
  • “mapped” – the buffer is available for CPU operations.
  • “unmapped” – the buffer is available for GPU operations.
  • “destroyed” – the buffer is no longer available for any operations except destroy().
unmap()

Unmap the buffer so that it can be used in a GPU pipeline.

usage

The allowed usages (int bitmap) for this GPUBuffer, specifying e.g. whether the buffer may be used as a vertex buffer, uniform buffer, target or source for copying data, etc.

class wgpu.GPUTexture

Subclass of GPUObject

A texture represents a 1D, 2D or 3D color image object. It can also have mipmaps (different levels of detail) and array layers. The texture represents the “raw” data; a GPUTextureView is used to define how the texture data should be interpreted.

Create a texture using GPUDevice.create_texture().

create_view(**parameters)

Create a GPUTextureView object.

If no arguments are given, a default view is returned, with the same format and dimension as the texture.

Parameters:
  • label (str) – A human readable label. Optional.
  • format (TextureFormat) – What channels it stores and how.
  • dimension (TextureViewDimension) – The dimensionality of the texture view.
  • aspect (TextureAspect) – Whether this view is used for depth, stencil, or all. Default all.
  • base_mip_level (int) – The starting mip level. Default 0.
  • mip_level_count (int) – The number of mip levels. Default 0.
  • base_array_layer (int) – The starting array layer. Default 0.
  • array_layer_count (int) – The number of array layers. Default 0.
destroy()

An application that no longer requires a texture can choose to destroy it. Note that this is automatically called when the Python object is cleaned up by the garbage collector.

class wgpu.GPUTextureView

Subclass of GPUObject

A texture view represents a particular way of viewing (interpreting) a GPUTexture.

Create a texture view using GPUTexture.create_view().

class wgpu.GPUSampler

Subclass of GPUObject

A sampler specifies how a texture (view) must be sampled by the shader, in terms of subsampling, sampling between mip levels, and sampling out of the image boundaries.

Create a sampler using GPUDevice.create_sampler().

Bind groups

Shaders need access to resources like buffers, texture views, and samplers. Access to these resources occurs via so-called bindings: integer slots, which you specify both via the API and in the shader, that bind the resources to the shader.

Bindings are organized into bind groups, which are essentially a list of bindings. E.g. in Python shaders the slot of each resource is specified as a two-tuple (e.g. (1, 3)) specifying the bind group and binding slot respectively.

Further, in wgpu you need to specify a binding layout, providing meta-information about the binding (type, texture dimension etc.).

One uses device.create_bind_group() to create a group of bindings using the actual buffers/textures/samplers.

One uses device.create_bind_group_layout() to specify more information about these bindings, and device.create_pipeline_layout() to pack one or more bind group layouts together, into a complete layout description for a pipeline.

class wgpu.GPUBindGroupLayout

Subclass of GPUObject

A bind group layout defines the interface between a set of resources bound in a GPUBindGroup and their accessibility in shader stages.

Create a bind group layout using GPUDevice.create_bind_group_layout().

class wgpu.GPUBindGroup

Subclass of GPUObject

A bind group represents a group of bindings, each associating a shader slot with a resource (sampler, texture view, or buffer).

Create a bind group using GPUDevice.create_bind_group().

class wgpu.GPUPipelineLayout

Subclass of GPUObject

A pipeline layout describes the layout of a pipeline, as a list of GPUBindGroupLayout objects.

Create a pipeline layout using GPUDevice.create_pipeline_layout().

Shaders and pipelines

The wgpu API knows three kinds of shaders: compute, vertex and fragment. Pipelines define how the shader is run, and with what resources.

class wgpu.GPUShaderModule

Subclass of GPUObject

A shader module represents a programmable shader.

Create a shader module using GPUDevice.create_shader_module().

class wgpu.GPUComputePipeline

Subclass of GPUObject

A compute pipeline represents a single pipeline for computations (no rendering).

Create a compute pipeline using GPUDevice.create_compute_pipeline().

class wgpu.GPURenderPipeline

Subclass of GPUObject

A render pipeline represents a single pipeline to draw something using a vertex and a fragment shader. The render target can come from a window on the screen or from an in-memory texture (off-screen rendering).

Create a render pipeline using GPUDevice.create_render_pipeline().

Command buffers and encoders

class wgpu.GPUCommandBuffer

Subclass of GPUObject

A command buffer stores a series of commands, generated by a GPUCommandEncoder, to be submitted to a GPUQueue.

Create a command buffer using GPUCommandEncoder.finish().

class wgpu.GPUCommandEncoder

Subclass of GPUObject

A command encoder is used to record a series of commands. When done, call finish() to obtain a GPUCommandBuffer object.

Create a command encoder using GPUDevice.create_command_encoder().

begin_compute_pass(**parameters)

Record the beginning of a compute pass. Returns a GPUComputePassEncoder object.

Parameters:
  • label (str) – A human readable label. Optional.
begin_render_pass(**parameters)

Record the beginning of a render pass. Returns a GPURenderPassEncoder object.

Parameters:
  • label (str) – A human readable label. Optional.
  • color_attachments (list of dict) – List of color attachment dicts. See below.
  • depth_stencil_attachment (dict) – A depth stencil attachment dict. See below. Default None.
  • occlusion_query_set – Default None. TODO: not yet implemented in wgpu-native.

Example color attachment:

{
    "attachment": texture_view,
    "resolve_target": None,  # optional
    "load_value": (0, 0, 0, 0),  # LoadOp.load or a color
    "store_op": wgpu.StoreOp.store,  # optional
}

Example depth stencil attachment:

{
    "attachment": texture_view,
    "depth_load_value": 0.0,
    "depth_store_op": wgpu.StoreOp.store,
    "stencil_load_value": wgpu.LoadOp.load,
    "stencil_store_op": wgpu.StoreOp.store,
}
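
For illustration, a sketch that records a render pass with one color attachment (texture_view and render_pipeline are assumed to exist):

command_encoder = device.create_command_encoder()
render_pass = command_encoder.begin_render_pass(
    color_attachments=[
        {
            "attachment": texture_view,
            "resolve_target": None,
            "load_value": (0, 0, 0, 1),
            "store_op": wgpu.StoreOp.store,
        }
    ],
)
render_pass.set_pipeline(render_pipeline)
render_pass.draw(3)  # three vertices, one instance
render_pass.end_pass()
device.default_queue.submit([command_encoder.finish()])
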
copy_buffer_to_buffer(source, source_offset, destination, destination_offset, size)

Copy the contents of a buffer to another buffer.

Parameters:
  • source (GPUBuffer) – The source buffer.
  • source_offset (int) – The byte offset.
  • destination (GPUBuffer) – The target buffer.
  • destination_offset (int) – The byte offset in the destination buffer.
  • size (int) – The number of bytes to copy.
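
For illustration, a sketch that copies 64 bytes between two existing buffers and submits the work:

command_encoder = device.create_command_encoder()
command_encoder.copy_buffer_to_buffer(source_buffer, 0, destination_buffer, 0, 64)
device.default_queue.submit([command_encoder.finish()])
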
copy_buffer_to_texture(source, destination, copy_size)

Copy the contents of a buffer to a texture (view).

Parameters:
  • source (dict) – A dict with fields: buffer, offset, bytes_per_row, rows_per_image.
  • destination (dict) – A dict with fields: texture, mip_level, array_layer, origin.
  • copy_size (tuple or dict) – The copy size with fields (width, height, depth).
copy_texture_to_buffer(source, destination, copy_size)

Copy the contents of a texture (view) to a buffer.

Parameters:
  • source (dict) – A dict with fields: texture, mip_level, array_layer, origin.
  • destination (dict) – A dict with fields: buffer, offset, bytes_per_row, rows_per_image.
  • copy_size (tuple or dict) – The copy size with fields (width, height, depth).
copy_texture_to_texture(source, destination, copy_size)

Copy the contents of a texture (view) to another texture (view).

Parameters:
  • source (dict) – A dict with fields: texture, mip_level, array_layer, origin.
  • destination (dict) – A dict with fields: texture, mip_level, array_layer, origin.
  • copy_size (tuple or dict) – The copy size with fields (width, height, depth).
finish(**parameters)

Finish recording. Returns a GPUCommandBuffer to submit to a GPUQueue.

Parameters:
  • label (str) – A human readable label. Optional.
insert_debug_marker(marker_label)

TODO: not yet available in wgpu-native

pop_debug_group()

TODO: not yet available in wgpu-native

push_debug_group(group_label)

TODO: not yet available in wgpu-native

class wgpu.GPUProgrammablePassEncoder

Subclass of GPUObject

Base class for the different pass encoder classes.

insert_debug_marker(marker_label)

Insert the given message into the debug message queue.

pop_debug_group()

Pop the active debug group.

push_debug_group(group_label)

Push a named debug group into the command stream.

set_bind_group(index, bind_group, dynamic_offsets_data, dynamic_offsets_data_start, dynamic_offsets_data_length)

Associate the given bind group (i.e. group of resources) with the given slot/index.

Parameters:
  • index (int) – The slot to bind at.
  • bind_group (GPUBindGroup) – The bind group to bind.
  • dynamic_offsets_data (list of int) – A list of offsets (one for each dynamic binding).
  • dynamic_offsets_data_start (int) – Not used.
  • dynamic_offsets_data_length (int) – Not used.
class wgpu.GPUComputePassEncoder

Subclass of GPUProgrammablePassEncoder

A compute-pass encoder records commands related to a compute pass.

Create a compute pass encoder using GPUCommandEncoder.begin_compute_pass().

dispatch(x, y=1, z=1)

Run the compute shader.

Parameters:
  • x (int) – The number of work groups to dispatch in the x dimension.
  • y (int) – The number of work groups to dispatch in the y dimension. Default 1.
  • z (int) – The number of work groups to dispatch in the z dimension. Default 1.
dispatch_indirect(indirect_buffer, indirect_offset)

Like dispatch(), but the function arguments are in a buffer.

Parameters:
  • indirect_buffer (GPUBuffer) – The buffer that contains the arguments.
  • indirect_offset (int) – The byte offset at which the arguments are.
end_pass()

Record the end of the compute pass.

set_pipeline(pipeline)

Set the pipeline for this compute pass.

Parameters:
  • pipeline (GPUComputePipeline) – The pipeline to use.
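
For illustration, a sketch of a complete compute pass (compute_pipeline and bind_group are assumed to exist):

command_encoder = device.create_command_encoder()
compute_pass = command_encoder.begin_compute_pass()
compute_pass.set_pipeline(compute_pipeline)
compute_pass.set_bind_group(0, bind_group, [], 0, 0)  # no dynamic offsets
compute_pass.dispatch(64)  # 64 work groups in x
compute_pass.end_pass()
device.default_queue.submit([command_encoder.finish()])
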
class wgpu.GPURenderEncoderBase

Subclass of GPUProgrammablePassEncoder

Base class for different render-pass encoder classes.

draw(vertex_count, instance_count=1, first_vertex=0, first_instance=0)

Run the render pipeline without an index buffer.

Parameters:
  • vertex_count (int) – The number of vertices to draw.
  • instance_count (int) – The number of instances to draw. Default 1.
  • first_vertex (int) – The vertex offset. Default 0.
  • first_instance (int) – The instance offset. Default 0.
draw_indexed(index_count, instance_count=1, first_index=0, base_vertex=0, first_instance=0)

Run the render pipeline using an index buffer.

Parameters:
  • index_count (int) – The number of indices to draw.
  • instance_count (int) – The number of instances to draw. Default 1.
  • first_index (int) – The index offset. Default 0.
  • base_vertex (int) – A number added to each index in the index buffer. Default 0.
  • first_instance (int) – The instance offset. Default 0.
draw_indexed_indirect(indirect_buffer, indirect_offset)

Like draw_indexed(), but the function arguments are in a buffer.

Parameters:
  • indirect_buffer (GPUBuffer) – The buffer that contains the arguments.
  • indirect_offset (int) – The byte offset at which the arguments are.
draw_indirect(indirect_buffer, indirect_offset)

Like draw(), but the function arguments are in a buffer.

Parameters:
  • indirect_buffer (GPUBuffer) – The buffer that contains the arguments.
  • indirect_offset (int) – The byte offset at which the arguments are.
set_index_buffer(buffer, offset=0, size=0)

Set the index buffer for this render pass.

Parameters:
  • buffer (GPUBuffer) – The buffer that contains the indices.
  • offset (int) – The byte offset in the buffer. Default 0.
  • size (int) – The number of bytes to use. Default 0.
set_pipeline(pipeline)

Set the pipeline for this render pass.

Parameters:
  • pipeline (GPURenderPipeline) – The pipeline to use.
set_vertex_buffer(slot, buffer, offset=0, size=0)

Associate a vertex buffer with a bind slot.

Parameters:
  • slot (int) – The binding slot for the vertex buffer.
  • buffer (GPUBuffer) – The buffer that contains the vertex data.
  • offset (int) – The byte offset in the buffer. Default 0.
  • size (int) – The number of bytes to use. Default 0.
class wgpu.GPURenderPassEncoder

Subclass of GPURenderEncoderBase

A render-pass encoder records commands related to a render pass.

Create a render pass encoder using GPUCommandEncoder.begin_render_pass().

end_pass()

Record the end of the render pass.

execute_bundles(bundles)

TODO: not yet available in wgpu-native

set_blend_color(color)

Set the blend color for the render pass.

Parameters:
  • color (tuple or dict) – A color with fields (r, g, b, a).
set_scissor_rect(x, y, width, height)

Set the scissor rectangle for this render pass. The scene is rendered as usual, but only the pixels within this sub-rectangle are written.

Parameters:
  • x (int) – Horizontal coordinate.
  • y (int) – Vertical coordinate.
  • width (int) – Horizontal size.
  • height (int) – Vertical size.
set_stencil_reference(reference)

Set the reference stencil value for this render pass.

Parameters:
  • reference (int) – The reference value.
set_viewport(x, y, width, height, min_depth, max_depth)

Set the viewport for this render pass. The whole scene is rendered to this sub-rectangle.

Parameters:
  • x (int) – Horizontal coordinate.
  • y (int) – Vertical coordinate.
  • width (int) – Horizontal size.
  • height (int) – Vertical size.
  • min_depth (float) – The lower bound of the depth range (clipping in depth).
  • max_depth (float) – The upper bound of the depth range (clipping in depth).
class wgpu.GPURenderBundle

Subclass of GPUObject

TODO: not yet available in wgpu-native

class wgpu.GPURenderBundleEncoder

Subclass of GPURenderEncoderBase

TODO: not yet available in wgpu-native

finish(**parameters)

Finish recording and return a GPURenderBundle.

Parameters:
  • label (str) – A human readable label. Optional.

Queue and swap chain

class wgpu.GPUQueue

Subclass of GPUObject

A queue is used to submit command buffers to the GPU.

You can obtain a queue object via the GPUDevice.default_queue property.

copy_image_bitmap_to_texture(source, destination, copy_size)

TODO: not yet available in wgpu-native

submit(command_buffers)

Submit a GPUCommandBuffer to the queue.

Parameters:
  • command_buffers (list) – The GPUCommandBuffer objects to add.
class wgpu.GPUSwapChain

Subclass of GPUObject

A swap chain is a placeholder for a texture to be presented to the screen, so that you can provide the corresponding texture view as a color attachment to GPUCommandEncoder.begin_render_pass(). The texture view can be obtained by using the swap chain in a with-statement. The swap chain is presented to the screen when the context exits.

Example:

with swap_chain as texture_view:
    ...
    command_encoder.begin_render_pass(
        color_attachments=[
            {
                "attachment": texture_view,
                ...
            }
        ],
        ...
    )

You can obtain a swap chain using device.configure_swap_chain().

get_current_texture()

WebGPU defines this method, but we deviate from the spec here: you should use the swap-chain object as a context manager to obtain a texture view to render to.