ManagedArray

class cubie.batchsolving.arrays.BaseArrayManager.ManagedArray(dtype: type = <class 'numpy.float32'>, stride_order: tuple[str, ...] = NOTHING, default_shape: tuple[int | None, ...] = NOTHING, memory_type: str = 'device', is_chunked: bool = True, array: ~numpy.ndarray[~typing.Any, ~numpy.dtype[~numpy._typing._array_like._ScalarType_co]] | ~numba.cuda.simulator.cudadrv.devicearray.FakeCUDAArray | None = None, chunked_shape: tuple[int, ...] | None = None, chunk_length: int | None = None, num_chunks: int = 1, num_runs: int = 1)[source]

Bases: object

Metadata wrapper for a single managed array.
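
A minimal construction sketch based on the signature above. The stride_order labels, default_shape axes, and other field values are illustrative assumptions, not values mandated by the class.

    import numpy as np
    from cubie.batchsolving.arrays.BaseArrayManager import ManagedArray

    # Hypothetical field values; the axis labels and shape are assumptions
    # chosen only to show how the documented parameters fit together.
    managed = ManagedArray(
        dtype=np.float32,
        stride_order=("run", "time", "variable"),
        default_shape=(None, None, 3),   # None axes presumably resolved at allocation
        memory_type="device",
        is_chunked=True,
        num_chunks=4,
        num_runs=128,
    )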

property array: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Return the attached array reference.

chunk_length: int | None
chunk_slice(chunk_index: int) → ndarray | FakeCUDAArray[source]

Return a slice of the array for the specified chunk index.

Parameters:

chunk_index – Zero-based index of the chunk to slice.

Returns:

View or slice of the array for the specified chunk.

Return type:

Union[ndarray, DeviceNDArrayBase]

Raises:

TypeError – If chunk_index is not an integer.

Notes

When chunking is inactive (is_chunked=False or _chunk_axis_index=None), the full array is returned. Otherwise, the slice is computed from the stored chunk parameters and _chunk_axis_index.
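
A minimal sketch of the slicing rule described in these notes, written as a standalone function. The assumption that a chunk's offset equals chunk_index * chunk_length is illustrative and not taken from the source.

    def chunk_slice_sketch(array, chunk_index, chunk_length, chunk_axis_index, is_chunked):
        # Mirrors the documented behaviour: reject non-integer indices.
        if not isinstance(chunk_index, int):
            raise TypeError("chunk_index must be an integer")
        # Chunking inactive: return the full array untouched.
        if not is_chunked or chunk_axis_index is None:
            return array
        # Assumed offset arithmetic: consecutive chunks of chunk_length
        # items along the chunk axis.
        start = chunk_index * chunk_length
        slicer = [slice(None)] * array.ndim
        slicer[chunk_axis_index] = slice(start, start + chunk_length)
        return array[tuple(slicer)]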

chunked_shape: tuple[int, ...] | None
default_shape: tuple[int | None, ...]
dtype: type
is_chunked: bool
memory_type: str
property needs_chunked_transfer: bool

Return True if this array requires chunked transfers.

Chunked transfers are needed when the array’s full shape differs from its per-chunk shape. This shape comparison replaces more complex logic based on the is_chunked flag.
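
As a sketch, the shape comparison described above could look like the following; the parameter names mirror the attributes documented on this page, and the equality test itself is an assumption.

    def needs_chunked_transfer_sketch(full_shape, chunked_shape):
        # Chunked transfers only matter when a per-chunk shape exists and
        # it differs from the array's full shape.
        return chunked_shape is not None and tuple(chunked_shape) != tuple(full_shape)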

num_chunks: int
num_runs: int
property shape: tuple[int | None, ...]

Return the current shape of the array.

stride_order: tuple[str, ...]