OutputArrays

class cubie.batchsolving.arrays.BatchOutputArrays.OutputArrays(precision: type = <class 'numpy.float32'>, chunks: int = 0, stream_group: str = 'default', memory_proportion: float | None = None, memory_manager: MemoryManager = MemoryManager(totalmem=8589934592, registry={}, stream_groups=StreamGroups(groups={}, streams={}), _mode='passive', _allocator=<class 'cubie.cuda_simsafe.FakeNumbaCUDAMemoryManager'>, _auto_pool=[], _manual_pool=[], _queued_allocations={}), num_runs: int = 1, sizes: BatchOutputSizes = NOTHING, host: OutputArrayContainer = NOTHING)[source]

Bases: BaseArrayManager

Manage batch integration output arrays between host and device.

This class manages the allocation, transfer, and synchronization of output arrays generated during batch integration operations. It handles state trajectories, observables, summary statistics, and per-run status codes.

Parameters:

precision – NumPy floating-point type used for the output arrays.
chunks – Number of chunks the batch is split into.
stream_group – Name of the stream group used for memory transfers.
memory_proportion – Proportion of device memory available to this manager, if set.
memory_manager – MemoryManager instance that services allocations.
num_runs – Number of runs in the batch.
sizes – BatchOutputSizes instance specifying the expected array shapes.
host – OutputArrayContainer holding the host-side output arrays.

Notes

This class is initialized with a BatchOutputSizes instance, typically drawn from a solver instance via the from_solver factory method, which sets the allowable 3D array sizes from the ODE system’s data and run settings. Once initialized, the object can be updated with a solver instance to refresh the expected sizes, check the cache, and allocate if required.
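A minimal lifecycle sketch under those assumptions (here kernel stands for an already-configured BatchSolverKernel and num_chunks for the chunk count; both are illustrative, not part of this reference):

    from cubie.batchsolving.arrays.BatchOutputArrays import OutputArrays

    out = OutputArrays.from_solver(kernel)   # size specifications only, no allocation
    out.update(kernel)                       # refresh sizes, check cache, allocate

    for chunk in range(num_chunks):
        out.initialise(chunk)                # no-op by default
        # ... launch the batch integration kernel for this chunk ...
        out.finalise(chunk)                  # queue device-to-host transfers

    out.wait_pending()                       # block until async writebacks finish
    trajectories = out.state                 # host state output array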

device: OutputArrayContainer

Container of device-side output arrays.

property device_iteration_counters: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Device iteration counters output array.

property device_observable_summaries: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Device observable summary output array.

property device_observables: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Device observables output array.

property device_state: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Device state output array.

property device_state_summaries: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Device state summary output array.

property device_status_codes: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Device status code output array.

finalise(chunk_index: int) None[source]

Queue device-to-host transfers for a chunk.

Parameters:

chunk_index – Index of the chunk being finalized.

Returns:

Queues async transfers. For chunked mode, submits writeback tasks to the watcher thread for non-blocking completion.

Return type:

None

Notes

Host slices are made contiguous before transfer to ensure strides compatible with the device arrays. In chunked mode, data is transferred into pooled pinned buffers and submitted to the watcher thread for asynchronous writeback. In non-chunked mode, the writeback call is issued immediately, though the copy itself still completes asynchronously.
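A simplified sketch of that contiguity step, using generic NumPy/Numba CUDA calls rather than cubie internals (the shapes, the run-major layout, and the staging buffer are assumptions for illustration):

    import numpy as np
    from numba import cuda

    # Hypothetical layout: (samples, variables, runs); one chunk covers runs 0..7.
    host_state = cuda.pinned_array((100, 4, 32), dtype=np.float32)
    device_state = cuda.device_array((100, 4, 8), dtype=np.float32)
    stream = cuda.stream()

    chunk_view = host_state[:, :, 0:8]                  # host slice: non-contiguous
    staging = cuda.pinned_array(chunk_view.shape, dtype=chunk_view.dtype)

    device_state.copy_to_host(staging, stream=stream)   # strides now match the device
    stream.synchronize()                                # wait before touching staging
    chunk_view[...] = staging                           # write back into the host slice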

classmethod from_solver(solver_instance: BatchSolverKernel) OutputArrays[source]

Create an OutputArrays instance from a solver.

Does not allocate arrays; it only sets up the size specifications.

Parameters:

solver_instance – The solver instance to extract configuration from.

Returns:

A new OutputArrays instance configured for the solver.

Return type:

OutputArrays

host: OutputArrayContainer

Container of host-side output arrays.

initialise(chunk_index: int) None[source]

Initialize device arrays before kernel execution.

Parameters:

chunk_index – Index of the chunk being initialized.

Returns:

This method performs no operations by default.

Return type:

None

Notes

No initialization to zeros is needed unless chunking along the time dimension leaves a dangling sample at the end of a chunk, which is possible but not expected.

property iteration_counters: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Host iteration counters output array.

property observable_summaries: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Host observable summary output array.

property observables: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Host observables output array.

reset() None[source]

Clear all cached arrays and reset allocation tracking.

Extends the base reset to also clear the buffer pool, shut down the watcher thread, and clear any pending buffers.

Returns:

Nothing is returned.

Return type:

None
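For instance, a caller might drop all cached arrays before reconfiguring for a differently sized problem (sketch; bigger_kernel is an assumed second BatchSolverKernel):

    out.reset()                 # free cached arrays, buffer pool, and watcher thread
    out.update(bigger_kernel)   # re-size and re-allocate for the new configuration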

property state: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Host state output array.

property state_summaries: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Host state summary output array.

property status_codes: ndarray[Any, dtype[_ScalarType_co]] | FakeCUDAArray | None

Host status code output array.

update(solver_instance: BatchSolverKernel) None[source]

Update output arrays from a solver instance.

Parameters:

solver_instance – The solver instance providing configuration and sizing information.

Returns:

This method updates cached arrays in place.

Return type:

None

update_from_solver(solver_instance: BatchSolverKernel) Dict[str, ndarray[Any, dtype[floating]]][source]

Update sizes and precision from solver, returning new host arrays.

Only creates new pinned arrays when existing arrays do not match the expected shape and dtype. This avoids expensive pinned memory allocation on repeated solver runs with identical configurations.

Parameters:

solver_instance – The solver instance to update from.

Returns:

Host arrays with updated shapes, for use by update_host_arrays. Arrays that already match are still included for consistency.

Return type:

dict[str, numpy.ndarray]
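A sketch of the reuse check this describes, written against generic NumPy/Numba CUDA calls rather than cubie's internal containers (the helper name and shapes are illustrative):

    import numpy as np
    from numba import cuda

    def ensure_host_array(existing, shape, dtype):
        """Reuse a cached pinned array when shape and dtype match, else allocate."""
        if existing is not None and existing.shape == shape and existing.dtype == dtype:
            return existing                            # cheap path: keep cached array
        return cuda.pinned_array(shape, dtype=dtype)   # expensive pinned allocation

    state = ensure_host_array(None, (100, 4, 32), np.float32)
    again = ensure_host_array(state, (100, 4, 32), np.float32)
    assert again is state                              # identical config: no realloc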

wait_pending(timeout: float | None = None) None[source]

Wait for all pending async writebacks to complete.

Parameters:

timeout – Maximum seconds to wait. None waits indefinitely.

Returns:

Blocks until all pending operations complete.

Return type:

None

Notes

Only applies to chunked mode with watcher-based writebacks.
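A short sketch of overlapping host-side work with the queued writebacks (out and the preceding finalise calls are as in the lifecycle sketch above; last_chunk and do_other_host_work are placeholders):

    out.finalise(last_chunk)           # queues writebacks and returns immediately
    do_other_host_work()               # overlap CPU work with the transfers
    out.wait_pending(timeout=60.0)     # wait at most 60 seconds for writebacks
    results = out.state                # host state array, read after the wait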