StarPU Handbook - StarPU Extensions
Hierarchical Dags | This section describes how users can define hierarchical DAGs |
FFT Support | |
Fortran Support | This section describes the Fortran API |
Threads | This section describes the thread facilities provided by StarPU. The thread functions are implemented either on top of the pthread library, or on top of the SimGrid library when the simulated performance mode is enabled (SimGrid Support) |
Bitmap | This is the interface for the bitmap utilities provided by StarPU |
Theoretical Lower Bound on Execution Time | Compute the theoretical upper bound on computation efficiency (equivalently, the lower bound on execution time) corresponding to some actual execution |
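As a hedged sketch of how the bound facility is typically used, the run to analyze is bracketed with `starpu_bound_start()`/`starpu_bound_stop()` and the bound is then queried with `starpu_bound_compute()`; the task-submission part is elided and the surrounding setup is illustrative:

```c
/* Sketch: bracketing an execution with StarPU's bound API.
 * The tasks submitted between start and stop are assumed to have
 * performance models; this setup is illustrative, not a full program. */
#include <starpu.h>
#include <starpu_bound.h>

int main(void)
{
	if (starpu_init(NULL) != 0)
		return 1;

	/* Record submitted tasks; the two arguments select whether
	   dependencies and priorities are taken into account. */
	starpu_bound_start(0, 0);

	/* ... submit the tasks of the actual execution here ... */

	starpu_task_wait_for_all();
	starpu_bound_stop();

	double min_time;
	/* Theoretical minimum execution time for the recorded tasks. */
	starpu_bound_compute(&min_time, NULL, 0);

	starpu_shutdown();
	return 0;
}
```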
CUDA Extensions | |
Data Partition | |
Data Management | Data management facilities provided by StarPU. We show how to use existing data interfaces in Data Interfaces, but developers can design their own data interfaces if required |
Data Interfaces | Data management is done at a high level in StarPU: rather than accessing a mere list of contiguous buffers, the tasks may manipulate data that are described by a high-level construct which we call a data interface |
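As a minimal sketch of the idea, a plain buffer can be registered through one of the existing interfaces (here the vector interface); the buffer name and size are illustrative, and tasks then refer to the data through the handle rather than the raw pointer:

```c
/* Sketch: registering a buffer through StarPU's vector interface.
 * `vec` and N are illustrative names, not taken from the handbook. */
#include <starpu.h>

#define N 1024

int main(void)
{
	float vec[N];
	starpu_data_handle_t handle;

	if (starpu_init(NULL) != 0)
		return 1;

	/* Describe the buffer to StarPU; tasks access it through the
	   handle with access modes such as STARPU_R or STARPU_W. */
	starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
				    (uintptr_t)vec, N, sizeof(vec[0]));

	/* ... submit tasks working on `handle` here ... */

	starpu_data_unregister(handle);
	starpu_shutdown();
	return 0;
}
```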
Out Of Core | |
Running Drivers | |
Expert Mode | |
FxT Support | |
Initialization and Termination | |
Versioning | |
Miscellaneous Helpers | |
HIP Extensions | |
Maxeler FPGA Extensions | |
OpenCL Extensions | |
OpenMP Runtime Support | This section describes the interface provided for implementing OpenMP runtimes on top of StarPU |
Using Parallel Workers | |
Performance Monitoring Counters | This section describes the interface to access performance monitoring counters |
Performance Steering Knobs | This section describes the interface to access performance steering knobs |
Performance Model | |
Profiling | |
Profiling Tool | |
Random Functions | |
Modularized Scheduler Interface | |
Scheduling Contexts | StarPU permits, on one hand, grouping workers into combined workers in order to execute a parallel task and, on the other hand, grouping tasks into bundles that will be executed by a single specified worker. In contrast, when workers are grouped into a scheduling context, StarPU tasks are submitted to the context and scheduled with the policy assigned to it. Scheduling contexts can be created, deleted, and modified dynamically |
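A hedged sketch of the pattern: a context is created over a subset of workers with a chosen policy, used, then deleted. The worker ids, context name, and the "eager" policy choice are illustrative assumptions:

```c
/* Sketch: grouping two workers in a scheduling context driven by
 * the "eager" policy. Worker ids and names here are illustrative,
 * and the program assumes at least two workers are available. */
#include <starpu.h>

int main(void)
{
	if (starpu_init(NULL) != 0)
		return 1;

	int workers[2] = {0, 1};
	unsigned ctx = starpu_sched_ctx_create(workers, 2, "my_ctx",
			   STARPU_SCHED_CTX_POLICY_NAME, "eager", 0);

	/* Tasks whose sched_ctx field is set to `ctx` are scheduled
	   on these two workers with the context's policy. */

	starpu_sched_ctx_delete(ctx);
	starpu_shutdown();
	return 0;
}
```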
Scheduling Policy | While StarPU comes with a variety of scheduling policies (see TaskSchedulingPolicy), it may sometimes be desirable to implement custom policies to address specific problems. The API described below allows users to write their own scheduling policy |
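A custom policy is described by filling a `starpu_sched_policy` structure with push/pop callbacks. The skeleton below is a non-functional sketch under the assumption of the structure's usual fields; a real policy must additionally manage task queues and worker wake-ups:

```c
/* Skeleton of a custom scheduling policy. Function bodies are
 * placeholders; "dummy" names are illustrative. */
#include <starpu.h>
#include <starpu_scheduler.h>

static void dummy_init(unsigned sched_ctx_id)
{
	(void)sched_ctx_id;
	/* allocate the policy's task queues here */
}

static int dummy_push(struct starpu_task *task)
{
	/* decide where `task` should go; 0 means success */
	(void)task;
	return 0;
}

static struct starpu_task *dummy_pop(unsigned sched_ctx_id)
{
	(void)sched_ctx_id;
	/* hand the calling worker its next task, or NULL */
	return NULL;
}

struct starpu_sched_policy dummy_policy =
{
	.init_sched = dummy_init,
	.push_task = dummy_push,
	.pop_task = dummy_pop,
	.policy_name = "dummy",
	.policy_description = "example skeleton policy",
};
```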
Sink | |
Standard Memory Library | |
Task Bundles | |
Codelet And Tasks | This section describes the interface to manipulate codelets and tasks |
Transactions | |
Explicit Dependencies | |
Task Lists | |
Task Insert Utility | |
Tree | This section describes the tree facilities provided by StarPU |
Toolbox | The following macros make GCC extensions portable, so that the code can be compiled with any C compiler |
Workers’ Properties | |
Parallel Tasks | |
MPI Support | |
MPI Fault Tolerance Support | |
Scheduling Context Hypervisor - Regular usage | |
Scheduling Context Hypervisor - Linear Programming | |
Scheduling Context Hypervisor - Building a new resizing policy | |
Interoperability Support | This section describes the interface supplied by StarPU to interoperate with other runtime systems |
Heteroprio Scheduler | This is the interface for the heteroprio scheduler |
Scheduler Toolbox | This is the interface for the scheduler toolbox |