StarPU Handbook - StarPU Installation
Modules
Here is a list of all modules:
 Hierarchical Dags: This section describes how users can define hierarchical DAGs
 FFT Support
 Fortran Support: This section describes the Fortran API
 Threads: This section describes the thread facilities provided by StarPU. The thread functions are implemented either on top of the pthread library or on top of the SimGrid library when the simulated performance mode is enabled (SimGridSupport)
 Bitmap: This is the interface for the bitmap utilities provided by StarPU
 Theoretical Lower Bound on Execution Time: Compute a theoretical upper bound on computation efficiency corresponding to some actual execution
 CUDA Extensions
 Data Partition
 Data Management: Data management facilities provided by StarPU. We show how to use existing data interfaces in Data Interfaces, but developers can design their own data interfaces if required
 Data Interfaces: Data management is done at a high level in StarPU: rather than accessing a mere list of contiguous buffers, tasks manipulate data described by a high-level construct which we call a data interface (a minimal example is sketched after this list)
 Out Of Core
 Running Drivers
 Expert Mode
 FxT Support
 Initialization and Termination
 Versioning
 Miscellaneous Helpers
 HIP Extensions
 Maxeler FPGA Extensions
 OpenCL Extensions
 OpenMP Runtime Support: This section describes the interface provided for implementing OpenMP runtimes on top of StarPU
 Using Parallel Workers
 Performance Monitoring Counters: This section describes the interface to access performance monitoring counters
 Performance Steering Knobs: This section describes the interface to access performance steering knobs
 Performance Model
 Profiling
 Profiling Tool
 Random Functions
 Modularized Scheduler Interface
 Scheduling Contexts: StarPU permits, on the one hand, grouping workers into combined workers in order to execute a parallel task and, on the other hand, grouping tasks into bundles that will be executed by a single specified worker. In contrast, when workers are grouped into scheduling contexts, StarPU tasks are submitted to a context and scheduled with the policy assigned to it. Scheduling contexts can be created, deleted and modified dynamically (a sketch is given after this list)
 Scheduling Policy: While StarPU comes with a variety of scheduling policies (see TaskSchedulingPolicy), it may sometimes be desirable to implement custom policies to address specific problems. The API described in this module allows users to write their own scheduling policy
 Sink
 Standard Memory Library
 Task Bundles
 Codelet And Tasks: This section describes the interface to manipulate codelets and tasks (a minimal example is sketched after this list)
 Transactions
 Explicit Dependencies
 Task Lists
 Task Insert Utility
 Tree: This section describes the tree facilities provided by StarPU
 Toolbox: The following macros make GCC extensions portable and allow the code to be compiled with any C compiler
 Workers’ Properties
 Parallel Tasks
 MPI Support
 MPI Fault Tolerance Support
 Scheduling Context Hypervisor - Regular usage
 Scheduling Context Hypervisor - Linear Programming
 Scheduling Context Hypervisor - Building a new resizing policy
 Interoperability Support: This section describes the interface supplied by StarPU to interoperate with other runtime systems
 Heteroprio Scheduler: This is the interface for the heteroprio scheduler
 Scheduler Toolbox: This is the interface for the scheduler toolbox
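
To illustrate the Data Interfaces and Codelet And Tasks modules, here is a minimal sketch of a program that registers a buffer through the vector data interface and submits one task that scales it. The kernel name, the vector size and the scaling factor are arbitrary choices made for this example, not part of the StarPU API.

#include <starpu.h>
#include <stdio.h>

/* CPU kernel: scale every element of the vector passed as buffers[0]. */
static void scal_cpu_func(void *buffers[], void *cl_arg)
{
    float factor = *(float *)cl_arg;
    struct starpu_vector_interface *vec = buffers[0];
    unsigned n = STARPU_VECTOR_GET_NX(vec);
    float *val = (float *)STARPU_VECTOR_GET_PTR(vec);
    for (unsigned i = 0; i < n; i++)
        val[i] *= factor;
}

/* Codelet: declares the kernel implementations, the number of buffers
   and their access modes. */
static struct starpu_codelet scal_cl =
{
    .cpu_funcs = { scal_cpu_func },
    .nbuffers = 1,
    .modes = { STARPU_RW },
};

int main(void)
{
    float vector[16];
    float factor = 3.14f;
    starpu_data_handle_t handle;
    unsigned i;

    for (i = 0; i < 16; i++)
        vector[i] = 1.0f;

    if (starpu_init(NULL) != 0)
        return 1;

    /* Register the buffer through the vector data interface. */
    starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
                                (uintptr_t)vector, 16, sizeof(vector[0]));

    /* Create and submit one task working on the registered handle. */
    struct starpu_task *task = starpu_task_create();
    task->cl = &scal_cl;
    task->handles[0] = handle;
    task->cl_arg = &factor;
    task->cl_arg_size = sizeof(factor);
    task->synchronous = 1;
    starpu_task_submit(task);

    starpu_data_unregister(handle);
    starpu_shutdown();

    printf("vector[0] = %f\n", vector[0]);
    return 0;
}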
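
The Task Insert Utility module provides starpu_task_insert(), which condenses the task creation and submission above into a single call. Assuming the same codelet and handle, the submission could be written roughly as:

starpu_task_insert(&scal_cl,
                   STARPU_RW, handle,
                   STARPU_VALUE, &factor, sizeof(factor),
                   0);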
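
For the Scheduling Contexts module, the following sketch groups two CPU workers into a context driven by the dmda policy and makes subsequent task submissions go to that context. The context name and the choice of policy are arbitrary, and the sketch assumes the machine provides at least two CPU workers.

#include <starpu.h>

int main(void)
{
    if (starpu_init(NULL) != 0)
        return 1;

    int workers[2];
    unsigned nfound = starpu_worker_get_ids_by_type(STARPU_CPU_WORKER, workers, 2);
    if (nfound >= 2)
    {
        /* Create a context restricted to the two workers, scheduled by "dmda". */
        unsigned ctx = starpu_sched_ctx_create(workers, 2, "example_ctx",
                                               STARPU_SCHED_CTX_POLICY_NAME, "dmda",
                                               0);

        /* Tasks submitted while this context is current are scheduled
           only on the workers of the context. */
        starpu_sched_ctx_set_context(&ctx);

        /* ... submit tasks here ... */
        starpu_task_wait_for_all();

        starpu_sched_ctx_delete(ctx);
    }

    starpu_shutdown();
    return 0;
}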