Usage examples
The JCGLTimersType
interface exposes functions that allocate and evaluate timer
queries. A timer query is, at the most basic level, a way of
having the GPU record the current time in a manner that can be
retrieved by the CPU later on.
To allocate a new timer query:
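The allocation call itself is not shown in this text; the sketch below assumes a `timerQueryAllocate` method, following the naming pattern of the other functions mentioned here. The interface definitions are minimal stand-ins so the example is self-contained; the real jcanephora interfaces are richer.

```java
// Self-contained sketch; timerQueryAllocate is an assumed method name,
// not confirmed by this text.
final class TimerAllocate {
  public static void main(final String[] args) {
    // Stand-in implementation; a real program obtains its JCGLTimersType
    // from an OpenGL context.
    final JCGLTimersType g = () -> new JCGLTimerQueryType() {};

    final JCGLTimerQueryType q = g.timerQueryAllocate();
    System.out.println("allocated: " + (q != null));
  }
}

interface JCGLTimerQueryType {
  // Marker type for an allocated timer query.
}

interface JCGLTimersType {
  JCGLTimerQueryType timerQueryAllocate();
}
```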
A timer query must be started and stopped asynchronously, and
its result can then be queried. For example, to measure the time it
takes to execute a series of OpenGL commands:
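The pattern might look like the following sketch. Only `timerQueryBegin` and `timerQueryFinish` are names taken from the text; the stand-in implementation records host timestamps via `System.nanoTime()` immediately, whereas a real GPU records the time when it reaches each command in its queue.

```java
// Self-contained sketch of bracketing a series of commands with a
// timer query. TimerQuery is an illustrative stand-in type.
final class TimerMeasure {
  public static void main(final String[] args) {
    final JCGLTimersType g = new JCGLTimersType() {
      @Override public void timerQueryBegin(final TimerQuery q) {
        q.timeBegin = System.nanoTime();
      }
      @Override public void timerQueryFinish(final TimerQuery q) {
        q.timeEnd = System.nanoTime();
      }
    };

    final TimerQuery q = new TimerQuery();
    g.timerQueryBegin(q);
    // ... in a real program, submit OpenGL drawing commands here ...
    g.timerQueryFinish(q);

    final long elapsed = q.timeEnd - q.timeBegin;
    System.out.println("elapsed >= 0: " + (elapsed >= 0L));
  }
}

interface JCGLTimersType {
  void timerQueryBegin(TimerQuery q);
  void timerQueryFinish(TimerQuery q);
}

final class TimerQuery {
  long timeBegin;
  long timeEnd;
}
```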
The timerQueryBegin/timerQueryFinish commands execute
asynchronously, recording the current time on the GPU as the GPU
reaches each command in the queue. The application can then ask the
GPU whether the timer commands have finished executing and, if they
have, retrieve the recorded time values:
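Only `timerQueryResultGet` is named in this text; the readiness check below (`timerQueryResultIsReady`) and the mock behaviour are assumptions used to illustrate the polling pattern.

```java
// Sketch of polling for timer results. The mock "GPU" makes results
// ready on the second poll after finish, simulating the asynchronous
// execution of the timer commands.
final class TimerPoll {
  public static void main(final String[] args) {
    final MockTimers g = new MockTimers();

    g.timerQueryBegin();
    // ... commands being measured ...
    g.timerQueryFinish();

    // Ask the "GPU" whether both timer commands have executed; fetch
    // the elapsed time only once they have.
    while (!g.timerQueryResultIsReady()) {
      // A real program would do other useful work instead of spinning.
    }
    final long nanos = g.timerQueryResultGet();
    System.out.println("ready, nanos >= 0: " + (nanos >= 0L));
  }
}

final class MockTimers {
  private long timeBegin;
  private long timeEnd;
  private boolean finished;
  private int polls;

  void timerQueryBegin() { this.timeBegin = System.nanoTime(); }

  void timerQueryFinish() {
    this.timeEnd = System.nanoTime();
    this.finished = true;
  }

  boolean timerQueryResultIsReady() {
    // Not ready on the first poll; ready thereafter.
    return this.finished && this.polls++ > 0;
  }

  long timerQueryResultGet() { return this.timeEnd - this.timeBegin; }
}
```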
At most one timer query can be executing at any one time; calls to
timerQueryBegin/timerQueryFinish cannot
be nested.
Calling timerQueryResultGet implies
synchronization between the CPU and GPU and should therefore be called
after all other rendering operations for the frame have completed. As
timer queries are most often used to implement OpenGL profiling, this
is not usually an onerous restriction. Generally, applications will
allocate many timers for their rendering pipelines, update timers during
rendering of a frame, and then query all timers at the end of the frame
to measure the time taken by each part of the pipeline.
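The per-frame pattern described above might be organized as in the following sketch. The stage names and the `FrameProfiler` structure are illustrative, not part of the library, and host clocks stand in for GPU timestamps.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative frame profiler: one timer per pipeline stage, updated
// during the frame and read back in one pass at the end of the frame,
// giving a single synchronization point.
final class FrameProfiler {
  public static void main(final String[] args) {
    final Map<String, long[]> timers = new LinkedHashMap<>();

    // During the frame: bracket each stage with begin/finish.
    for (final String stage : new String[] { "shadows", "geometry", "post" }) {
      final long[] t = new long[2];
      t[0] = System.nanoTime();  // timerQueryBegin equivalent
      // ... render this stage ...
      t[1] = System.nanoTime();  // timerQueryFinish equivalent
      timers.put(stage, t);
    }

    // End of frame: query all timers at once.
    for (final Map.Entry<String, long[]> e : timers.entrySet()) {
      final long nanos = e.getValue()[1] - e.getValue()[0];
      System.out.println(e.getKey() + ": " + (nanos >= 0L ? "ok" : "bad"));
    }
  }
}
```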