In this article I list the metrics and alerts one should have in place when monitoring a GPU cluster to ensure efficient utilization of resources.
GPU cluster monitoring is critical for organizations that want to make optimal use of the limited capacity they have.
Without monitoring, it is easy for users to leave jobs running that do not use GPU resources at all, or that do not use them efficiently.
Some GPU clusters also rely on technologies (such as InfiniBand) that require users to provide images with specific libraries; omitting those dependencies can result in significantly worse compute performance.
- Allocated GPUs
  - Used to determine who (or which project) has GPUs allocated (i.e., currently assigned to a running workload)
- GPU utilization
  - Used to determine whether the GPU is partially or fully used and, if it is only partially used, to help identify the causes
- GPU memory utilization
  - Used to determine whether the GPU memory is partially or fully used
  - Used to identify out-of-memory issues and potential memory leaks
- InfiniBand receive/transmit bytes
  - Used to determine whether a workload is actually making use of the interconnect
- Job launch wait duration
  - Used to determine when jobs queue up because compute is exhausted, and how long jobs take to start
- Job duration
  - Used to gather statistics about the types of workloads running on the cluster in order to make informed decisions
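Most of these metrics can be scraped from the nodes themselves (for example with NVIDIA's DCGM exporter feeding Prometheus), while allocation, queueing and job duration come from the scheduler, and the InfiniBand counters from the fabric. As a minimal illustration, here is a sketch of how the per-GPU utilization and memory metrics could be sampled on a node with the NVML Python bindings; the surrounding collection and export machinery is left out.

```python
# Minimal per-node GPU metrics sampler (sketch).
# Assumes the NVML Python bindings are installed (pip install nvidia-ml-py).
import time
import pynvml

def sample_gpu_metrics():
    """Return one sample of utilization and memory usage for every GPU on this node."""
    pynvml.nvmlInit()
    try:
        samples = []
        for index in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu is a percentage
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # .total/.used are bytes
            samples.append({
                "gpu_index": index,
                "gpu_utilization_pct": util.gpu,
                "gpu_memory_used_pct": 100.0 * mem.used / mem.total,
                "timestamp": time.time(),
            })
        return samples
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    for sample in sample_gpu_metrics():
        print(sample)
```

With these metrics in place, the alerts below can be derived from them.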
- Allocated GPUs left unused
  - Used to detect jobs that request multiple GPUs but end up using only one or a few of them
- GPU utilization below threshold (<10%)
  - Used to detect workloads that do not make full use of the GPU or are allocated an oversized GPU
- GPU utilization above threshold (>90%)
  - Used to detect when the GPU is saturated
- GPU utilization range above threshold (>25%)
  - Used to detect uneven distribution of compute across the GPUs of a workload
- GPU memory utilization below threshold (<10%)
  - Used to detect workloads that do not make full use of the GPU memory or are allocated an oversized GPU
- GPU memory utilization above threshold (>95%)
  - Used to detect when a job is about to run out of GPU memory
- InfiniBand receive/transmit bytes at zero while running multi-node workloads
  - Used to identify workloads that are not properly configured to use InfiniBand
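To make the thresholds concrete, here is a minimal sketch of how these alert conditions could be evaluated from per-job aggregates. The job record (the field names, the aggregation into one mean value per GPU) is an assumption for illustration; in a real deployment these rules would more likely live in the alerting layer (e.g., Prometheus alerting rules) than in application code.

```python
# Sketch of the alert rules above, applied to one job's aggregated metrics.
# The `job` structure is hypothetical; thresholds mirror the list above.
LOW_UTIL_PCT = 10      # GPU utilization below threshold (<10%)
HIGH_UTIL_PCT = 90     # GPU utilization above threshold (>90%)
UTIL_RANGE_PCT = 25    # GPU utilization range above threshold (>25%)
LOW_MEM_PCT = 10       # GPU memory utilization below threshold (<10%)
HIGH_MEM_PCT = 95      # GPU memory utilization above threshold (>95%)

def evaluate_job_alerts(job):
    """Return the list of alerts triggered by one job's per-GPU mean metrics."""
    alerts = []
    util = job["gpu_utilization_pct"]          # one mean value per allocated GPU
    mem = job["gpu_memory_utilization_pct"]    # one mean value per allocated GPU

    idle = [u for u in util if u < 1]          # essentially unused GPUs
    if idle and len(idle) < len(util):
        alerts.append("Some allocated GPUs are left unused (job may over-request GPUs)")
    if any(u < LOW_UTIL_PCT for u in util):
        alerts.append("GPU utilization below threshold (oversized or underused GPU)")
    if all(u > HIGH_UTIL_PCT for u in util):
        alerts.append("GPU utilization above threshold (GPUs saturated)")
    if max(util) - min(util) > UTIL_RANGE_PCT:
        alerts.append("GPU utilization range above threshold (uneven load across GPUs)")
    if any(m < LOW_MEM_PCT for m in mem):
        alerts.append("GPU memory utilization below threshold")
    if any(m > HIGH_MEM_PCT for m in mem):
        alerts.append("GPU memory utilization above threshold (close to out of memory)")
    if job["node_count"] > 1 and job["infiniband_rx_tx_bytes"] == 0:
        alerts.append("Multi-node job with no InfiniBand traffic (check interconnect setup)")
    return alerts

if __name__ == "__main__":
    example_job = {
        "gpu_utilization_pct": [85, 2, 1, 1],
        "gpu_memory_utilization_pct": [60, 3, 2, 2],
        "node_count": 2,
        "infiniband_rx_tx_bytes": 0,
    }
    print(evaluate_job_alerts(example_job))
```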
All articles on this blog originate from my mind.
Most articles are written by me, but some are partially or entirely AI/LLM-generated.
Those articles will be tagged accordingly:
- No tag for completely original content
- partially-ai-generated for articles with one or many AI-generated sentences or with some feedback provided by AI
- fully-ai-generated when all the content is AI-generated

I also use additional tags in relation to AI usage, namely:
- ai-feedback for articles that were edited following AI feedback

I tag the articles with the LLMs that were involved. Look for tags starting with llm=.
I use a variety of LLM providers.
- It's important to know what your goals are.
- It's important to understand why they are your goals.
- It's important to determine which goals are more important than others (goal prioritization).
- It's important to know which goals depend on other goals (goal decomposition and dependencies).
- To reach a goal, you must first acquire the tools (knowledge, resources) to get to your objective.
- It's important to know when to drop/abandon goals.
- Sources of inefficiency:
  - Repeating the same task without sufficient experience.
- Always try to figure out the most optimal path toward a goal.
  - Observe others successful at achieving the goal you want to achieve.
  - Determine the differences between your state and theirs (what they know, what resources are available to them, etc.).
- How to determine when it is not possible to reach a goal at a given moment in time?
  - Not enough time available
  - Too costly
  - Dependencies not resolved/ready
The workstack is a very simple idea I had while working. It is based on the concept of a stack, as the name implies. As you work, you, like a computer, process things one at a time; as new things need to be done, you either throw them in a todo list (a queue) or start doing them right away (you stack them).
The workstack is a way to record notes about what you work on. As you work on a task, you either work on it to completion or are interrupted by the necessity of working on another task. In the first case, tasks are simply written one after the other with their start and end times. In the second case, items are also indented, so that it is possible to observe when a task forced you to "switch context".
An example of this note-taking format is as follows.
2018-05-18
Task 1 10:00-10:30
Task 2 10:35-10:50
Task 3 11:00-...
    Task 4 11:05-11:15
    Task 6 11:17-...
        Task 7 11:20-...
Task 5 (not begun)
In this case, the person started working on tasks 1 and 2, then began working on task 3. As he began his work, he noticed that something else was necessary, which spawned task 4. While he was working on task 4, he observed something that could be done but didn't have to be done right away, which spawned task 5. As he completed task 4, he returned to task 3, but noticed that something else also had to be done, which spawned task 6. During task 6, something else interrupted him and forced him to work on task 7; it could have been a coworker asking him for help on something. Task 5 could be a coworker asking for help as soon as he's available, but not wanting to interrupt him.
Conceptually, you would want to always complete a stack of operations before moving on to a new task. In practice, however, it is very common for a programmer to go down such a stack while working on code and never climb all the way back up, effectively leaving some of the tasks he started incomplete.
This format thus allows a programmer (or anyone working on tasks that can spawn other tasks) to better track what they were doing and what they did and did not complete.
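To make the idea concrete, here is a minimal sketch of the workstack as a data structure in Python. The class and method names are illustrative, not a reference implementation: starting a task while another is in progress nests it one level deeper, and finishing a task pops it off the stack.

```python
# Minimal sketch of a workstack (names are illustrative, not a reference implementation).
from datetime import datetime

class Workstack:
    def __init__(self):
        self.entries = []   # tasks in the order they were started
        self.stack = []     # indices of tasks that are still in progress

    def start(self, name):
        """Begin a task; if another task is in progress, this one is nested under it."""
        entry = {"indent": len(self.stack), "name": name,
                 "start": datetime.now(), "end": None}
        self.stack.append(len(self.entries))
        self.entries.append(entry)

    def finish(self):
        """Complete the task on top of the stack and return to the one below it."""
        self.entries[self.stack.pop()]["end"] = datetime.now()

    def render(self):
        """Render the indented log, with '...' for tasks that were never completed."""
        lines = []
        for e in self.entries:
            end = e["end"].strftime("%H:%M") if e["end"] else "..."
            lines.append("    " * e["indent"] + f'{e["name"]} {e["start"]:%H:%M}-{end}')
        return "\n".join(lines)

if __name__ == "__main__":
    ws = Workstack()
    ws.start("Task 1"); ws.finish()
    ws.start("Task 3")                 # still open at the end of the day
    ws.start("Task 4"); ws.finish()    # interruption, nested under Task 3
    print(ws.render())
```

Queued items that were never started (like task 5 in the example) would simply be jotted down at the bottom of the list rather than pushed on the stack.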
- https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
- https://en.wikipedia.org/wiki/Context_switch
- Parnin, C., & Rugaber, S. (2011). "Resumption strategies for interrupted programming tasks." Software Quality Journal, 19(1), 5-34. https://doi.org/10.1007/s11219-010-9104-9