
exec - 2025-10-16 14:56:04 - MAQAO 2025.1.2


Global Metrics

Total Time (s): 32.24
Max (Thread Active Time) (s): 14.75
Average Active Time (s): 14.49
Activity Ratio (%): 95.1
Average number of active threads: 86.303
Affinity Stability (%): 98.3
GFLOPS: 134.128
Time in analyzed loops (%): 2.07
Time in analyzed innermost loops (%): 1.88
Time in user code (%): 17.3
Compilation Options Score (%): 97.7
Array Access Efficiency (%): 83.9
Potential Speedups
  Perfect Flow Complexity: 1.00
  Perfect OpenMP/MPI/Pthread/TBB: 3.75
  Perfect OpenMP/MPI/Pthread/TBB + Perfect Load Distribution: 5.52
  No Scalar Integer:      Potential Speedup: 1.00 | Nb Loops to get 80%: 3
  FP Vectorised:          Potential Speedup: 1.00 | Nb Loops to get 80%: 5
  Fully Vectorised:       Potential Speedup: 1.01 | Nb Loops to get 80%: 9
  FP Arithmetic Only:     Potential Speedup: 1.01 | Nb Loops to get 80%: 5
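The near-unity vectorization speedups above follow from Amdahl's law combined with the very small loop coverage (2.07% of time in analyzed loops): even a large per-loop gain on that fraction barely moves the total. A minimal sketch (the function name and the 4x per-loop factor are illustrative, not taken from MAQAO):

```python
def amdahl_speedup(coverage, loop_speedup):
    """Overall speedup when a `coverage` fraction of runtime
    is accelerated by `loop_speedup`; the rest is unchanged."""
    return 1.0 / ((1.0 - coverage) + coverage / loop_speedup)

# With only 2.07% of time in analyzed loops, a hypothetical 4x
# per-loop gain yields an overall speedup of roughly 1.02 --
# consistent with the 1.00-1.01 potential speedups reported here.
print(round(amdahl_speedup(0.0207, 4.0), 3))
```

This is why the report steers attention toward the parallel-runtime speedups (3.75 and 5.52) rather than loop-level vectorization.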

CQA Potential Speedups Summary

Average Active Threads Count

FLOPS Breakdown

Loop Based Profile

Innermost Loop Based Profile

Application Categorization

Compilation Options

Source Object / Issue
  libllama.so
    hashtable.h
    unicode.cpp
    llama-vocab.cpp
    hashtable_policy.h
  libggml-cpu.so
    binary-ops.cpp
    common.h
    sgemm.cpp
    amx.cpp
    quants.c
    mmq.cpp
    vec.cpp
    ggml-cpu.c
    ops.cpp
  libggml-base.so
    - -g is missing for some functions (possibly ones added by the compiler), it is needed to have more accurate reports. Other recommended flags are: -O2/-O3, -march=(target)
    - -O2, -O3 or -Ofast is missing.
    - -march=(target) is missing.
  exec
    - -g is missing for some functions (possibly ones added by the compiler), it is needed to have more accurate reports. Other recommended flags are: -O2/-O3, -march=(target)
    - -O2, -O3 or -Ofast is missing.
    - -march=(target) is missing.
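The flagged binaries (exec and libggml-base.so) report no usable compilation options, so the usual remedy is to rebuild with debug info and target-specific optimization enabled for every target. A minimal, hypothetical reconfigure sketch, assuming the CMake build used for the other objects in this report; flag spellings vary by compiler (the icx command lines recorded below use -x GRANITERAPIDS rather than -march):

```shell
# Hypothetical: reconfigure so all targets get -g, an -O level, and a target ISA flag.
# Paths and flag values are illustrative, not taken from this report.
cmake -B build -S llama.cpp \
  -DCMAKE_BUILD_TYPE=RelWithDebInfo \
  -DCMAKE_C_FLAGS="-g -O3 -march=native" \
  -DCMAKE_CXX_FLAGS="-g -O3 -march=native"
cmake --build build -j
```

Rebuilding with -g mainly improves MAQAO's source-level attribution; the -O level and ISA flag affect the generated code itself.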

Loop Path Count Profile

Cumulated Speedup If No Scalar Integer

Cumulated Speedup If FP Vectorized

Cumulated Speedup If Fully Vectorized

Cumulated Speedup If FP Arithmetic Only

Experiment Summary

Experiment Name
Application: /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/run/binaries/icx_10/exec
Timestamp: 2025-10-16 14:56:04
Universal Timestamp: 1760619364
Number of processes observed: 1
Number of threads observed: 192
Experiment Type: MPI; OpenMP
Machine: isix06.benchmarkcenter.megware.com
Model Name: Intel(R) Xeon(R) 6972P
Architecture: x86_64
Micro Architecture: GRANITE_RAPIDS
Cache Size: 491520 KB
Number of Cores: 96
OS Version: Linux 5.14.0-570.39.1.el9_6.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Sep 4 05:08:52 EDT 2025
Architecture used during static analysis: x86_64
Micro Architecture used during static analysis: GRANITE_RAPIDS
Frequency Driver: intel_pstate
Frequency Governor: performance
Huge Pages: always
Hyperthreading: on
Number of sockets: 2
Number of cores per socket: 96
Compilation Options:
  exec: N/A
  libggml-base.so: N/A
  libggml-cpu.so: clang based Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317) /cluster/intel/oneapi/2025.1.0/compiler/2025.1/bin/compiler/clang --driver-mode=g++ --intel -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/.. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/../include -O2 -x GRANITERAPIDS -fno-vectorize -fno-slp-vectorize -fno-iopenmp-simd -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -D NDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -fno-associative-math -fiopenmp -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -c /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/amx/mmq.cpp -fveclib=SVML
  libllama.so: clang based Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317) /cluster/intel/oneapi/2025.1.0/compiler/2025.1/bin/compiler/clang --driver-mode=g++ --intel -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_BUILD -D LLAMA_SHARED -D llama_EXPORTS -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/src/. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/src/../include -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/../include -O2 -x GRANITERAPIDS -fno-vectorize -fno-slp-vectorize -fno-iopenmp-simd -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -MD -MT src/CMakeFiles/llama.dir/llama-vocab.cpp.o -MF src/CMakeFiles/llama.dir/llama-vocab.cpp.o.d -o src/CMakeFiles/llama.dir/llama-vocab.cpp.o -c /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/src/llama-vocab.cpp -fveclib=SVML
Comments

Configuration Summary

Dataset
Run Command: <executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -t 192 -n 0 -p 512 -r 3
MPI Command: mpirun -n <number_processes>
Number Processes: 1
Number Nodes: 1
Number Processes per Node: 1
Filter: Not Used
Profile Start: Not Used
Profile Stop: Not Used
Maximal Path Number: 4