Options

Stylizer

/home_nfs/users/bmorvana/hpcw_ecrad_nvhpc_MAQAO2/maqao_acfl.24.10.1_ecrad-small
/home_nfs/users/bmorvana/hpcw_ecrad_nvhpc_MAQAO2/maqao_gcc.15.1.0_ecrad-small
/home_nfs/users/bmorvana/hpcw_ecrad_nvhpc_MAQAO2/maqao_nvhpc.25.7_ecrad-small

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

Not available for this run

[ 0 / 0 ] Fastmath not used

Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced, and compliance with IEEE/ISO rules and specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions (see the sketch below).

Not available for this run
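To make the errno caveat in the fast-math advice above concrete, here is a minimal C sketch (illustrative only, not taken from ecrad). Built with default math-error handling it detects the domain error of sqrt(-1.0) through errno; under -ffast-math, which implies -fno-math-errno on compilers such as GCC and Clang, the same check may silently stop working.

    /* Minimal sketch (not from ecrad): illustrates the errno caveat of -ffast-math.
     * With default math-error handling, sqrt(-1.0) sets errno to EDOM; with
     * -ffast-math / -fno-math-errno the compiler may assume math calls never
     * touch errno, so this check can stop firing. Link with -lm. */
    #include <errno.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double x = -1.0;   /* volatile: keep the call from being constant-folded */
        errno = 0;
        double r = sqrt(x);
        if (errno == EDOM)
            printf("sqrt reported a domain error (r = %f)\n", r);
        else
            printf("errno was not set (r = %f)\n", r);
        return 0;
    }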

[ 2.97 / 3 ] Architecture-specific option -mcpu is used

[ 0 / 3 ] Compilation of some functions is not optimized for the target processor

Architecture-specific options are needed to produce efficient code for a specific processor (e.g. -mcpu=native).

[ 0 / 3 ] Compilation of some functions is not optimized for the target processor

Architecture-specific options are needed to produce efficient code for a specific processor (e.g. -mcpu=native).
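A quick way to check whether architecture-specific code generation actually took effect is to inspect the compiler's predefined feature macros. The sketch below is illustrative only: it assumes an AArch64 target and an ACLE-style compiler, and the exact macro set defined depends on the compiler and on the -mcpu/-march value used.

    /* Illustrative only: print which AArch64 SIMD feature macros were defined
     * for this translation unit. With -mcpu=native on an SVE-capable machine,
     * an ACLE-style compiler is expected to define __ARM_FEATURE_SVE in
     * addition to __ARM_NEON. */
    #include <stdio.h>

    int main(void)
    {
    #ifdef __ARM_NEON
        puts("__ARM_NEON defined (Advanced SIMD available)");
    #endif
    #ifdef __ARM_FEATURE_SVE
        puts("__ARM_FEATURE_SVE defined (SVE code generation enabled)");
    #endif
    #if !defined(__ARM_NEON) && !defined(__ARM_FEATURE_SVE)
        puts("no Arm SIMD feature macros defined");
    #endif
        return 0;
    }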

[ 0 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information

Functions without compilation information (typically not compiled with -g) account for 100.00% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be caused by compiler built-in functions (typically math) or statically linked libraries; in that case this warning can be ignored.

[ 2.40 / 3 ] Little of the time spent in analyzed modules comes from functions without compilation information

Functions without compilation information (typically not compiled with -g) account for 0.00% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be caused by compiler built-in functions (typically math) or statically linked libraries; in that case this warning can be ignored.

[ 0 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information

Functions without compilation information (typically not compiled with -g) account for 100.00% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be caused by compiler built-in functions (typically math) or statically linked libraries; in that case this warning can be ignored.

[ 0 / 4 ] Application profile is too short (0.40 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely fall below the measurement quality threshold (0.1 seconds). Rerun with a longer runtime: for example, use a larger dataset or include a repetition loop.

[ 0 / 4 ] Application profile is too short (0.10 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely fall below the measurement quality threshold (0.1 seconds). Rerun with a longer runtime: for example, use a larger dataset or include a repetition loop.

[ 0 / 4 ] Application profile is too short (0.23 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely fall below the measurement quality threshold (0.1 seconds). Rerun with a longer runtime: for example, use a larger dataset or include a repetition loop.
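A simple way to get the profiled time comfortably above the measurement threshold without changing the dataset is to wrap the measured kernel in a repetition loop, as the advice above suggests. A generic C sketch follows; compute_kernel, NREP and N are placeholders, not taken from ecrad.

    /* Generic sketch of a repetition loop around a hot kernel so that the
     * profiled time comfortably exceeds the measurement threshold.
     * compute_kernel(), NREP and N are placeholders, not ecrad code. */
    #include <stdio.h>

    #define NREP 100          /* pick a value so the total runtime is long enough */
    #define N    (1 << 20)

    static double a[N], b[N];

    static void compute_kernel(void)
    {
        for (int i = 0; i < N; ++i)
            a[i] = 2.5 * b[i] + a[i];
    }

    int main(void)
    {
        for (int i = 0; i < N; ++i) { a[i] = 1.0; b[i] = (double)i; }

        for (int rep = 0; rep < NREP; ++rep)   /* repetition loop */
            compute_kernel();

        printf("checksum: %f\n", a[N - 1]);    /* keep the result observable */
        return 0;
    }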

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.02 % of the execution time)

For representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.16 % of the execution time)

For representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 0 / 2 ] Too much execution time spent in category "Others" (78.68 %)

If the "Others" category represents more than 20% of the execution time, the application profile is missing a representative part of the application. Examine function details to identify the components of the "Others" category, then rerun after adding the most represented library names (e.g. those with more than 20% of coverage) to external_libraries (the names can be provided directly by ONE View).

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 0 / 3 ] Some functions are compiled with a low optimization level (O0 or O1)

For better performance, it is advised to help the compiler by using a proper optimization level (-O2 or higher). Warning: depending on the compiler, more aggressive optimization levels can decrease numerical accuracy.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

Strategizer

/home_nfs/users/bmorvana/hpcw_ecrad_nvhpc_MAQAO2/maqao_acfl.24.10.1_ecrad-small
/home_nfs/users/bmorvana/hpcw_ecrad_nvhpc_MAQAO2/maqao_gcc.15.1.0_ecrad-small
/home_nfs/users/bmorvana/hpcw_ecrad_nvhpc_MAQAO2/maqao_nvhpc.25.7_ecrad-small

[ 0 / 4 ] CPU activity is below 90% (20.48%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 0 / 4 ] CPU activity is below 90% (12.10%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 2 / 4 ] CPU activity is below 90% (52.27%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 0 / 4 ] Affinity stability is lower than 90% (26.03%)

Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.

[ 0 / 4 ] Affinity stability is lower than 90% (23.76%)

Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.

[ 3 / 4 ] Affinity stability is lower than 90% (80.66%)

Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.
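Once OMP_PLACES/OMP_PROC_BIND (or the MPI binding options above) are set, the pinning can be double-checked from inside the application. Below is a minimal, Linux-specific OpenMP sketch (not part of ecrad; compile with -fopenmp or the compiler's equivalent) that reports the core each thread runs on; with stable affinity, a thread should report the same core on both calls.

    /* Linux-only sketch: report the core each OpenMP thread runs on.
     * With stable pinning (e.g. OMP_PLACES=cores OMP_PROC_BIND=close),
     * each thread should report the same core on every call. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        #pragma omp parallel
        {
            int tid  = omp_get_thread_num();
            int cpu1 = sched_getcpu();
            /* ... the real work would run here ... */
            int cpu2 = sched_getcpu();
            #pragma omp critical
            printf("thread %d: cpu %d -> cpu %d%s\n", tid, cpu1, cpu2,
                   (cpu1 == cpu2) ? "" : "  (migrated)");
        }
        return 0;
    }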

[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (6.29%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (65.47%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (13.04%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.64%) is lower than cumulative innermost loop coverage (5.65%)

A cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage makes loop optimization more complex.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (9.66%) is lower than cumulative innermost loop coverage (55.81%)

A cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage makes loop optimization more complex.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (1.45%) is lower than cumulative innermost loop coverage (11.59%)

A cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage makes loop optimization more complex.

[ 0 / 4 ] A significant fraction of threads is idle (79.84%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 0 / 4 ] A significant fraction of threads is idle (88.47%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 1 / 4 ] A significant fraction of threads is idle (54.93%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in BLAS2 operations

BLAS2 calls often make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in BLAS2 operations

BLAS2 calls often make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in BLAS2 operations

BLAS2 calls often make poor use of the cache and could benefit from inlining.
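Since no BLAS2 time was measured in these runs, the following is only a generic illustration of what inlining a BLAS2 call means: a small matrix-vector product written as an explicit loop nest instead of a library call (e.g. dgemv), so the compiler can inline, fuse and keep the operands in cache. matvec is a hypothetical name, not ecrad code.

    /* Illustrative sketch (not ecrad code): a small dense matrix-vector product
     * written as an explicit loop nest instead of a BLAS2 call such as dgemv.
     * For small, repeated operations this exposes the loops to the compiler,
     * which can then inline, fuse and vectorize them with surrounding code. */
    #include <stdio.h>

    static void matvec(int m, int n, const double *A, /* row-major, m x n */
                       const double *x, double *y)
    {
        for (int i = 0; i < m; ++i) {
            double sum = 0.0;
            for (int j = 0; j < n; ++j)
                sum += A[(long)i * n + j] * x[j];   /* unit-stride over A's row and x */
            y[i] = sum;
        }
    }

    int main(void)
    {
        double A[2 * 3] = { 1, 2, 3,
                            4, 5, 6 };
        double x[3] = { 1, 1, 1 };
        double y[2];
        matvec(2, 3, A, x, y);
        printf("y = [%g, %g]\n", y[0], y[1]);   /* expected: [6, 15] */
        return 0;
    }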

[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (5.65%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (55.81%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (11.59%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 3 / 3 ] Less than 10% (0.00%) of the time is spent in BLAS1 operations

It may be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) of the time is spent in BLAS1 operations

It may be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) of the time is spent in BLAS1 operations

It may be more efficient to inline BLAS1 operations by hand.
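Likewise, no BLAS1 time was measured here, but as a generic illustration of hand-inlining BLAS1 operations: two separate axpy-style updates sweep the output vector twice, while a hand-written fused loop sweeps it once. fused_axpy is a hypothetical name, not ecrad code.

    /* Illustrative sketch (not ecrad code): fusing two axpy-style (BLAS1)
     * updates by hand. Two separate library calls, e.g.
     *     daxpy(n, a, x, 1, y, 1);  daxpy(n, b, z, 1, y, 1);
     * would traverse y twice; the fused loop below traverses it once. */
    #include <stdio.h>

    static void fused_axpy(int n, double a, const double *x,
                           double b, const double *z, double *y)
    {
        for (int i = 0; i < n; ++i)
            y[i] += a * x[i] + b * z[i];   /* single pass over x, z and y */
    }

    int main(void)
    {
        double x[4] = { 1, 2, 3, 4 };
        double z[4] = { 4, 3, 2, 1 };
        double y[4] = { 0, 0, 0, 0 };
        fused_axpy(4, 2.0, x, 0.5, z, y);
        printf("y = [%g, %g, %g, %g]\n", y[0], y[1], y[2], y[3]);  /* [4, 5.5, 7, 8.5] */
        return 0;
    }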

[ 2 / 2 ] Less than 10% (0.29%) of the time is spent in Libm/SVML (special functions)

[ 0 / 2 ] More than 10% (20.02%) of the time is spent in Libm/SVML (special functions)

The application makes heavy use of special math functions (powers, exp, sin, etc.); a suitable library version has to be used, and the exact accuracy requirements have to be evaluated. Perform input value profiling: first count how many different input values occur (see the sketch below). If the Arm Performance Libraries (ArmPL) are installed, load them and link with -lamath -lm.

[ 2 / 2 ] Less than 10% (0.63%) of the time is spent in Libm/SVML (special functions)
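The input value profiling suggested above can be approximated with a few lines of instrumentation around the hot math call: record the arguments, then count how many distinct values occur (a small number of distinct inputs would justify tabulation or caching). Illustrative C sketch with placeholder names (exp_profiled, NSAMPLES, the synthetic workload); it is not ecrad code.

    /* Illustrative sketch: count how many distinct inputs reach a hot call to
     * exp(). The sampling buffer, its size and the call site are placeholders. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NSAMPLES 100000

    static double samples[NSAMPLES];
    static size_t nsamples = 0;

    /* Wrapper inserted at the hot call site instead of calling exp() directly. */
    static double exp_profiled(double x)
    {
        if (nsamples < NSAMPLES)
            samples[nsamples++] = x;
        return exp(x);
    }

    static int cmp_double(const void *a, const void *b)
    {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    int main(void)
    {
        for (int i = 0; i < 50000; ++i)               /* stand-in for the real workload */
            (void)exp_profiled((double)(i % 32) / 8.0);

        qsort(samples, nsamples, sizeof(double), cmp_double);
        size_t distinct = nsamples > 0 ? 1 : 0;
        for (size_t i = 1; i < nsamples; ++i)
            if (samples[i] != samples[i - 1])
                ++distinct;
        printf("%zu calls sampled, %zu distinct input values\n", nsamples, distinct);
        return 0;
    }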

[ 0 / 4 ] Loop profile is flat

No hotspot was found in the application: the greatest loop coverage is 0.49%, and the cumulated coverage of the twenty hottest loops (3.96%) is lower than 20% of the profiled application time.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (5.73%), representing a hotspot for the application.

[ 0 / 4 ] Loop profile is flat

No hotspot was found in the application: the greatest loop coverage is 2.39%, and the cumulated coverage of the twenty hottest loops (8.92%) is lower than 20% of the profiled application time.

Optimizer

Analysis (per run: r0, r1, r2)

Loop Computation Issues
  Presence of expensive FP instructions: r0=6, r1=7, r2=7
  Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA: r0=3, r1=3, r2=2
  Presence of a large number of scalar integer instructions: r0=3, r1=5, r2=2

Control Flow Issues
  Presence of calls: r0=1, r1=0, r2=3
  Presence of 2 to 4 paths: r0=0, r1=1, r2=0
  Non-innermost loop: r0=0, r1=1, r2=1

Data Access Issues
  Presence of constant non-unit stride data access: r0=2, r1=2, r2=4
  Presence of indirect access: r0=2, r1=4, r2=6

Vectorization Roadblocks
  Presence of calls: r0=1, r1=0, r2=3
  Presence of 2 to 4 paths: r0=0, r1=1, r2=0
  Presence of more than 4 paths: r0=1, r1=1, r2=2
  Non-innermost loop: r0=0, r1=1, r2=1
  Presence of constant non-unit stride data access: r0=2, r1=2, r2=4
  Presence of indirect access: r0=2, r1=4, r2=6
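For reference, the two data-access roadblocks reported for all three runs correspond, in generic C form, to the patterns below (illustrative only, not taken from the ecrad sources; access_patterns is a hypothetical name). Unit-stride accesses are the vectorization-friendly baseline; constant non-unit strides and indirect accesses force gather-like memory operations that limit or prevent vectorization.

    /* Illustrative access patterns (not taken from the ecrad sources). */
    #include <stdio.h>

    void access_patterns(int n, int stride, double *a, const double *b,
                         const int *idx)
    {
        /* Unit stride: contiguous accesses, the vectorization-friendly baseline. */
        for (int i = 0; i < n; ++i)
            a[i] = 2.0 * b[i];

        /* Constant non-unit stride: consecutive iterations touch elements that are
         * 'stride' apart ("Presence of constant non-unit stride data access"). */
        for (int i = 0; i < n; ++i)
            a[i] = 2.0 * b[(long)i * stride];

        /* Indirect access: the address depends on the content of another array
         * ("Presence of indirect access"), forcing gather-like loads. */
        for (int i = 0; i < n; ++i)
            a[i] = 2.0 * b[idx[i]];
    }

    int main(void)
    {
        double a[4], b[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
        int idx[4] = { 7, 0, 3, 5 };
        access_patterns(4, 2, a, b, idx);
        printf("a = [%g, %g, %g, %g]\n", a[0], a[1], a[2], a[3]);
        return 0;
    }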