Help is available by hovering the cursor over any symbol or by checking the MAQAO website.
[ 4 / 4 ] Application profile is long enough (60.56 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 0 / 2 ] Too much execution time spent in category "Others" (39.47 %)
If the category "Others" represents more than 20% of the execution time, it means that the application profile misses a representative part of the application. Examine functions details to properly identify “Others” category components. Rerun after adding most represented library names (e.g. more than 20% of coverage) to external_libraries (the names can be directly provided by ONE View)
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
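To illustrate where that extra speedup typically comes from, the reduction below is a minimal sketch (a hypothetical kernel, not extracted from the profiled kmeans-gcc-O3 binary): under strict IEEE semantics the serial accumulation order blocks vectorization of the reduction, while -ffast-math lets the compiler reassociate it.

```c
/* Hypothetical reduction kernel, NOT taken from the profiled kmeans sources.
 * Under strict IEEE semantics the compiler must accumulate `sum` in source
 * order, because floating-point addition is not associative; that serial
 * dependence blocks SIMD vectorization of the reduction. Building with
 * -Ofast or -O3 -ffast-math allows the compiler to reassociate the sum and
 * use several vector accumulators, at the cost of bit-exact reproducibility. */
double sum_of_squares(const double *x, long n)
{
    double sum = 0.0;
    for (long i = 0; i < n; i++)
        sum += x[i] * x[i];
    return sum;
}
```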
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (8.43%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 1 / 4 ] A significant number of threads are idle (67.69%)
On average, more than 10% of observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 1 / 4 ] CPU activity is below 90% (32.31%)
CPU cores are idle more than 10% of the time. Threads supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop coverage is greater than 4% (7.74%), representing a hotspot for the application
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (8.42%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 4 / 4 ] Affinity is good (99.91%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
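The BLAS1 hint above can be illustrated with a minimal sketch; the routine and names are hypothetical, and since this run measures 0.00% BLAS1 time it is context only. For short vectors, a library call such as cblas_ddot adds per-call overhead that a plain loop, which the compiler can inline and vectorize in place, avoids.

```c
/* Hypothetical illustration of hand-inlining a BLAS1 operation.
 * Header name varies by BLAS distribution (OpenBLAS/Netlib use <cblas.h>). */
#include <cblas.h>

double dot_blas(const double *x, const double *y, int n)
{
    return cblas_ddot(n, x, 1, y, 1);   /* one library call per invocation */
}

double dot_inline(const double *x, const double *y, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)         /* inlined: no call overhead, can be
                                           vectorized in its calling context */
        s += x[i] * y[i];
    return s;
}
```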
[ 3 / 3 ] Functions mostly use all threads
Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (0.00%)
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.01%) lower than cumulative innermost loop coverage (8.42%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 5 - kmeans-gcc-O3 | Execution Time: 7 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA (see the FMA sketch after this table). This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost (see the indirect-access sketch after this table). There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Loop 13 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost (see the loop-interchange sketch after this table). There are 3 issues (= data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
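For the Loop 5 FMA penalty above, the sketch below shows the kind of reorganization the hint suggests; the kernel and names are hypothetical, not taken from the kmeans-gcc-O3 loop body. Writing the distance update as a single multiply-add expression lets the compiler contract it into a fused multiply-add when FP contraction is enabled (e.g. -ffp-contract=fast).

```c
/* Hypothetical squared-distance kernel (assumption, not the actual Loop 5).
 * The update d += diff * diff has the a*b + c shape that maps to one fused
 * multiply-add per iteration when the target supports FMA. */
float sq_distance(const float *point, const float *centroid, int dim)
{
    float d = 0.0f;
    for (int j = 0; j < dim; j++) {
        float diff = point[j] - centroid[j];
        d += diff * diff;               /* multiply-add pattern -> FMA */
    }
    return d;
}
```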
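For the indirect-access penalties above, here is a hedged sketch of one restructuring option; the array names and layouts are assumptions, not the actual kmeans-gcc-O3 data structures. Bucketing point indices per cluster fixes the destination row inside the inner loop, so the writes become direct and unit stride instead of data-dependent.

```c
/* Before: the write index depends on data loaded inside the loop
 * (sums[assign[i]]), which forces scalar or gather/scatter code. */
void accumulate_indirect(const float *pts, const int *assign,
                         float *sums, int n, int dim)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < dim; j++)
            sums[assign[i] * dim + j] += pts[i * dim + j];   /* indirect */
}

/* After (one option): process points cluster by cluster through a
 * precomputed membership list, so the destination row is fixed and the
 * inner loop accesses are direct and unit stride. */
void accumulate_bucketed(const float *pts, const int *const *members,
                         const int *count, float *sums, int k, int dim)
{
    for (int c = 0; c < k; c++)
        for (int m = 0; m < count[c]; m++) {
            const float *p = &pts[(long)members[c][m] * dim];
            for (int j = 0; j < dim; j++)
                sums[c * dim + j] += p[j];       /* direct, contiguous */
        }
}
```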
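For the Loop 13 constant non-unit-stride penalty, the sketch below shows a textbook loop interchange under an assumed row-major layout (again, not the actual kmeans-gcc-O3 code): the interchanged form walks memory contiguously, which both the vectorizer and the hardware prefetcher prefer.

```c
/* Before: the inner loop walks a column of a row-major matrix, so each
 * access to `a` strides by `dim` elements (constant non-unit stride). */
void scale_columns_strided(float *a, const float *s, int n, int dim)
{
    for (int j = 0; j < dim; j++)
        for (int i = 0; i < n; i++)
            a[i * dim + j] *= s[j];      /* stride-dim access to a */
}

/* After loop interchange: the inner loop is contiguous in memory. */
void scale_columns_interchanged(float *a, const float *s, int n, int dim)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < dim; j++)
            a[i * dim + j] *= s[j];      /* unit-stride access to a */
}
```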
[ 4 / 4 ] Application profile is long enough (60.11 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 0 / 2 ] Too much execution time spent in category "Others" (38.95 %)
If the category "Others" represents more than 20% of the execution time, it means that the application profile misses a representative part of the application. Examine functions details to properly identify “Others” category components. Rerun after adding most represented library names (e.g. more than 20% of coverage) to external_libraries (the names can be directly provided by ONE View)
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (8.50%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 1 / 4 ] A significant number of threads are idle (51.89%)
On average, more than 10% of observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 1 / 4 ] CPU activity is below 90% (48.11%)
CPU cores are idle more than 10% of the time. Threads supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop coverage is greater than 4% (7.84%), representing a hotspot for the application
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (8.49%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 4 / 4 ] Affinity is good (99.87%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 3 / 3 ] Functions mostly use all threads
Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (0.00%)
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.01%) lower than cumulative innermost loop coverage (8.49%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 5 - kmeans-gcc-O3 | Execution Time: 7 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Loop 13 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
[ 4 / 4 ] Application profile is long enough (60.01 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 0 / 2 ] Too much execution time spent in category "Others" (39.37 %)
If the category "Others" represents more than 20% of the execution time, it means that the application profile misses a representative part of the application. Examine functions details to properly identify “Others” category components. Rerun after adding most represented library names (e.g. more than 20% of coverage) to external_libraries (the names can be directly provided by ONE View)
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (8.63%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 2 / 4 ] A significant number of threads are idle (35.95%)
On average, more than 10% of observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 2 / 4 ] CPU activity is below 90% (64.05%)
CPU cores are idle more than 10% of the time. Threads supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop coverage is greater than 4% (7.83%), representing a hotspot for the application
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (8.62%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 4 / 4 ] Affinity is good (99.83%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 3 / 3 ] Functions mostly use all threads
Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (0.00%)
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.01%) lower than cumulative innermost loop coverage (8.62%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 5 - kmeans-gcc-O3 | Execution Time: 7 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Loop 13 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
[ 4 / 4 ] Application profile is long enough (60.00 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 0 / 2 ] Too much execution time spent in category "Others" (38.70 %)
If the category "Others" represents more than 20% of the execution time, it means that the application profile misses a representative part of the application. Examine functions details to properly identify “Others” category components. Rerun after adding most represented library names (e.g. more than 20% of coverage) to external_libraries (the names can be directly provided by ONE View)
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (8.78%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 3 / 4 ] A significant number of threads are idle (22.83%)
On average, more than 10% of observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 3 / 4 ] CPU activity is below 90% (77.17%)
CPU cores are idle more than 10% of the time. Threads supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop coverage is greater than 4% (7.87%), representing a hotspot for the application
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (8.77%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 4 / 4 ] Affinity is good (99.80%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 3 / 3 ] Functions mostly use all threads
Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (0.00%)
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.01%) lower than cumulative innermost loop coverage (8.77%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 5 - kmeans-gcc-O3 | Execution Time: 7 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Loop 13 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
[ 4 / 4 ] Application profile is long enough (60.06 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 0 / 2 ] Too much execution time spent in category "Others" (38.51 %)
If the category "Others" represents more than 20% of the execution time, it means that the application profile misses a representative part of the application. Examine functions details to properly identify “Others” category components. Rerun after adding most represented library names (e.g. more than 20% of coverage) to external_libraries (the names can be directly provided by ONE View)
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (9.64%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 3 / 4 ] A significant number of threads are idle (13.67%)
On average, more than 10% of observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 3 / 4 ] CPU activity is below 90% (86.33%)
CPU cores are idle more than 10% of the time. Threads supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop coverage is greater than 4% (7.66%), representing a hotspot for the application
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (9.50%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 4 / 4 ] Affinity is good (99.77%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 3 / 3 ] Functions mostly use all threads
Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (0.00%)
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.14%) lower than cumulative innermost loop coverage (9.50%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 5 - kmeans-gcc-O3 | Execution Time: 7 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Loop 13 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
[ 4 / 4 ] Application profile is long enough (59.94 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 0 / 2 ] Too much execution time spent in category "Others" (37.89 %)
If the category "Others" represents more than 20% of the execution time, it means that the application profile misses a representative part of the application. Examine functions details to properly identify “Others” category components. Rerun after adding most represented library names (e.g. more than 20% of coverage) to external_libraries (the names can be directly provided by ONE View)
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (12.08%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, 91.37% of observed threads are active
[ 4 / 4 ] CPU activity is good
CPU cores are active 91.37% of the time
[ 4 / 4 ] Loop profile is not flat
At least one loop coverage is greater than 4% (7.50%), representing a hotspot for the application
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (11.75%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 4 / 4 ] Affinity is good (99.75%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 3 / 3 ] Functions mostly use all threads
Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (0.00%)
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.33%) lower than cumulative innermost loop coverage (11.75%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 5 - kmeans-gcc-O3 | Execution Time: 7 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Loop 13 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
[ 4 / 4 ] Application profile is long enough (59.96 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 0 / 2 ] Too much execution time spent in category "Others" (36.02 %)
If the category "Others" represents more than 20% of the execution time, it means that the application profile misses a representative part of the application. Examine functions details to properly identify “Others” category components. Rerun after adding most represented library names (e.g. more than 20% of coverage) to external_libraries (the names can be directly provided by ONE View)
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (16.08%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, 93.34% of observed threads are active
[ 4 / 4 ] CPU activity is good
CPU cores are active 93.34% of the time
[ 4 / 4 ] Loop profile is not flat
At least one loop coverage is greater than 4% (7.59%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (15.20%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 4 / 4 ] Affinity is good (99.74%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 3 / 3 ] Functions mostly use all threads
Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (0.00%)
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.89%) lower than cumulative innermost loop coverage (15.20%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 5 - kmeans-gcc-O3 | Execution Time: 7 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Loop 13 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
[ 4 / 4 ] Application profile is long enough (60.67 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 0 / 2 ] Too much execution time spent in category "Others" (28.46 %)
If the category "Others" represents more than 20% of the execution time, it means that the application profile misses a representative part of the application. Examine functions details to properly identify “Others” category components. Rerun after adding most represented library names (e.g. more than 20% of coverage) to external_libraries (the names can be directly provided by ONE View)
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (34.17%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, 106.79% of observed threads are active
[ 0 / 4 ] CPU activity is below 90% (8.22%)
CPU cores are idle more than 10% of the time. Threads supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop coverage is greater than 4% (25.77%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (31.87%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 0 / 4 ] Affinity stability is lower than 90% (17.66%)
Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 0 / 3 ] Too many functions do not use all threads
Functions running on a reduced number of threads (typically sequential code) cover at least 10% of the application walltime (94.24%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads (a minimal OpenMP sketch follows).
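As referenced above, a minimal OpenMP sketch of distributing a sequential hot region over the available threads; the loop and names are hypothetical, since the report does not identify which sequential function dominates the 94.24%.

```c
/* Hypothetical kmeans-style assignment loop (assumption, not the actual
 * sequential region reported by ONE View). Each iteration is independent,
 * so a parallel-for spreads the work across all threads instead of
 * leaving it on a single thread. */
#include <omp.h>

void assign_points(const float *dist, int *assign, long n, int k)
{
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < n; i++) {
        int best = 0;
        for (int c = 1; c < k; c++)          /* pick the nearest centroid */
            if (dist[i * k + c] < dist[i * k + best])
                best = c;
        assign[i] = best;                    /* independent per iteration */
    }
}
```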
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (2.30%) lower than cumulative innermost loop coverage (31.87%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 5 - kmeans-gcc-O3 | Execution Time: 5 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues (= indirect data accesses) costing 4 points each. | 12 |
►Loop 13 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either via the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 3 issues (= data accesses) costing 2 points each. | 6 |