- April 1, 2017 at 9:47 pm #3129
Profiling measures execution time at the method level (functional statistics) and also collects run-time information such as memory, processor and thread usage, and the number of classes loaded while the application runs (non-functional statistics). It falls under performance analysis of the application in question, typically as exercised by a single user.
The way we would usually use a profiler is as follows:
1. Start the profiler and launch our application under it.
2. Use our application for some time, or exercise just the features we have identified as bottlenecks and would like to optimize.
3. Once our application is closed (or sometimes even before that), the profiler presents a breakdown of execution time per function. Some profilers also break execution time down per line within a function, so we can see where CPU time was spent using a top-down approach.
4. Usually a few functions in our application take an unusually long time to execute. From the profiling results we should be able to identify them and eliminate the performance problems.
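As a minimal sketch of what step 3 produces, the snippet below times two workloads and prints a per-method breakdown, the way a profiler's "hot methods" view would. The method names (`parseRecords`, `aggregate`) and the workloads are hypothetical examples, not part of any real application:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TimingSketch {
    // Accumulated wall-clock time per named block, in nanoseconds
    static final Map<String, Long> timings = new LinkedHashMap<>();

    // Run a body of code and record how long it took
    static void timed(String name, Runnable body) {
        long start = System.nanoTime();
        body.run();
        timings.put(name, System.nanoTime() - start);
    }

    public static void main(String[] args) {
        // Hypothetical workloads standing in for real application methods
        timed("parseRecords", () -> { for (int i = 0; i < 1_000_000; i++) Math.sqrt(i); });
        timed("aggregate",    () -> { for (int i = 0; i < 10_000; i++) Math.sqrt(i); });

        // Print a breakdown like a profiler's per-function report
        timings.forEach((name, ns) ->
            System.out.printf("%-12s %10.3f ms%n", name, ns / 1e6));
    }
}
```

A real profiler does this automatically for every method via instrumentation or sampling; the sketch only illustrates the kind of report described above.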
In MapReduce job execution, we can enable profiling through the JobConf class (Hadoop 1.x) or the Job class (Hadoop 2.x). By default, profiling of an MR job is disabled.
The relevant methods in the JobConf (or Job) class are:
- setProfileEnabled(boolean newValue): Set whether the system should collect profiler information for some of the tasks in this job. The information is stored in the user log directory.
- setProfileParams(String value): Set the profiler configuration arguments.
- setProfileTaskRange(boolean isMap, String newValue): Set the ranges of maps or reduces to profile.
- getProfileEnabled(): Get whether task profiling is enabled.
- getProfileParams(): Get the profiler configuration arguments.
- getProfileTaskRange(boolean isMap): Get the range of maps or reduces to profile.
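Putting these methods together, a sketch of a Hadoop 2.x job with profiling enabled for a small range of map and reduce tasks could look like the following. The job name and the input/output paths are hypothetical, and the snippet assumes a Hadoop cluster and classpath, so it is not runnable standalone:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ProfiledJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "profiled-job"); // hypothetical job name

        // Enable task profiling; output lands in the user log directory
        job.setProfileEnabled(true);

        // Profiler agent arguments; %s is replaced with the profile output file
        job.setProfileParams(
            "-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s");

        // Profile only tasks 0-2 of the maps and of the reduces
        job.setProfileTaskRange(true,  "0-2"); // maps
        job.setProfileTaskRange(false, "0-2"); // reduces

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The same settings are exposed as configuration properties (mapreduce.task.profile, mapreduce.task.profile.params, mapreduce.task.profile.maps and mapreduce.task.profile.reduces), so they can also be passed with -D on the command line instead of being set in code.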