Hadoop provides the ability to browse user application logs through the YARN Web UI. System logs (syslog), standard error (stderr), and standard output (stdout) messages can be accessed from the Tools –> Local Logs path on the YARN Web UI.
Tracing Logs for Failed/Killed Jobs:
In the screen below, we run the aggregatewordcount MapReduce program, which builds an aggregate count of the words in an input text file. This example can be found in hadoop-mapreduce-examples-2.3.0.jar in the share/hadoop/mapreduce directory.
The aggregatewordcount program actually expects an input file in sequence file format rather than text format, but we have deliberately supplied a text input file to make the job fail.
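For reference, a job like the one above can be submitted from the shell roughly as follows. The input and output names here are assumptions, and the command is guarded so the sketch is a no-op on a machine without a Hadoop install:

```shell
# Path to the examples jar shipped with Hadoop 2.3.0 (adjust $HADOOP_HOME as needed).
EXAMPLES_JAR="$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar"

# Submitting a plain text file (input.txt is an assumed name) makes the job fail,
# since aggregatewordcount expects sequence-file input.
if command -v hadoop >/dev/null 2>&1; then
  hadoop jar "$EXAMPLES_JAR" aggregatewordcount input.txt output
fi
```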
The advantage of the Web UI here is persistence: error messages are available on the terminal only as long as it stays open, but even after the terminal is closed the same logs can still be browsed through the Web UI. To check the logs of the above failed job through the Web UI:
1. Open the /logs/ directory from Tools –> Local Logs on the YARN Web UI.
2. Locate the userlogs/ directory under the /logs/ directory and open it.
The above directory contains logs for all the applications run by the user. To check the logs of our failed job with ID application_*_0009, open its corresponding log directory, then open any container directory to browse the actual syslog files.
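The layout the Web UI is browsing can be sketched on disk as follows. The log root and the application/container names below are hypothetical placeholders that merely follow YARN's naming scheme; real names come from your cluster:

```shell
# Recreate the userlogs/ layout in a scratch directory to illustrate where
# syslog files live (names are hypothetical examples, not from a real cluster).
LOG_ROOT=$(mktemp -d)
APP_DIR="$LOG_ROOT/userlogs/application_1394263602101_0009"
mkdir -p "$APP_DIR/container_1394263602101_0009_01_000001"
: > "$APP_DIR/container_1394263602101_0009_01_000001/syslog"

# Each container directory holds the container's syslog, stdout, and stderr.
# Find every syslog belonging to the application:
find "$APP_DIR" -name syslog
```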
3. Open the syslog files for detailed log information.
i) Job ID initialization and the status transition from NEW to INITED can be seen as shown below.
ii) Later, the job status changes from INITED to SETUP and from SETUP to RUNNING, as shown in the above screen.
iii) Once the job starts running, its Map and Reduce tasks are initiated, and their status changes from NEW to SCHEDULED.
iv) Exception messages are listed, as shown below.
From the above messages we can understand that the aggregatewordcount program expects a sequence file as input rather than a normal text file.
Below is a snapshot of the final status of the job:
Thus, by tracing application syslogs we can analyze the status of the map/reduce tasks, job status transitions, Java exception messages, and any informational or warning messages.
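Besides the Web UI, YARN also ships a yarn logs command-line tool for fetching container logs of a finished application (this requires log aggregation to be enabled on the cluster). The application ID below is a hypothetical example for illustration; substitute the real ID of your job:

```shell
# Hypothetical application ID; replace with the real one from the Web UI or
# from "yarn application -list".
APP_ID="application_1394263602101_0009"

# Dump all container logs for the application (no-op here if yarn is not installed).
if command -v yarn >/dev/null 2>&1; then
  yarn logs -applicationId "$APP_ID"
fi
```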