You are getting this exception because your output directory (/data/test_txt2/out) already exists in HDFS.

Just remember: when running a MapReduce job, never specify an output directory that already exists in HDFS. The following instructions should help you resolve this exception.

To run a MapReduce job, you write a command similar to the one below:

$ hadoop jar {name_of_the_jar_file.jar} {fully_qualified_main_class} {hdfs_input_path} {output_directory_path}

Example: hadoop jar facebookCrawler.jar com.wagh.wordcountjob.WordCount /home/facebook/facebook-cocacola-page.txt /home/facebook/crawler-output

Pay attention to the {output_directory_path}, i.e. /home/facebook/crawler-output. If this directory already exists in HDFS, Hadoop will throw "org.apache.hadoop.mapred.FileAlreadyExistsException".
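One quick way around the exception is to remove the stale output directory before re-running the job. A minimal sketch, assuming the `hdfs` CLI is on your PATH and reusing the example path above:

```shell
out_dir=/home/facebook/crawler-output

# Delete the old output directory so the job can recreate it.
# -r removes recursively; -f suppresses the error if the path is absent.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -rm -r -f "$out_dir"
else
  echo "hdfs CLI not found; skipping cleanup of $out_dir"
fi
```

Be careful with this approach in practice: it permanently deletes the previous run's results.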

Solution: always specify an output directory that does not yet exist at run time. Hadoop will create the directory for you automatically, so you need not create it yourself. Following the example above, the same command can be run like this:

hadoop jar facebookCrawler.jar com.wagh.wordcountjob.WordCount /home/facebook/facebook-cocacola-page.txt /home/facebook/crawler-output-1

The output directory {crawler-output-1} will then be created at run time by Hadoop.
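Instead of bumping the suffix by hand (-1, -2, ...), you can generate a fresh output directory on every run. A hedged sketch that appends a timestamp, reusing the jar, main class, and input path from the example above:

```shell
# Append a Unix timestamp so each run writes to a new HDFS directory.
out_dir="/home/facebook/crawler-output-$(date +%s)"
echo "Job output will go to ${out_dir}"

# Submit only when the hadoop CLI is actually available.
if command -v hadoop >/dev/null 2>&1; then
  hadoop jar facebookCrawler.jar com.wagh.wordcountjob.WordCount \
      /home/facebook/facebook-cocacola-page.txt "${out_dir}"
fi
```

This guarantees the FileAlreadyExistsException cannot occur, at the cost of accumulating output directories that you may want to clean up periodically.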

For more details, see: http://techpost360.blogspot.com/2015/09/hadoop-file-already-exists-exception.html