1. Can we deploy the JobTracker on a node other than the NameNode?
Ans: Yes. The JobTracker daemon can run on any node; in small clusters it often shares a machine with the NameNode, but in production it is usually run on a separate node.
2. What are the four modules that make up the Apache Hadoop framework?
Ans: 1. Hadoop Common
2. Hadoop Distributed File System (HDFS)
3. Hadoop YARN
4. Hadoop MapReduce
(NameNode, DataNode, JobTracker, and TaskTracker are daemons within these modules, not modules themselves.)
3. Where are Hadoop’s configuration files located?
Ans: In the conf directory of the Hadoop installation (e.g. $HADOOP_HOME/conf).
4. List Hadoop’s three configuration files.
Ans: core-site.xml, hdfs-site.xml, mapred-site.xml
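As an illustration, each of these files holds key/value properties. A minimal core-site.xml might look like this (the host name and port are placeholder values, not defaults from the source):

```xml
<?xml version="1.0"?>
<!-- core-site.xml: cluster-wide settings. fs.default.name tells
     clients where to find the NameNode (placeholder host/port). -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>
```

hdfs-site.xml and mapred-site.xml use the same property format for HDFS-specific settings (e.g. replication) and MapReduce-specific settings (e.g. the JobTracker address).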
5. What are “slaves” and “masters” in Hadoop?
Ans: Masters – NameNode and JobTracker.
Slaves – DataNode and TaskTracker.
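In Hadoop 1.x, these roles are typically driven by two plain-text files in the conf directory. A minimal sketch (host names are placeholders; note that conf/masters actually lists the node for the SecondaryNameNode, a common interview gotcha):

```
# conf/masters – node running the SecondaryNameNode
master-host

# conf/slaves – nodes running the DataNode and TaskTracker daemons
slave-host-1
slave-host-2
```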
6. How many DataNodes can run on a single Hadoop cluster?
Ans: A cluster can contain many DataNodes (thousands in large deployments), but only one DataNode daemon runs per slave machine.
7. What is the JobTracker in Hadoop?
Ans: The JobTracker is the master daemon for MapReduce: it accepts jobs from clients and assigns their tasks to TaskTrackers.
8. How many JobTracker processes can run on a single Hadoop cluster?
Ans: One.
9. What sorts of actions does the JobTracker process perform?
Ans: Once the JobTracker receives a job from a client, it consults the NameNode to find out which DataNodes hold the relevant data blocks. Using that metadata, it assigns each task to a TaskTracker running on (or near) the corresponding DataNode, which preserves data locality. If a TaskTracker is slow or goes down, the JobTracker assigns the same task to a different TaskTracker and makes sure the task is completed.
Ans: Once Job Tacker receives the input program file from client, it interacts with Name node to find out which Data Nodes are having the data blocks. Once the Job tacker receives the metadata information of Data node, it assigns the task to a task tracker residing in the respective Data nodes. If task tracker is processing slow or down, it will assigns the same task to difeerent task tracker and makes sure that the task is performed.