Q2) When we run Pig in local mode, is the query converted into MapReduce or not?
Ans:- To make things clear, the local and HDFS (MapReduce) modes mainly determine where files are read from: the local file system or HDFS.
Secondly, Hadoop ultimately needs byte code to execute; if none is available it raises an exception. So when the user writes a query in Pig Latin instead of Java, the query is compiled internally: the code the user writes is converted into MapReduce programs, a jar file is created and submitted, and a JobTracker job id is generated for the running job.
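As a sketch, the execution mode is chosen with the -x flag when launching Pig; the same script compiles to MapReduce jobs either way (file paths below are hypothetical):

```pig
-- Launched as:  pig -x local      wordcount.pig   (reads/writes the local file system)
-- or:           pig -x mapreduce  wordcount.pig   (reads/writes HDFS; the default mode)
lines   = LOAD 'input/words.txt' AS (word:chararray);  -- hypothetical input path
grouped = GROUP lines BY word;
counts  = FOREACH grouped GENERATE group, COUNT(lines) AS cnt;
STORE counts INTO 'output/wordcount';                  -- hypothetical output path
```

In both modes the script is compiled into the same plan; only the file system and execution engine underneath differ.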
Q3) How does the physical translator work during the compilation of a Pig query?
Ans:- Whenever the coder writes a Pig Latin program, compilation goes through 6 stages:
a) Query parsing
b) Semantic checking
c) Logical optimizer
d) Logical-to-physical translator
e) Physical-to-MapReduce translator
f) MapReduce launcher
When the coder loads the script file from the system (local/HDFS), the interpreter checks each line of the code for valid operators; if an error is found the program ends, otherwise a logical plan is generated. Note that a logical plan is generated for each line of the script, so the overall plan grows considerably, because every statement contributes its own logical plan.
The logical plan describes the operations but makes no reference to the data to be processed. The physical plan, by contrast, describes the physical operators Pig will use to execute the script, and it is ultimately compiled into a series of MapReduce jobs.
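The plans described above can be inspected without running the job by using Pig's EXPLAIN operator (the file and field names here are hypothetical):

```pig
-- EXPLAIN prints the logical, physical, and MapReduce plans for an alias
emp      = LOAD 'input/emp.txt' AS (name:chararray, salary:int);  -- hypothetical file
high_pay = FILTER emp BY salary > 50000;
EXPLAIN high_pay;   -- output shows Logical Plan -> Physical Plan -> MapReduce Plan
```

Comparing the three sections of EXPLAIN output is a practical way to see the logical-to-physical and physical-to-MapReduce translation stages at work.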
Q4) What are the limitations of Pig?
Ans:- a) Storage-layer limitation: Pig is a scripting language and does not have its own storage layer for data; its scripts run on Hadoop after being compiled into MapReduce jobs, just like hand-written MapReduce jobs.
b) It is not as effective as Spark when we feed in data via formats such as JSON.
c) Pig runs as a dataflow pipeline and has no control-flow conditionals such as if..then.
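Although if..then control flow is missing, per-record branching can still be expressed inside the pipeline with SPLIT and the bincond (?:) operator, as in this sketch (file and field names are hypothetical):

```pig
-- No if..then blocks exist, but the pipeline can branch on data values:
emp = LOAD 'input/emp.txt' AS (name:chararray, salary:int);  -- hypothetical file

-- SPLIT routes records into separate relations by condition
SPLIT emp INTO high IF salary > 50000, low IF salary <= 50000;

-- The bincond operator (?:) computes a per-record conditional value
tagged = FOREACH emp GENERATE name, (salary > 50000 ? 'high' : 'low') AS band;
```

These are data-level conditionals applied to every record, not script-level control flow, which is exactly the limitation the point above describes.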