SCALA - MULTITHREADING

  • Date: 20th January, 2022
  • By: Prwatech

Multithreading is designed for multitasking: it lets a program run several threads at the same time within one process.

Life Cycle of Thread: 

New → Runnable (about to launch) → Running → Blocked (on hold), then back to Runnable and Running again, and finally Terminated.
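As a quick illustration (a sketch of ours, not part of the original tutorial, and assuming Scala 2.12+ so the lambda converts to a Runnable), the JVM exposes these states through Thread.getState:

object ThreadStateDemo {
  def main(args: Array[String]): Unit = {
    val t = new Thread(() => Thread.sleep(500)) // a thread that only sleeps
    println(t.getState) // NEW: created but not yet started
    t.start()           // now Runnable / running
    Thread.sleep(100)   // give it time to reach its sleep call
    println(t.getState) // TIMED_WAITING: blocked (on hold) inside sleep
    t.join()            // wait for it to finish
    println(t.getState) // TERMINATED
  }
}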

Thread is a class. There are two ways to create a thread:

  • extend the Thread class
  • implement the Runnable trait

Example: 

package com.multi.thread

class MultiThreadClass extends Thread {
  override def run(): Unit = { // run() is already defined in Thread, so we need to override it
    println("Thread is running")
  }
}

object Multi { // create an object to hold the main method
  def main(args: Array[String]): Unit = {
    val me = new MultiThreadClass() // create an instance of MultiThreadClass
    me.start()
  }
}

 

Output: 

Thread is running 

 

Process finished with exit code 0 

Explanation: 

In the above example, start() is a method that launches a new thread and internally calls run(), so whatever we write inside run() is executed and printed.
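As a side note (a small sketch of ours, not part of the original example), the difference between start() and calling run() directly can be seen like this: run() executes on the current main thread, while start() creates a new thread that then invokes run().

object StartVsRun {
  def main(args: Array[String]): Unit = {
    val t = new MultiThreadClass() // the class defined in the example above
    t.run()   // runs on the main thread; no new thread is created
    t.start() // starts a new thread, which then calls run()
  }
}

Both calls print "Thread is running", but only the second one does so from a separate thread.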

There is another way to create a thread: by implementing the Runnable trait.

Example: 

package com.multi.thread

class MultiThreadClass2 extends Runnable { // Runnable is a trait with a single run() method
  override def run(): Unit = {
    println("this will also run and implemented from trait")
  }
}

object Multi2 {
  def main(args: Array[String]): Unit = {
    val mult2 = new MultiThreadClass2()
    val e = new Thread(mult2) // wrap the Runnable in a Thread; this is the implementation approach
    e.start()
  }
}

 

Output: 

this will also run and implemented from trait 

 

Process finished with exit code 0 

 

There is another method: sleep.

Thread.sleep(millis) pauses the current thread for the given number of milliseconds; since 1000 ms is one second, each print appears after a one-second wait. In the example below we create two threads from the single class MultiThreadClass3, so when t1.start() and t2.start() are called, one thread prints 1, the other prints 1, then 2 and 2, and so on, with each pair arriving roughly one second after the previous one. The result is given below.

1 

1 

2 

2 

3 

3 

4 

4 

5 

5 

6 

6 

7 

7 

Example: 

package com.multi.thread

class MultiThreadClass3 extends Thread {
  override def run(): Unit = {
    for (i <- 1 to 7) {
      println(i)
      Thread.sleep(1000)
    }
  }
}

object mainObc {
  def main(args: Array[String]): Unit = {
    val t1 = new MultiThreadClass3()
    val t2 = new MultiThreadClass3()
    t1.start()
    t2.start()
  }
}

 

Output:

1 

1 

2 

2 

3 

3 

4 

4 

5 

5 

6 

6 

7 

7 

Explanation: In the above example each number appears twice because two threads are running the same loop.

Another example of the sleep method, this time with three threads.

Example:  

package com.multi.thread

class MultiThreadClass3 extends Thread {
  override def run(): Unit = {
    for (i <- 1 to 7) {
      println(i)
      Thread.sleep(1000)
    }
  }
}

object mainObc {
  def main(args: Array[String]): Unit = {
    val t1 = new MultiThreadClass3()
    val t2 = new MultiThreadClass3()
    val t3 = new MultiThreadClass3()
    t1.start()
    t2.start()
    t3.start()
  }
}

 

Output: 

1 

1 

1 

2 

2 

2 

3 

3 

3 

4 

4 

4 

5 

5 

5 

6 

6 

6 

7 

7 

7 

 

Process finished with exit code 0 

Explanation:

From an output like the ones above (1, 1, 2, 2, 3, 3, …, 7, 7) I cannot tell which thread is running at what time; which line came from the first thread and which from the second is unknown to me, and after every one-second sleep the scheduling can change. To control the order, and to know which thread finishes first, there is one more method: join.

join method: t.join() makes the calling thread wait until thread t has finished, so we call join() on the thread instance we have created. In the program below, join() is used with t1 only, so all of t1's output is printed first; t2 and t3 then start and interleave as before. We can do the same thing for t2 and t3 as well.

Example 1: join() is used with t1 only

package com.multi.thread

class MultiThreadClass3 extends Thread {
  override def run(): Unit = {
    for (i <- 1 to 7) {
      println(i)
      Thread.sleep(1000)
    }
  }
}

object mainObc {
  def main(args: Array[String]): Unit = {
    val t1 = new MultiThreadClass3()
    val t2 = new MultiThreadClass3()
    val t3 = new MultiThreadClass3()
    t1.start()
    t1.join() // the main thread waits here until t1 has finished
    t2.start()
    t3.start()
  }
}

 

Output: 

1 

2 

3 

4 

5 

6 

7 

1 

1 

2 

2 

3 

3 

4 

4 

5 

5 

6 

6 

7 

7 

 

Process finished with exit code 0 

Example 2: join() is used with all of t1, t2, and t3

package com.multi.thread

class MultiThreadClass3 extends Thread {
  override def run(): Unit = {
    for (i <- 1 to 7) {
      println(i)
      Thread.sleep(1000)
    }
  }
}

object mainObc {
  def main(args: Array[String]): Unit = {
    val t1 = new MultiThreadClass3()
    val t2 = new MultiThreadClass3()
    val t3 = new MultiThreadClass3()
    t1.start()
    t1.join()
    t2.start()
    t2.join()
    t3.start()
    t3.join()
  }
}

 

Output: 

1 

2 

3 

4 

5 

6 

7 

1 

2 

3 

4 

5 

6 

7 

1 

2 

3 

4 

5 

6 

7 

 

Process finished with exit code 0 

getName and setName: Thread provides a getter/setter pair, much like encapsulation with its getter and setter methods: setName() assigns a name to a thread and getName() returns it.

Example: 

package com.multi.thread

class MultiThreadClass4 extends Thread {
  override def run(): Unit = {
    for (i <- 0 to 4) {
      println(this.getName() + " _ " + i)
      Thread.sleep(1000)
    }
  }
}

object ThreadObject {
  def main(args: Array[String]): Unit = {
    val t1 = new MultiThreadClass4()
    val t2 = new MultiThreadClass4()
    val t3 = new MultiThreadClass4()
    val t4 = new MultiThreadClass4()
    t1.setName("This is the first Thread")
    t2.setName("This is the second Thread")
    t3.setName("This is the third Thread")
    t1.start()
    t2.start()
    t3.start()
  }
}

 

Output: 

This is the second Thread _ 0 

This is the third Thread _ 0 

This is the first Thread _ 0 

This is the third Thread _ 1 

This is the second Thread _ 1 

This is the first Thread _ 1 

This is the third Thread _ 2 

This is the second Thread _ 2 

This is the first Thread _ 2 

This is the second Thread _ 3 

This is the third Thread _ 3 

This is the first Thread _ 3 

This is the second Thread _ 4 

This is the third Thread _ 4 

This is the first Thread _ 4 

 

Process finished with exit code 0 

Explanation: The order of the output depends on the system scheduler and the system clock. We can see this by changing the sleep time; with a different interval the interleaving changes, as the following program shows (the sleep is increased to 4000 ms):

Example: 

package com.multi.thread

class MultiThreadClass4 extends Thread {
  override def run(): Unit = {
    for (i <- 0 to 4) {
      println(this.getName() + " _ " + i)
      Thread.sleep(4000)
    }
  }
}

object ThreadObject {
  def main(args: Array[String]): Unit = {
    val t1 = new MultiThreadClass4()
    val t2 = new MultiThreadClass4()
    val t3 = new MultiThreadClass4()
    val t4 = new MultiThreadClass4()
    t1.setName("This is the first Thread")
    t2.setName("This is the second Thread")
    t3.setName("This is the third Thread")
    t1.start()
    t2.start()
    t3.start()
  }
}

 

 

Output: 

This is the first Thread _ 0 

This is the second Thread _ 0 

This is the third Thread _ 0 

This is the third Thread _ 1 

This is the first Thread _ 1 

This is the second Thread _ 1 

This is the third Thread _ 2 

This is the second Thread _ 2 

This is the first Thread _ 2 

This is the second Thread _ 3 

This is the third Thread _ 3 

This is the first Thread _ 3 

This is the third Thread _ 4 

This is the second Thread _ 4 

This is the first Thread _ 4 

 

Process finished with exit code 0 

How to set priority:

We use the setPriority method to assign a priority and getPriority to read it back. Priorities range from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10). The scheduler tends to favour higher-priority threads, so the output usually follows the priority we set, although priority is only a hint and not a guarantee.

Example: 

package com.multi.thread

class MultiThreadClass4 extends Thread {
  override def run(): Unit = {
    for (i <- 0 to 4) {
      println(this.getName())
      println(this.getPriority())
      Thread.sleep(200)
    }
  }
}

object ThreadObject {
  def main(args: Array[String]): Unit = {
    val t1 = new MultiThreadClass4()
    val t2 = new MultiThreadClass4()
    val t3 = new MultiThreadClass4()
    val t4 = new MultiThreadClass4()
    t1.setName("This is the first Thread")
    t2.setName("This is the second Thread")
    t1.setPriority(Thread.MAX_PRIORITY) // 10
    t2.setPriority(Thread.MIN_PRIORITY) // 1
    t1.start()
    t2.start()
  }
}

 

Output:  

This is the first Thread 

This is the second Thread 

10 

1 

This is the first Thread 

10 

This is the second Thread 

1 

This is the first Thread 

10 

This is the second Thread 

1 

This is the first Thread 

10 

This is the second Thread 

1 

This is the second Thread 

1 

This is the first Thread 

10 

 

Process finished with exit code 0 

We can also define an extra method in the thread class and call it alongside run():

package com.multi.thread

class MultiThreading5 extends Thread { // a class extending Thread
  override def run(): Unit = {
    for (i <- 0 to 7) {
      println(i)
      Thread.sleep(1000) // the slower loop, executed on the new thread
    }
  }

  def task(): Unit = {
    for (i <- 0 to 7) {
      println(i)
      Thread.sleep(200) // the faster loop, executed on whichever thread calls it
    }
  }
}

object MainOBC {
  def main(args: Array[String]): Unit = {
    val x1 = new MultiThreading5()
    x1.start() // runs run() on a new thread
    x1.task()  // runs task() on the main thread
  }
}

 

 

Output:

0 

0 

1 

2 

3 

4 

1 

5 

6 

7 

2 

3 

4 

5 

6 

7 

 

Process finished with exit code 0 

Explanation: The two loops interleave according to their sleep intervals: run() prints about once per second on the new thread, while task() prints every 200 ms on the main thread, so the faster loop finishes first. Pacing work at different frequencies like this is similar to how we manage streaming.

Threads offer many more options than the ones shown here.
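For example (a brief sketch of ours, not from the original notes, assuming Scala 2.12+ so the lambda converts to a Runnable), a few other commonly used members are Thread.currentThread(), isAlive, and daemon threads:

object MoreThreadOptions {
  def main(args: Array[String]): Unit = {
    val t = new Thread(() => println("worker: " + Thread.currentThread().getName))
    t.setDaemon(true)                     // daemon threads do not keep the JVM alive
    t.start()
    println("is t alive? " + t.isAlive()) // reports whether the thread is still running
    t.join()                              // wait so the worker gets a chance to print
  }
}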

Spark:

Scala framework:

  • The best-known framework built with Scala is Spark.
  • Spark is currently in its 3.x generation.

Spark:

Spark was created to overcome the shortcomings of MapReduce, which is not designed to handle real-time or near-real-time data. Around 2009, people at UC Berkeley who were working with MapReduce saw this gap and decided to build a new framework that could deal with real-time and near-real-time (streaming) datasets. The idea came out of AMPLab, a lab at Berkeley whose work was widely appreciated and adopted by the community; the original team later founded the company Databricks, and the project was donated to the Apache Software Foundation, which maintains it today.

Spark was designed by Matei Zaharia at UC Berkeley's AMPLab. It is built with Scala and supports Scala, Java, Python, R, and SQL.

Why we need Spark:

To see why, we can compare Spark with MapReduce.

Comparison between Spark and MapReduce

Sr. No | MapReduce | Spark
1. | It came in 2008. | The name Spark is associated with Shark (a shark being fast to attack); it came in 2012.
2. | It can process 1 TB of data. | It can handle 4.27 TB of data.
3. | It uses 910 nodes and takes about one minute (60 seconds) to handle it. | It uses 209 nodes and takes about one minute (60 seconds) to handle it.
4. | It is designed for batch/historical data (for example, last year's stored data). | It is designed to handle near-real-time data.
5. | Disk-based computation model. | In-memory computation model.
6. | Low-level programming model. | High-level programming model.

 

In-memory model: Spark relies heavily on caching; it keeps intermediate data in memory so that interactive and iterative queries can reuse it, which makes it up to 100 times faster than MapReduce.
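As a rough sketch (our addition; it assumes a Spark 3.x setup with spark-sql on the classpath, and the file name data.csv and column value are only placeholders), caching a DataFrame for repeated queries looks like this:

import org.apache.spark.sql.SparkSession

object CacheSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CacheSketch").master("local[*]").getOrCreate()
    val df = spark.read.option("header", "true").csv("data.csv")
    df.cache()                               // keep the data in memory after the first use
    println(df.count())                      // the first action materialises the cache
    println(df.filter("value > 10").count()) // later queries reuse the in-memory data
    spark.stop()
  }
}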

High-level programming model: As with Java or Python, we work through an API, an application programming interface that somebody else has already created; we simply use the ready-made building blocks instead of writing low-level code ourselves.
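To illustrate (a sketch of ours, with input.txt as a placeholder path), a word count written against Spark's ready-made API takes only a few lines, instead of hand-written mapper and reducer classes as in MapReduce:

import org.apache.spark.sql.SparkSession

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("WordCountSketch").master("local[*]").getOrCreate()
    val counts = spark.sparkContext
      .textFile("input.txt")         // read the file as lines
      .flatMap(_.split("\\s+"))      // split each line into words
      .map(word => (word, 1))        // pair each word with a count of 1
      .reduceByKey(_ + _)            // sum the counts per word
    counts.take(10).foreach(println) // show a few results
    spark.stop()
  }
}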

Note:  

  • Spark does not depend only on Hadoop.
  • Spark can work in standalone mode.

Spark Mode:

  • Standalone mode: we can download Spark on our system without Hadoop or any other dependency and work with it; no prerequisites are needed.
  • HDFS (cluster mode): this needs Hadoop (HDFS) to store the data, because HDFS is a centralized repository: data is pulled in from the source systems and everything is kept in HDFS.
  • AWS (Amazon Web Services): we can read data from an S3 bucket on AWS.

SPARK CORE: Spark Core holds the core fundamentals, such as the CPU and resource requirements and the basic execution engine, on top of which everything else in Spark is written.

Built on top of Spark Core are:

  • Spark SQL
  • Spark Streaming
  • Spark GraphX
  • Spark MLlib

Spark SQL: It provides the DataFrame, a distributed collection of structured data, and lets us run SQL queries on it directly, much as we would query a DBMS.
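A minimal sketch (our addition, assuming a local Spark 3.x session) of building a DataFrame and querying it with SQL:

import org.apache.spark.sql.SparkSession

object SparkSqlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SparkSqlSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq(("alice", 34), ("bob", 29)).toDF("name", "age") // a small in-memory DataFrame
    df.createOrReplaceTempView("people")                         // expose it to SQL
    spark.sql("SELECT name FROM people WHERE age > 30").show()
    spark.stop()
  }
}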

Spark Streaming: for processing streaming, i.e. near-real-time, data.

Spark GraphX: graph processing; social-media applications use this kind of workload.

Spark MLlib: the machine-learning library.

 

 

 
