As our societies become increasingly digital, computers are involved in nearly every business operation. The result is more data being stored and processed, and applications that must operate on very large data structures. As businesses and organizations evolve, they increasingly base their decisions on the processing of enormous databases, often containing hundreds of millions of records.

Although modern computers offer fast processors and large amounts of RAM at low cost, a single machine can still need an impractical amount of time to solve these large problems. A ubiquitous remedy for such performance problems is to have several processors or machines work on the problem together, i.e. parallelism. Basic knowledge of parallel programming is therefore a requirement for any contemporary IT professional, and you will certainly encounter it in any Computer Science degree program.

The case for parallelism

Moore’s law was formulated as early as 1965 by Gordon Moore, then head of research at Fairchild Semiconductor and later a co-founder of Intel. In its popular form it states that the available computing power on a single machine doubles roughly every two years[1]. Many experts were skeptical at the time, but the law held remarkably well through the IT boom of the following 50 years[2]. This evolution was instrumental in bringing PCs into the lives and homes of the general population. In recent years, however, the rate has slowed, and many industry watchers now declare the law dead[3]. Computing power will continue to grow, but likely not at the same pace. On top of this, the amount of data used in modern organizations keeps growing. More complex operations, and more data being stored and rendered at higher resolutions, all lead to more time- and memory-consuming programs. One way to make such programs faster is to utilize several CPUs.

Parallel programming realms

One usually makes a distinction between shared memory and distributed memory applications. In the shared memory model, all processors read and write the same RAM; see Figure 1:

Figure 1. Shared memory model.

The shared memory model requires less communication than distributed memory, but because several processors can overwrite data or touch the same records at the same time, a protocol for avoiding simultaneous access must be implemented. This is usually done with a locking procedure. One of the most common packages for shared memory programming is OpenMP[4]. Alternatively, one can use a modern multicore machine and let several of its cores work on the problem; in Java, for example, this is achieved with threading[5]. Shared resources are protected with locks, a technique that grants one thread at a time exclusive access to a critical section of the program[6]. A minimal Java sketch of this idea is shown below.
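To make the locking idea concrete, here is a minimal, self-contained Java sketch (not taken from the references; the class and constant names SharedCounterDemo, NUM_THREADS and ITERATIONS are chosen for illustration). Several threads increment a single shared counter, and a ReentrantLock ensures that only one thread at a time executes the critical section:

import java.util.concurrent.locks.ReentrantLock;

public class SharedCounterDemo {
    // Shared state: every thread reads and writes this one counter.
    private static long counter = 0;
    // Lock protecting the counter against simultaneous updates.
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        final int NUM_THREADS = 4;         // illustrative values
        final int ITERATIONS = 1_000_000;

        Thread[] workers = new Thread[NUM_THREADS];
        for (int i = 0; i < NUM_THREADS; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < ITERATIONS; j++) {
                    lock.lock();           // acquire exclusive access
                    try {
                        counter++;         // critical section
                    } finally {
                        lock.unlock();     // always release the lock
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();                      // wait for every thread to finish
        }
        // With locking the result is deterministic: NUM_THREADS * ITERATIONS.
        System.out.println("Counter = " + counter);
    }
}

Without the lock, increments from different threads could interleave and some updates would be lost, which is exactly the simultaneous-access problem described above.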

The other situation occurs when each CPU has its own memory space; this is called the distributed memory model. See Figure 2:

Figure 2. Distributed memory model.

With the distributed memory model, the problem of concurrent access is no longer present, but results must be exchanged and synchronised between processors, so there is added overhead in the form of communication. The most prevalent distributed memory package goes by the name of MPI (Message Passing Interface). A small sketch of the message-passing idea is shown below.
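To illustrate the message-passing style without requiring an actual MPI installation, the following Java sketch (an illustration only, not MPI; the names MessagePassingDemo and channel, and the 0..999 summing task, are assumptions) gives each of two workers its own private data and lets them exchange partial results over a queue, which plays the role of a send/receive pair:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        // A queue standing in for a send/receive channel between two workers.
        BlockingQueue<Long> channel = new ArrayBlockingQueue<>(1);

        // Worker 1: owns its own data, computes a partial sum, "sends" it.
        Thread worker1 = new Thread(() -> {
            long partial = 0;
            for (int i = 500; i < 1000; i++) {
                partial += i;              // private memory, no locking needed
            }
            try {
                channel.put(partial);      // communication step
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker1.start();

        // Worker 0 (the main thread): computes its own partial sum,
        // then "receives" the other result and combines the two.
        long partial0 = 0;
        for (int i = 0; i < 500; i++) {
            partial0 += i;
        }
        long total = partial0 + channel.take();   // synchronisation overhead lives here
        worker1.join();

        System.out.println("Total = " + total);   // sum of 0..999 = 499500
    }
}

In a real MPI program the two workers would be separate processes, possibly on different machines, and the queue would be replaced by calls such as MPI_Send and MPI_Recv over the network; the structure of the computation, however, is the same.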

 

Benchmarks

The common success criterion is called speedup: how much faster the program runs in parallel, i.e. the ratio of the processing time on a single machine to the processing time when using multiple CPUs[7]. In an ideal world one would get perfectly linear speedup (N times faster on N processors), but this does not happen in practice, largely because of the overhead discussed above. There is also a theoretical limit to the speedup, known as Amdahl’s Law, explained in detail by Jenkov[8]; its standard form is given below.
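In the standard textbook formulation (stated here for completeness rather than taken from the references), with serial running time $T_1$, running time on $N$ processors $T_N$, and a fraction $p$ of the program that can be parallelised:

\[
S(N) = \frac{T_1}{T_N},
\qquad
S(N) \le \frac{1}{(1 - p) + \dfrac{p}{N}}.
\]

For example, if 90% of a program can run in parallel ($p = 0.9$), then on $N = 8$ processors the speedup is at most $1 / (0.1 + 0.9/8) \approx 4.7$, and no number of processors can ever push it beyond $1 / (1 - p) = 10$.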

 

References

[1] Moore, Gordon E.: Cramming More Components onto Integrated Circuits. Electronics Magazine, 1965.

[2] Sneed, A.: Moore's Law Keeps Going, Defying Expectations. Scientific American, 2015.

[3] Waldrop, Mitchell M.: The Chips Are Down for Moore's Law. Nature, 2016.

[4] The OpenMP API. www.openmp.org, 2016.

[5] Singh, C.: Multithreading in Java with Examples. Beginners Book, www.beginnersbook.com

[6] Friesen, J.: Understanding Java Threads, Part 2: Thread Synchronization. JavaWorld, 2002.

[7] Parallel Computing. Wolfram MathWorld, http://mathworld.wolfram.com/ParallelComputing.html

[8] Jenkov, J.: Amdahl's Law. http://tutorials.jenkov.com/java-concurrency/amdahlslaw.html

 
