Question

1- Serial Implementation
Note: do all this work on Neumann.
Note the following
# We used the compiler flag -std=c++0x – this tells the compiler to use the C++11 standard.
# Note that the randoms program creates an object of the class std::mt19937, and passes a seed value to the constructor. Different seeds produce different sequences – this will be important when we parallelize the program.
# The function randone takes a reference to a std::mt19937 as input, and returns a floating point random number between 0 and 1. The mt19937 class overloads the () operator, so it can be called as a function (this is called a function object or functor). The () operator returns a random 32-bit integer – the max() function gives its maximum possible value.
# Adapt randoms.cpp to calculate π using Monte Carlo integration. You need to do the following:
# Copy randoms.cpp to a suitably named new file (e.g. pimonte.cpp).
# Seed the random number generator, and initialize an integer variable with the total number of trials: try 1,000,000 to start with.
# Initialize an integer variable (to zero) for the number of hits.
# In a for loop, perform the trials – pick two random numbers between 0 and 1 (using randone) – call these x and y.
# If x*x + y*y <= 1, increment your hit count.
# At the end of the loop, print out the estimate of π. This will be something like:
float pi = 4.0f * (float)nHits / (float)nTrials;

2- Parallelizing using MPI
For testing the code with a few nodes, you can use mpirun from the command line on the head node (that’s the cluster node you log in to). Note that you can ask for as many processors as you want, but once you go over the number of processors on the head node (12) the processes will no longer all run in parallel. For example, the command
mpirun -np 8 ./randoms will run the code interactively with 8 processes. Don’t use this for the timing tests we will be doing later – the results would be unreliable because you won’t have exclusive access to the processors, as you do on the compute nodes.
Here’s what you need to do:
# Again, start with a new file (e.g. MPI_pimonte.cpp).
# Make sure you have the mpi.h include, MPI_Init, and MPI_Finalize in place.
# Make sure you have a properly configured shell script for running on the queue, although you can use mpirun while you are getting the code working (see above).
# Make sure each process seeds the random number generator with a different value – use the process rank to do this.
# Use MPI_Send and MPI_Recv to transmit the counts to process 0 – each process apart from 0 will send a single MPI_INT to process 0. Process 0 will loop through all the other ranks, picking up the messages with MPI_Recv.
# Rank 0 should output the results.
3- Timing using MPI_Wtime
The MPI function MPI_Wtime() returns a wall-clock time in seconds. Time the execution of your code using MPI_Wtime and conduct the following experiment.

# Time the execution using 1, 2, 4, and 8 processes – your MPI code should work with a single process; if it doesn’t, think again about how it was implemented. Set the number of trials large enough that the execution on one process takes a few seconds.
# Calculate the parallel efficiency for each number of processes > 1.
# Find the API call which gives the accuracy of the MPI_Wtime function, and find out the value for the MPI implementation on Neumann.
Deliverables:
# Your working MPI-parallelized Monte Carlo code.
# A text file containing the results of the timing experiments and your calculated parallel efficiencies, from batch-queue runs on 1, 2, 4, and 8 processors, along with what you found for the accuracy of MPI_Wtime().

The source code randoms.cpp:
#include <iostream>
#include <random>

using namespace std;

float randone( mt19937 &gen )
{
    return (float)gen() / (float)gen.max();
}

int main()
{
    int seed = 12345;
    // C++11 Mersenne Twister random number generator
    mt19937 generator( seed );

    for ( int i = 0; i < 10; i++ )
    {
        float x = randone( generator );
        cout << x << "\n";
    }

    return 0;
}

Solution Preview


#include <cstdlib>
#include <random>
#include <iostream>
#include <mpi.h>
using namespace std;

const long TRIALS = 100000000;

float randone(mt19937 &gen) {
    return (float) gen() / (float) gen.max();
}

/**
 * Count the Monte Carlo trials that land inside the unit quarter circle.
 * @param nTrials number of trials
 * @param generator random number generator
 * @return number of hits
 */
long hits(long nTrials, mt19937 &generator) {
    long nHits = 0;

    float x, y;

    for (long i = 0; i < nTrials; i++) {
        x = randone(generator);
        y = randone(generator);
        if (x * x + y * y <= 1.0f) { // inside the quarter circle
            nHits++;
        }
    }

    return nHits;
}...
