

Re: Things That Made You Happy Today

#1081
What made me happy today was playing Stronghold Crusader with a friend of mine and my base erupting into flames of sheer excellence.

Seriously, the whole grain->Mill->Baker block turned into the 7th level of hell in mere seconds. It was so bad even my wells caught fire, so I decided to just roll with it and pretended to be Nero.
More people want exploding kittens than exploding ships. Somehow, this makes me happy.
- credits go to dwmagus

Re: Things That Made You Happy Today

#1089
IronDuke wrote:Today marks the date that I have become entirely uninvested emotionally in Limit Theory. :D I couldn't care less whether it releases or not. :monkey:

--IronDuke
Definitely makes browsing the forums more fun, huh? I honestly haven't done more than poke my head into General for months now. Really the only places I frequent here are Off Topic, Polls (for now), and Creative Writing (gotta get my REKT fix, man!) :ghost:

Re: Things That Made You Happy Today

#1094
F4wk35 wrote:Good for you that it worked out, Grumblesaur.

Even though I have absolutely no clue what that line does. XD
Gazz wrote:I can only tell that the bcast (bait cast) must be about fishing in an area defined in a world.
MPI (Message Passing Interface) is a parallel processing library. A program built with it runs as multiple processes launched from the same code, and MPI_Bcast is one of several functions that facilitate passing data between those processes: it broadcasts a buffer from one designated root process to all the others.
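
In case a concrete toy example helps, here's a minimal sketch of MPI_Bcast on its own, separate from the assignment code: the launcher (e.g. mpirun -np 4 ./a.out) starts several copies of the program, and the broadcast copies the root's value into every other process's copy of the variable.

Code: Select all

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
	MPI_Init(&argc, &argv);

	int rank = 0;
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	// only the root (rank 0) starts with the "real" data
	int value = (rank == 0) ? 42 : 0;

	// after this call, every process in MPI_COMM_WORLD has value == 42
	MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

	std::printf("rank %d sees value %d\n", rank, value);

	MPI_Finalize();
	return 0;
}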

This is an extraordinarily roundabout way to approach parallel processing because it's way easier to have multiple threads running within the same process and address space than have to shove things byte-by-byte through a function to get it to pop out the other side.
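
By contrast, here's roughly what the shared-memory, threaded version of the same idea looks like (again just a sketch, not the assignment): threads in one process can all read and write the same `forest` directly, with no copying or broadcasting, because they share an address space.

Code: Select all

#include <string>
#include <thread>
#include <vector>

int main() {
	// the whole forest lives in one address space, visible to every thread
	std::vector<std::string> forest(40, std::string(40, 'T'));

	std::vector<std::thread> workers;
	for (int t = 0; t < 4; t++) {
		workers.emplace_back([&forest, t] {
			// each thread updates its own block of ten rows in place
			for (int i = t * 10; i < (t + 1) * 10; i++) {
				forest[i][0] = 'T';   // stand-in for the real per-row update
			}
		});
	}
	for (auto &w : workers) w.join();
	return 0;
}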

This particular line of code appears in a for() loop I was using to have the processes share their data.

Code: Select all

for (int i = 0; i < HEIGHT; i++) {
	// copy row i into a plain, mutable char buffer
	strcpy(buffer, forest[i].c_str());
	// the process that owns row i broadcasts it to everyone else
	MPI_Bcast(buffer, WIDTH, MPI_CHAR, (i / (HEIGHT / size)), MPI_COMM_WORLD);
	// store the received row back into this process' forest
	forest[i] = std::string(buffer);
}
In turn, this for() loop is part of a forest fire cellular automaton, like Conway's Game of Life, but more flammable and less Turing-complete. Each process has an array of `std::string` called `forest` (`std::string` being the standard way of storing character data in C++). Because each MPI process is a separate instance of the program with its own address space, every process has its own copy of each variable under the same name, so this code is valid for each process.
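
If you're wondering what the actual rules look like, here's a sketch of one cell's update using the classic tree/burning/empty forest-fire rules; I'm using those generic rules purely for illustration, so the assignment's exact rules may differ.

Code: Select all

// One cell's next state under classic forest-fire rules ('T' = tree,
// 'F' = burning, '.' = empty). roll_a and roll_b are random draws in [0, 1).
// Purely illustrative; the assignment's rules may differ.
char next_state(char cell, bool neighbour_burning,
                double p_ignite, double p_grow,
                double roll_a, double roll_b) {
	if (cell == 'F') return '.';                                     // burning tree burns out
	if (cell == 'T') return (neighbour_burning || roll_a < p_ignite) ? 'F' : 'T';
	return (roll_b < p_grow) ? 'T' : '.';                            // empty ground may regrow
}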

The way this automaton is parallelized (mind you, this is dumb, but it's for an assignment) is that every process has its own `HEIGHT`-row-long `forest` (where `HEIGHT` is 40 and constant), but each one is assigned some chunk of the forest to work on. When executed as four processes, one process would take rows 0-9, the next would take 10-19, and so on. After each generation of the simulation, this for() loop runs to communicate the information the processes need for their boundary cases.
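
Spelled out (my own sketch, reusing the `HEIGHT`, `size`, and `rank` from above), the row assignment looks like this; the `i / (HEIGHT / size)` expression in the broadcast is just the inverse of it, giving the rank that owns row `i`.

Code: Select all

// Block partitioning sketch: with HEIGHT = 40 and size = 4, rank 0 owns
// rows 0-9, rank 1 owns rows 10-19, and so on (assumes size divides HEIGHT).
const int rows_per_rank = HEIGHT / size;
const int first_row = rank * rows_per_rank;
const int last_row  = first_row + rows_per_rank;   // exclusive

for (int i = first_row; i < last_row; i++) {
	// this process updates only its own rows each generation
}

// inverse mapping: the owner of row i is i / rows_per_rank, i.e. i / (HEIGHT / size)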

The call to `strcpy()` is a way of getting a mutable character array to pass along to the other process. (I could have made the `forest` a 2-dimensional array of characters, but that caused output printing issues, so I used `std::string` and some stupid type conversion instead.)
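
For completeness: `buffer` is just a plain, writable character array big enough for one row plus the terminating null that `strcpy()` appends, declared along these lines (the exact declaration isn't shown above):

Code: Select all

char buffer[WIDTH + 1];   // one row of the forest plus room for '\0'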

The call to MPI_Bcast broadcasts `buffer`, containing `WIDTH` elements of type `MPI_CHAR`, from the root process `(i / (HEIGHT / size))` (the rank that owns row `i`; an expression is needed here because every process must pass the same root value, and the variable `rank` resolves to a different value in each process) to all the other processes in the communicator `MPI_COMM_WORLD`.
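
For reference, the C prototype of the function is:

Code: Select all

int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
              int root, MPI_Comm comm);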

`forest[i] = std::string(buffer);` grabs the data that was received and updates that process' `forest` at the current row.

Re: Things That Made You Happy Today

#1095
Grumblesaur wrote: This is an extraordinarily roundabout way to approach parallel processing because it's way easier to have multiple threads running within the same process and address space than have to shove things byte-by-byte through a function to get it to pop out the other side.
But that's hard to generalise to distributed-memory systems and to make lock-free. ;)
