The Big Programming Thread - Page 822

Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20)
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
Manit0u
Profile Blog Joined August 2004
Poland, 17432 Posts
December 29 2016 08:04 GMT
#16421

Can you think of other ways to do it? The only way I could think of was basically just versions of DFS that involved backtracking based on checking a list to see if combinations had been completed already, but clearly that would be insanely inefficient.

Like, someone highly upvoted on Stack Exchange said this (it's for string permutations, but I imagine the logic is the same, yes?)


The problem with this approach is that it becomes infeasible as soon as you hit > 10 nodes (since your results will then run into the hundreds of millions really fast).
Time is precious. Waste it wisely.
Acrofales
Profile Joined August 2010
Spain, 18117 Posts
December 29 2016 08:48 GMT
#16422
Just as an aside (if brute-force search is being used for TSP, performance is clearly not the main goal anyway): computing and storing all permutations in memory is not going to work well even for medium-sized graphs. The number of permutations grows very, very quickly.
spinesheath
Profile Blog Joined June 2009
Germany, 8679 Posts
December 29 2016 09:11 GMT
#16423
You don't need to store any permutations in memory aside from the current optimum. So you should write an enumerator that has a way to calculate the next permutation when asked for, but never stores more than maybe the last permutation.

Then you consume the output of the enumerator one at a time, check whether it is better than the current optimum, and discard it if it is not.

Performance will still be awful of course.
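
Something like this, for example (just a sketch in Java, assuming the nodes are labelled 0..n-1; the names are made up):

import java.util.Iterator;
import java.util.NoSuchElementException;

// Lazily yields the permutations of {0, ..., n-1} in lexicographic order,
// keeping only the current permutation in memory.
class PermutationIterator implements Iterator<int[]> {
    private final int[] current;
    private boolean hasNext = true;

    PermutationIterator(int n) {
        current = new int[n];
        for (int i = 0; i < n; i++) {
            current[i] = i;
        }
    }

    @Override
    public boolean hasNext() {
        return hasNext;
    }

    @Override
    public int[] next() {
        if (!hasNext) {
            throw new NoSuchElementException();
        }
        int[] result = current.clone();

        // Advance to the lexicographically next permutation in place.
        int i = current.length - 2;
        while (i >= 0 && current[i] >= current[i + 1]) {
            i--;
        }
        if (i < 0) {
            hasNext = false; // that was the last (fully descending) permutation
        } else {
            int j = current.length - 1;
            while (current[j] <= current[i]) {
                j--;
            }
            swap(i, j);
            for (int a = i + 1, b = current.length - 1; a < b; a++, b--) {
                swap(a, b);
            }
        }
        return result;
    }

    private void swap(int a, int b) {
        int tmp = current[a];
        current[a] = current[b];
        current[b] = tmp;
    }
}

The consumer just loops over it, scores each permutation, and keeps the best one seen so far, so only the current permutation and the current optimum ever live in memory.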
If you have a good reason to disagree with the above, please tell me. Thank you.
Manit0u
Profile Blog Joined August 2004
Poland, 17432 Posts
Last Edited: 2016-12-29 12:50:34
December 29 2016 12:23 GMT
#16424
On December 29 2016 18:11 spinesheath wrote:
You don't need to store any permutations in memory aside from the current optimum. So you should write an enumerator that has a way to calculate the next permutation when asked for, but never stores more than maybe the last permutation.

Then you consume the output of the enumerator one at a time, check whether it is better than the current optimum, and discard it if it is not.

Performance will still be awful of course.


Well, you're going to have n! permutations (where n = the number of nodes), so even checking them one by one can take forever (even with n = 12 you'll be just a little shy of 480 million permutations). It's not that much different from the traveling salesman problem. You could perhaps do what the bees do (seriously, people have tried to solve the traveling salesman problem by tracking bees and checking how they optimize their pollen-gathering routes) and apply heuristics: check 40-50 permutations (or any number that doesn't hit performance too much) and pick the one you want from that pool. It won't be the best, but it might just be good enough.
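
Roughly like this, as a sketch (the cost function and names are placeholders):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.ToDoubleFunction;

class PermutationSampler {
    // Scores a fixed number of uniformly random permutations of {0, ..., n-1}
    // and returns the cheapest one found.
    static List<Integer> bestOfRandomSample(int n, int samples, ToDoubleFunction<List<Integer>> cost) {
        List<Integer> nodes = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            nodes.add(i);
        }

        List<Integer> best = null;
        double bestCost = Double.POSITIVE_INFINITY;
        for (int s = 0; s < samples; s++) {
            Collections.shuffle(nodes);                // one uniformly random permutation
            double c = cost.applyAsDouble(nodes);
            if (c < bestCost) {
                bestCost = c;
                best = new ArrayList<>(nodes);         // remember the best tour so far
            }
        }
        return best;
    }
}

With samples set to 40-50 this stays cheap regardless of n; the result is only the best of the pool, not the optimum, which is exactly the trade-off described above.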

Edit:

Today I was playing around with memoization a bit, and it seems it even works in PHP:


function fib($n)
{
    if ($n < 2) {
        return $n;
    }

    return fib($n - 1) + fib($n - 2);
}

$cache = [0, 1];

function fib_memo($n)
{
    global $cache;

    // Note: the extra "=== $n" comparison means this only counts as a cache hit when the
    // cached value happens to equal its index (0, 1, 5), so most sub-results are recomputed
    // anyway; a plain isset($cache[$n]) check would make the cache fully effective.
    if (isset($cache[$n]) && $cache[$n] === $n) {
        return $cache[$n];
    }

    return $cache[$n] = fib_memo($n - 1) + fib_memo($n - 2);
}

// checked fib(38) and fib_memo(38), to keep it simple
// Results:
// fib used 15342 ms for its computations
// It spent 32 ms in system calls
// ------------------------------------------
// fib_memo used 8453 ms for its computations
// It spent 32 ms in system calls
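
For comparison, the same memoization with a plain presence check, sketched in Java (illustrative, not code from the thread):

import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    private static final Map<Integer, Long> cache = new HashMap<>();

    static long fib(int n) {
        if (n < 2) {
            return n;
        }
        Long cached = cache.get(n);        // presence check only, no comparison against n
        if (cached != null) {
            return cached;
        }
        long value = fib(n - 1) + fib(n - 2);
        cache.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        // With the cache actually hitting, fib(38) does linear work instead of an exponential recursion.
        System.out.println(fib(38));
    }
}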
Time is precious. Waste it wisely.
mantequilla
Profile Blog Joined June 2012
Turkey, 779 Posts
December 29 2016 14:43 GMT
#16425
In JavaScript you do this kind of thing often:


someServiceCall()
    .success(
        function (parameters) {
            // do something
        }
    )
    .fail(
        function (parameters) {
            // do something else
        }
    );

or kinda equivalent:

someServiceCall(
    function successCallback(parameters) {
        // dance
    },
    function failureCallback(parameters) {
        // sovb
    }
);


I want to do a similar thing in Java. Since you cannot pass functions as parameters, I assume you need to pass an anonymous class. Creating an anonymous class for every service call will end up producing many, many classes; I don't know if that will cause problems, but I don't want to create big problems for a little syntactic sugar.

What I am doing right now is returning a Result object and deciding whether it's a success:


public Result serviceMethod() ....

...

Result r = serviceMethod();

if (r.isSuccess()) {
    // blabla
} else {
    // blabla
}


do you guys know a better way?
Age of Mythology forever!
Deleted User 101379
Profile Blog Joined August 2010
4849 Posts
December 29 2016 14:53 GMT
#16426
On December 29 2016 23:43 mantequilla wrote:
In JavaScript you do this kind of thing often:


someServiceCall()
    .success(
        function (parameters) {
            // do something
        }
    )
    .fail(
        function (parameters) {
            // do something else
        }
    );

or kinda equivalent:

someServiceCall(
    function successCallback(parameters) {
        // dance
    },
    function failureCallback(parameters) {
        // sovb
    }
);


I want to do a similar thing in Java. Since you cannot pass functions as parameters, I assume you need to pass an anonymous class. Creating an anonymous class for every service call will end up producing many, many classes; I don't know if that will cause problems, but I don't want to create big problems for a little syntactic sugar.

What I am doing right now is returning a Result object and deciding whether it's a success:


public Result serviceMethod() ....

...

Result r = serviceMethod();

if (r.isSuccess()) {
    // blabla
} else {
    // blabla
}


do you guys know a better way?


The keyword you are looking for is Lambda. On mobile, so can't give links, but Google should be able to help you.
spinesheath
Profile Blog Joined June 2009
Germany, 8679 Posts
December 29 2016 14:54 GMT
#16427
On the matter of passing functions as parameters: in C# you do that with the Action and Func delegate types, which basically wrap a method in an object. So I googled for the Java equivalent. I think this should cover that topic:
http://stackoverflow.com/questions/1184418/javas-equivalents-of-func-and-action
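
For reference, a tiny sketch of the java.util.function counterparts (the variable names are just for illustration):

import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class FunctionalDemo {
    public static void main(String[] args) {
        Consumer<String> log = msg -> System.out.println("log: " + msg); // roughly C#'s Action<string>
        Function<Integer, Integer> square = x -> x * x;                  // roughly C#'s Func<int, int>
        Supplier<Double> random = Math::random;                          // roughly C#'s Func<double>

        log.accept("square(7) = " + square.apply(7));
        log.accept("random() = " + random.get());
    }
}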
If you have a good reason to disagree with the above, please tell me. Thank you.
RoomOfMush
Profile Joined March 2015
1296 Posts
Last Edited: 2016-12-29 15:36:43
December 29 2016 15:34 GMT
#16428
On December 29 2016 23:43 mantequilla wrote:
I want to do a similar thing in Java. Since you cannot pass functions as parameters, I assume you need to pass an anonymous class. Creating an anonymous class for every service call will end up producing many, many classes; I don't know if that will cause problems, but I don't want to create big problems for a little syntactic sugar.

In Java you can do this:
public void doStuff(Consumer<SomeParameterType> successCallback, Consumer<SomeParameterType> failureCallback) {
    // ...
}

public void otherFunc() {
    doStuff(
        // success callback
        param -> { // the "param" part is just the name of the parameter. You can call it anything you like.
            // do stuff
        }, // <- notice the comma here!
        // failure callback
        param -> {
            // do other stuff
        }
    );
}


By the way:
On December 29 2016 23:43 mantequilla wrote:
Creating an anonymous class for every service call will end up producing many, many classes

The anonymous class is created at compile time and it is tiny. It has no real ill effect. Big Java frameworks like Swing or JavaFX have thousands of these.

Just in case you are wondering: lambdas are not necessarily implemented as anonymous inner classes. The Oracle JVM, for example, implements them as private static methods at the moment.
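
For comparison, the same callback written both ways (purely illustrative):

import java.util.function.Consumer;

public class LambdaVsAnonymous {
    public static void main(String[] args) {
        // Pre-Java-8 style: an anonymous inner class implementing the interface.
        Consumer<String> anon = new Consumer<String>() {
            @Override
            public void accept(String s) {
                System.out.println("anonymous: " + s);
            }
        };

        // Java 8 style: a lambda targeting the same functional interface.
        Consumer<String> lambda = s -> System.out.println("lambda: " + s);

        anon.accept("hello");
        lambda.accept("hello");
    }
}

Both behave identically at the call site; only how they are compiled differs, as described above.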
Manit0u
Profile Blog Joined August 2004
Poland, 17432 Posts
December 29 2016 15:39 GMT
#16429
You could just use Scala which is the new, better Java. Java is garbage.

But if you have to use Java, you should read this: http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/Lambda-QuickStart/index.html
Time is precious. Waste it wisely.
3FFA
Profile Blog Joined February 2010
United States, 3931 Posts
December 30 2016 04:05 GMT
#16430
I have to agree. Java is OK when you're beginning to learn, but even as I was learning Java, my classmates and I had conversations about all the shortcomings of the language.
"As long as it comes from a pure place and from a honest place, you know, you can write whatever you want."
Manit0u
Profile Blog Joined August 2004
Poland, 17432 Posts
Last Edited: 2016-12-30 09:14:31
December 30 2016 06:37 GMT
#16431
My biggest gripe with it is that it's simply too verbose and I was never able to get the "blondes and brunettes" vibe with it.

Edit:

I have a question regarding ElasticSearch. I need to sort stuff by semantic versioning. In pgsql this is:


string_to_array(versions.name, '.')::int[] DESC


How can I do that in es?
Time is precious. Waste it wisely.
Targe
Profile Blog Joined February 2012
United Kingdom, 14103 Posts
December 30 2016 17:36 GMT
#16432
Is anyone familiar with MPI?

I'm trying to determine whether the communication overhead is greater than the overhead of retrieving data from another node in a massively parallel computer.

11/5/14 CATACLYSM | The South West's worst Falco main
Mr. Wiggles
Profile Blog Joined August 2010
Canada, 5894 Posts
December 30 2016 18:12 GMT
#16433
On December 31 2016 02:36 Targe wrote:
Is anyone familiar with MPI?

I'm trying to determine whether the communication overhead is greater than the overhead of retrieving data from another node in a massively parallel computer.


I've used MPI a little bit on relatively small (~8 machine) clusters as part of some course work.

Would you mind explaining your problem a little more? I'm not quite sure what you're asking from what you've written here.
you gotta dance
Targe
Profile Blog Joined February 2012
United Kingdom, 14103 Posts
Last Edited: 2016-12-30 19:01:29
December 30 2016 18:56 GMT
#16434
On December 31 2016 03:12 Mr. Wiggles wrote:

I've used MPI a little bit on relatively small (~8 machine) clusters as part of some course work.

Would you mind explaining your problem a little more? I'm not quite sure what you're asking from what you've written here.

I'm trying to decide how to approach a problem (I need to write a program that repeatedly replaces each value in an array with the value of its 4 neighbours, with the exception of boundary values, until the values of the array settle to within a given precision).

I think I have access to 4 nodes (16 cores per node) and need to come up with and test a solution to the above problem.

Previously I wrote a program as a solution to the same problem, but for an environment with shared memory rather than distributed memory.

E: the idea is for the program to scale as well as possible with the number of threads, so I'm interested in what has the most overhead.
11/5/14 CATACLYSM | The South West's worst Falco main
Mr. Wiggles
Profile Blog Joined August 2010
Canada, 5894 Posts
December 30 2016 20:00 GMT
#16435
On December 31 2016 03:56 Targe wrote:
I'm trying to decide how to approach a problem (I need to write a program that repeatedly replaces each value in an array with the value of its 4 neighbours, with the exception of boundary values, until the values of the array settle to within a given precision).

I think I have access to 4 nodes (16 cores per node) and need to come up with and test a solution to the above problem.

Previously I wrote a program as a solution to the same problem, but for an environment with shared memory rather than distributed memory.

E: the idea is for the program to scale as well as possible with the number of threads, so I'm interested in what has the most overhead.


MPI itself doesn't have that much overhead, as the major implementations are mature and optimized. Most of your overhead is going to come from communication and synchronization costs in your program. This is all very workload dependent, so I can give my thoughts, but I can only really outline what to look at.

There's obviously going to be a cost if you have to transfer data back and forth between nodes, but depending on the length of the computation and how fast you can transfer data between nodes, this might be amortized. Similarly, any time your program has to perform some kind of synchronization/communication you're going to pay an overhead based on the communication latency between your nodes.

So, your observed speedups are going to depend a lot on how much synchronization and communication needs to occur between your nodes. If you can just partition your dataset between all the nodes and let them chug away, you're likely to see good speedups. If your nodes need to constantly communicate between each other, you might hit a bottleneck.

Depending on what synchronization primitives you're using in your shared-memory program, porting to MPI may be relatively straightforward. For example, if synchronization is barrier-based, pthread_barrier_wait() transfers directly to MPI_Barrier(). MPI provides some higher-level functions which can make porting a bit nicer, but is generally pretty low-level.

If you're just interested in making one problem instance run as fast as possible, looking at distributed memory frameworks makes sense. Depending on your workload, it might also make sense to just run four different instances on each available node. In this case, it depends on if you care about throughput or response time for a single problem instance.

All in all, I can only say that MPI doesn't have much inherent overhead, and that the choice to use it basically depends on your problem and what your workload looks like. If your problem exhibits coarse granularity and doesn't require much synchronization overhead, I'd say go for it and measure what speedups you see. If there's a large amount of synchronization required, then you might not see good speedups, even if you're giving additional resources to the program.
you gotta dance
Blisse
Profile Blog Joined July 2010
Canada, 3710 Posts
Last Edited: 2016-12-30 23:03:25
December 30 2016 22:24 GMT
#16436
On December 29 2016 23:43 mantequilla wrote:
do you guys know a better way?


class Service {
    void doWork(Callback callback) {
        boolean succeeded = performWork();  // placeholder for whatever the service actually does
        if (succeeded) {
            callback.onSuccess();
        } else {
            callback.onFailure();
        }
    }

    private boolean performWork() {
        return true;                        // stub so the sketch is self-contained
    }

    interface Callback {
        void onSuccess();
        void onFailure();
    }
}

public static void main(String[] args) {
    Service service = new Service();
    service.doWork(new Service.Callback() {
        @Override
        public void onSuccess() {
            textView.setText("Done");       // textView: some UI element assumed to be in scope
        }

        @Override
        public void onFailure() {
            textView.setText("Failed");
        }
    });
}


This is the Java standard in my experience. Worrying about it is a bit of premature optimization. The cost overhead is negligible. The code overhead is pretty straightforward after using Java for a while.


But you're opening up a can of worms in terms of returning status.

In C you'll see a lot of return 0 for success, return non-0 as error codes.
With C++ you see C-style status returns as well as exceptions.
With Python you just throw exceptions.

In modern languages you get to choose between anonymous functions (lambdas, Func class, Action class) and messaging (events, publish-subscribe patterns).

They all basically work to varying degrees depending on your use cases.

I dislike returning result objects for status because I believe the coupling gets too tight when stuff starts changing.
There is no one like you in the universe.
Targe
Profile Blog Joined February 2012
United Kingdom, 14103 Posts
December 30 2016 22:46 GMT
#16437
On December 31 2016 05:00 Mr. Wiggles wrote:
MPI itself doesn't have that much overhead, as the major implementations are mature and optimized. Most of your overhead is going to come from communication and synchronization costs in your program. This is all very workload dependent, so I can give my thoughts, but I can only really outline what to look at.

There's obviously going to be a cost if you have to transfer data back and forth between nodes, but depending on the length of the computation and how fast you can transfer data between nodes, this might be amortized. Similarly, any time your program has to perform some kind of synchronization/communication you're going to pay an overhead based on the communication latency between your nodes.

So, your observed speedups are going to depend a lot on how much synchronization and communication needs to occur between your nodes. If you can just partition your dataset between all the nodes and let them chug away, you're likely to see good speedups. If your nodes need to constantly communicate between each other, you might hit a bottleneck.

Depending on what synchronization primitives you're using in your shared-memory program, porting to MPI may be relatively straightforward. For example, if synchronization is barrier-based, pthread_barrier_wait() transfers directly to MPI_Barrier(). MPI provides some higher-level functions which can make porting a bit nicer, but is generally pretty low-level.

If you're just interested in making one problem instance run as fast as possible, looking at distributed memory frameworks makes sense. Depending on your workload, it might also make sense to just run four different instances on each available node. In this case, it depends on if you care about throughput or response time for a single problem instance.

All in all, I can only say that MPI doesn't have much inherent overhead, and that the choice to use it basically depends on your problem and what your workload looks like. If your problem exhibits coarse granularity and doesn't require much synchronization overhead, I'd say go for it and measure what speedups you see. If there's a large amount of synchronization required, then you might not see good speedups, even if you're giving additional resources to the program.


Sorry for being confusing; by overhead I meant the overhead of communication. Your post is still informative though.

The problem, I believe, requires me to synchronise all threads on every iteration (with each loop, threads need the new values updated by other threads or they won't be using the right values). For the shared-memory implementation I had a barrier at the end of every loop, after which the threads checked whether the required precision was reached.

I need to come up with a way of passing the updated information between the threads with as little overhead as possible.
11/5/14 CATACLYSM | The South West's worst Falco main
spinesheath
Profile Blog Joined June 2009
Germany, 8679 Posts
December 30 2016 23:49 GMT
#16438
On December 31 2016 07:46 Targe wrote:
The problem, I believe, requires me to synchronise all threads on every iteration (with each loop, threads need the new values updated by other threads or they won't be using the right values). For the shared-memory implementation I had a barrier at the end of every loop, after which the threads checked whether the required precision was reached.

I need to come up with a way of passing the updated information between the threads with as little overhead as possible.

It sounds like you probably have some wiggle room.

I imagine this as a 2D Array, with 4 neighbours top/bottom/left/right. Naively you would probably split a 40x40 Array into 16 10x10 Arrays with an extra ring outside for synchronization, so 16 12x12 Arrays. But maybe you could also use 14x14 arrays and synchronize 2 rings every 2 iterations. Depending on the concrete environment that could improve performance.

I don't know if this is even remotely close to your actual problem, but I would suspect that there are some options whatever it is exactly. Look for less intuitive chunks to cut your problem into and compare them to what you have.
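
To make the ring idea concrete, a rough sketch of one local update sweep in Java, assuming "replace with its 4 neighbours" means averaging them and that the outer ring of each block holds the neighbour values copied in before the sweep (names are illustrative):

class BlockSweep {
    // block is (interior + 2) x (interior + 2); the outermost ring is the ghost ring.
    // Returns the largest change seen so the caller can test the precision criterion.
    static double sweep(double[][] block) {
        int n = block.length;
        double[][] next = new double[n][n];
        double maxDelta = 0.0;

        for (int i = 1; i < n - 1; i++) {
            for (int j = 1; j < n - 1; j++) {
                next[i][j] = 0.25 * (block[i - 1][j] + block[i + 1][j]
                                   + block[i][j - 1] + block[i][j + 1]);
                maxDelta = Math.max(maxDelta, Math.abs(next[i][j] - block[i][j]));
            }
        }

        // Copy the updated interior back; the ghost ring is refreshed by the
        // neighbouring blocks (or MPI ranks) before the next sweep.
        for (int i = 1; i < n - 1; i++) {
            System.arraycopy(next[i], 1, block[i], 1, n - 2);
        }
        return maxDelta;
    }
}

The 14x14-with-two-rings variant mentioned above trades a slightly larger overlap region for half as many exchanges.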
If you have a good reason to disagree with the above, please tell me. Thank you.
RoomOfMush
Profile Joined March 2015
1296 Posts
December 31 2016 09:49 GMT
#16439
On December 31 2016 07:46 Targe wrote:
Sorry for being confusing; by overhead I meant the overhead of communication. Your post is still informative though.

The problem, I believe, requires me to synchronise all threads on every iteration (with each loop, threads need the new values updated by other threads or they won't be using the right values). For the shared-memory implementation I had a barrier at the end of every loop, after which the threads checked whether the required precision was reached.

I need to come up with a way of passing the updated information between the threads with as little overhead as possible.

I assume the size of your problem is sufficiently big to even think about spreading it over multiple processors. In that case I would suggest calculating the values that need to be shared first, then starting the communication while simultaneously working on the values that don't need to be shared. Ideally the messages from the other machines will arrive before you have finished all the local calculations and your communication overhead becomes zero.
Targe
Profile Blog Joined February 2012
United Kingdom, 14103 Posts
December 31 2016 18:58 GMT
#16440
On December 31 2016 08:49 spinesheath wrote:
It sounds like you probably have some wiggle room.

I imagine this as a 2D Array, with 4 neighbours top/bottom/left/right. Naively you would probably split a 40x40 Array into 16 10x10 Arrays with an extra ring outside for synchronization, so 16 12x12 Arrays. But maybe you could also use 14x14 arrays and synchronize 2 rings every 2 iterations. Depending on the concrete environment that could improve performance.

I don't know if this is even remotely close to your actual problem, but I would suspect that there are some options whatever it is exactly. Look for less intuitive chunks to cut your problem into and compare them to what you have.


I don't know why I didn't think of splitting it into smaller arrays; I've been splitting the work for each thread by the first X elements in the array (where X is the total number of elements divided by the number of threads). That's pretty eye-opening for possible solutions, thanks!

Trying that route, I would have to do some correctness testing to ensure that the correct results are achieved.

On December 31 2016 18:49 RoomOfMush wrote:
I assume the size of your problem is sufficiently big to even think about spreading it over multiple processors. In that case I would suggest calculating the values that need to be shared first, then starting the communication while simultaneously working on the values that don't need to be shared. Ideally the messages from the other machines will arrive before you have finished all the local calculations and your communication overhead becomes zero.


The size of the problem is up to me (but yes, I choose sizes large enough for splitting the work to be worth it).
I'll look to see if what you suggested is possible; my current understanding is that while sending/receiving a message a thread doesn't progress through its logic?

Also, as MPI requires both the sending thread and the receiving thread to call a send/receive function at the same time, I'll have to have some sort of master/slave system in place for synchronisation? But having 1 master for 10+ slaves seems like a major bottleneck, as it will have to wait for each thread?
11/5/14 CATACLYSM | The South West's worst Falco main