The Big Programming Thread - Page 823

Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20)
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
Yurie
Profile Blog Joined August 2010
11929 Posts
January 01 2017 10:43 GMT
#16441
I have been preparing for the Java exam in a week's time and ran into a recursive example I can't understand. The below could of course be done better without recursion, but that doesn't matter here.

public static int findMax(int[] v, int n) {
    if (n == 1) {
        return v[0];
    } else {
        int temp = findMax(v, n - 1);
        if (v[n - 1] > temp) {
            return v[n - 1];
        } else {
            return temp;
        }
    }
}


Where I get stuck is that the int temp = findMax line should set temp to v[0]. After that it shouldn't return the max unless you get lucky. So for some reason the int temp = findMax line is bypassed and the actual comparison is used before running again? I just don't get the logic behind the code; perhaps it's a printing error, but I don't understand recursion well enough to know.
spinesheath
Profile Blog Joined June 2009
Germany8679 Posts
January 01 2017 11:01 GMT
#16442
On January 01 2017 19:43 Yurie wrote:
Where I get stuck is that the int temp = findMax line should set temp to v[0]. After that it shouldn't return the max unless you get lucky. So for some reason the int temp = findMax line is bypassed and the actual comparison is used before running again? I just don't get the logic behind the code; perhaps it's a printing error, but I don't understand recursion well enough to know.

This method finds the max in the first n elements of the array v. Assume that findMax is correct for n-1 (same concept as proof by induction). The first if says: if we only have one element, obviously that is the max. Then in the else we first find the max of the first n-1 elements. As per our assumption, this returns the correct answer. So now that we know the max of the first n-1 elements, we only need to check it against the very last element, v[n-1], which wasn't looked at in findMax(v, n-1). If the very last element is bigger, we return that, else the max of the first n-1 elements.

That's the most important thing about recursion: Don't try to figure out what the recursion does in detail. Assume your method does the right thing for some different case (usually a smaller n of some sorts), then build the solution for your one current n using that assumption. All you have to do is cover the corner cases (like 0 or 1) and solve the n case using the n-1 case.

By the way, this method doesn't work too well if you call it with n < 1.
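If it helps to see that concretely, here is a throwaway version with prints (the class name and array values are just made up for illustration) that shows the calls unwinding from the base case back up:

public class FindMaxTrace {
    // Same findMax as above, with prints to show how the calls unwind.
    static int findMax(int[] v, int n) {
        if (n == 1) {
            System.out.println("findMax(v, 1) -> base case, returns v[0] = " + v[0]);
            return v[0];
        }
        int temp = findMax(v, n - 1);                  // max of the first n-1 elements
        int result = (v[n - 1] > temp) ? v[n - 1] : temp;
        System.out.println("findMax(v, " + n + ") -> compares v[" + (n - 1) + "] = "
                + v[n - 1] + " against " + temp + ", returns " + result);
        return result;
    }

    public static void main(String[] args) {
        int[] v = {1, 2, 3, 1};
        System.out.println("max = " + findMax(v, v.length));
    }
}

The base-case line prints first because it is the deepest call; every line after it is one level of the recursion returning.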
If you have a good reason to disagree with the above, please tell me. Thank you.
Yurie
Profile Blog Joined August 2010
11929 Posts
January 01 2017 11:14 GMT
#16443
On January 01 2017 20:01 spinesheath wrote:

This method finds the max in the first n elements of the array v. Assume that findMax is correct for n-1 (same concept as proof by induction). The first if says: if we only have one element, obviously that is the max. Then in the else we first find the max of the first n-1 elements. As per our assumption, this returns the correct answer. So now that we know the max of the first n-1 elements, we only need to check it against the very last element, v[n-1], which wasn't looked at in findMax(v, n-1). If the very last element is bigger, we return that, else the max of the first n-1 elements.

That's the most important thing about recursion: Don't try to figure out what the recursion does in detail. Assume your method does the right thing for some different case (usually a smaller n of some sorts), then build the solution for your one current n using that assumption. All you have to do is cover the corner cases (like 0 or 1) and solve the n case using the n-1 case.

By the way, this method doesn't work too well if you call it with n < 1.


The part I bolded is the part I have trouble understanding. I assume I will have to write recursion of a similar level of complexity or higher on the exam, so I need to understand it well enough to do similar things. (Perhaps it will only be recursion through binary trees, but one can never know.)

I know it would not work with n < 1. As an example it doesn't really cover everything that could go wrong, since that would make for very long examples. I prefer this style to adding another if branch for invalid input or similar.
spinesheath
Profile Blog Joined June 2009
Germany8679 Posts
Last Edited: 2017-01-01 11:48:06
January 01 2017 11:44 GMT
#16444
On January 01 2017 20:14 Yurie wrote:
The part I bolded is the part I have trouble understanding. I assume I will have to write recursion of a similar level of complexity or higher on the exam, so I need to understand it well enough to do similar things. (Perhaps it will only be recursion through binary trees, but one can never know.)

The trick is that you don't have to understand how it does that. It's a call to your recursive method, but with a smaller n as input. You just assume it does what it promises to do.

If it helps, we can look at it from the other direction - from n=1 upwards: If n=1, you have an obvious solution. For n=2, we build our method around using the result of findMax(v, 2-1). Which we know to be correct. This makes findMax(v, 2) return the correct result. Next we determine that findMax(v, 3) is correct because findMax(v, 3-1) is correct. Continue that infinitely and you have proof by induction. Continue it until int.MaxValue and you know your recursion is correct.

But I find that the 0 to infinity approach isn't very helpful when writing recursive methods. It helps verify that the concept works. When I write recursive methods, I just assume that my method actually does what it promises to do. I don't care how it does it; all that matters is that the result is correct. Then I wrap it with the scaffolding of taking care of corner cases and solving the n case by using the n-1 case.

Sometimes you have more complicated patterns, like solving both the 0 and 1 corner cases and then solving n by using n-2, or solving n by using both n-1 and n-2. Whatever it is, you firmly trust that your method does the right thing for any n smaller than the current n.

It seems like many people have trouble with this at first. I too think that it's a rather unintuitive way of thinking. But in the end it's really quite the consistent pattern. It just takes practice, I guess.
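A tiny illustration of that last pattern, for what it's worth (the classic Fibonacci definition, nothing to do with your exam material): two base cases, and the n case built from both n-1 and n-2, which we simply trust to be correct.

// Two base cases (0 and 1); the n case uses both n-1 and n-2.
// We assume both smaller calls return the right answer.
static long fib(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n - 1) + fib(n - 2);
}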
If you have a good reason to disagree with the above, please tell me. Thank you.
Acrofales
Profile Joined August 2010
Spain18117 Posts
January 01 2017 12:24 GMT
#16445
Have you seen proof by induction in logic classes? It is generally very similar. Basically the way recursion works is:

1. Solve a very trivial case. In your case, that is the maximum of 1 number.
2. Solve the "inductive step": assume the smaller case is already solved (in your situation, the line temp = findMax(v, n-1)), and solve for the current step.

Proof by induction is similar, but you prove some theorem holds in the trivial case and the inductive step, rather than solving some problem.

As an example of how this works, let's work through your code sample for the array v = [1, 2, 3, 1] and n = 2.

In the first call, n != 1, so we end up in the else branch. The first thing your code then does is call findMax with the same v but with n-1, so in the recursive call n = 1.
In that call we see that n == 1, and thus we return v[0]. This means that back in the original call, temp is assigned the value of v[0], so temp = 1. Now the rest of the else clause is executed: we test whether v[n-1] > temp. Here n = 2 (remember, we're back in the first call), so we test whether v[1] > temp. v[1] = 2 and temp = 1, so yes, v[1] > temp. We therefore return the value of v[1] (which is 2) and the function is done.

Now as an exercise for yourself, run through it for n=4. What happens?
Yurie
Profile Blog Joined August 2010
11929 Posts
Last Edited: 2017-01-01 13:22:58
January 01 2017 13:22 GMT
#16446
Thank you, both of you. I think I get it now. Handle n = 0 or 1, whichever is the base case. Then build the next step on a call with a smaller n as part of it. Finally check that this holds for larger n and that you did not miss a base case, and then the recursion is correct.

So for 4 it recurses down another step (n = 3) and finds 3, setting temp to that. Back at n = 4 it finds 1, which is smaller. Temp is still 3 and is returned, since there are no further recursions to do.
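If you want to double-check that trace against the code, a throwaway main next to the findMax method (array taken from Acrofales' example) would be:

public static void main(String[] args) {
    int[] v = {1, 2, 3, 1};
    // findMax(v, 4) recurses down to findMax(v, 1) = 1, then unwinds:
    // n=2 compares v[1]=2 -> temp becomes 2; n=3 compares v[2]=3 -> temp becomes 3;
    // n=4 compares v[3]=1, which is smaller, so 3 is returned.
    System.out.println(findMax(v, 4));  // prints 3
}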
Manit0u
Profile Blog Joined August 2004
Poland17433 Posts
January 01 2017 15:49 GMT
#16447
On January 01 2017 22:22 Yurie wrote:
Thank you, both of you. I think I get it now. Handle n = 0 or 1, whichever is the base case. Then build the next step on a call with a smaller n as part of it. Finally check that this holds for larger n and that you did not miss a base case, and then the recursion is correct.

So for 4 it recurses down another step (n = 3) and finds 3, setting temp to that. Back at n = 4 it finds 1, which is smaller. Temp is still 3 and is returned, since there are no further recursions to do.


https://vimeo.com/24716767

This is a nice introduction to basic recursion.
Time is precious. Waste it wisely.
phar
Profile Joined August 2011
United States1080 Posts
January 01 2017 18:32 GMT
#16448
On December 29 2016 23:43 mantequilla wrote:

do you guys know a better way?

You could try to avoid using callbacks unless you're actually forced to (e.g. because you're using JavaScript). Check out Futures instead; they're arguably cleaner to use.
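For illustration, a minimal sketch of the future style in Java using CompletableFuture (the string and the length call are just stand-ins for some async work):

import java.util.concurrent.CompletableFuture;

public class FutureExample {
    public static void main(String[] args) {
        // Callback style would pass a lambda into the async call and nest further
        // work inside it; with a future, the async result is a value you compose.
        CompletableFuture<Integer> length =
                CompletableFuture.supplyAsync(() -> "hello")  // pretend this is slow I/O
                                 .thenApply(String::length);  // chained follow-up step
        System.out.println(length.join());                    // blocks here only; prints 5
    }
}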
Who after all is today speaking about the destruction of the Armenians?
Mr. Wiggles
Profile Blog Joined August 2010
Canada5894 Posts
January 01 2017 22:43 GMT
#16449
On January 01 2017 03:58 Targe wrote:
On December 31 2016 08:49 spinesheath wrote:
On December 31 2016 07:46 Targe wrote:
The problem, I believe, requires me to synchronise all threads on every iteration (with each loop, threads need the new values updated by the other threads or they won't be using the right values). For the shared-memory implementation I had a barrier at the end of every loop, after which the threads checked whether the required precision was reached.

I need to come up with a way of passing the updated information between the threads with as little overhead as possible

It sounds like you probably have some wiggle room.

I imagine this as a 2D Array, with 4 neighbours top/bottom/left/right. Naively you would probably split a 40x40 Array into 16 10x10 Arrays with an extra ring outside for synchronization, so 16 12x12 Arrays. But maybe you could also use 14x14 arrays and synchronize 2 rings every 2 iterations. Depending on the concrete environment that could improve performance.

I don't know if this is even remotely close to your actual problem, but I would suspect that there are some options whatever it is exactly. Look for less intuitive chunks to cut your problem into and compare them to what you have.


I don't know why I didn't think of splitting it into smaller arrays; I've been splitting the work for each thread by the first X elements in the array (where X is the total number of elements divided by the number of arrays). That's pretty eye-opening for possible solutions, thanks!

Trying that route, I would have to do some correctness testing to ensure that the correct results are achieved.

On December 31 2016 18:49 RoomOfMush wrote:
On December 31 2016 07:46 Targe wrote:
On December 31 2016 05:00 Mr. Wiggles wrote:
On December 31 2016 03:56 Targe wrote:
On December 31 2016 03:12 Mr. Wiggles wrote:
On December 31 2016 02:36 Targe wrote:
Is anyone familiar with MPI?

I'm trying to determine whether the communication overhead is greater than the overhead of retrieving data from another node in a massively parallel computer.


I've used MPI a little bit on relatively small (~8 machine) clusters as part of some course work.

Would you mind explaining your problem a little more? I'm not quite sure what you're asking from what you've written here.

I'm trying to decide how to approach a problem (I need to write a program that repeatedly replaces the values in an array with the value of its 4 neighbours, with the exception of boundary values, until the values of the array settle to within a given precision).

I think I have access to 4 nodes (16 cores per node) and need to come up with and test a solution to the above problem.

Previously I wrote a program as a solution to the same problem, but for an environment with shared memory rather than distributed memory.

e: the idea is for the program to scale as well as possible with the number of threads, so I'm interested in what has the most overhead


MPI itself doesn't have that much overhead, as the major implementations are mature and optimized. Most of your overhead is going to come from communication and synchronization costs in your program. This is all very workload dependent, so I can give my thoughts, but I can only really outline what to look at.

There's obviously going to be a cost if you have to transfer data back and forth between nodes, but depending on the length of the computation and how fast you can transfer data between nodes, this might be amortized. Similarly, any time your program has to perform some kind of synchronization/communication you're going to pay an overhead based on the communication latency between your nodes.

So, your observed speedups are going to depend a lot on how much synchronization and communication needs to occur between your nodes. If you can just partition your dataset between all the nodes and let them chug away, you're likely to see good speedups. If your nodes need to constantly communicate between each other, you might hit a bottleneck.

Depending on what synchronization primitives you're using in your shared-memory program, porting to MPI may be relatively straightforward. For example, if synchronization is barrier-based, pthread_barrier_wait() transfers directly to MPI_Barrier(). MPI provides some higher-level functions which can make porting a bit nicer, but is generally pretty low-level.

If you're just interested in making one problem instance run as fast as possible, looking at distributed memory frameworks makes sense. Depending on your workload, it might also make sense to just run four different instances on each available node. In this case, it depends on if you care about throughput or response time for a single problem instance.

All in all, I can only say that MPI doesn't have much inherent overhead, and that the choice to use it basically depends on your problem and what your workload looks like. If your problem exhibits coarse granularity and doesn't require much synchronization overhead, I'd say go for it and measure what speedups you see. If there's a large amount of synchronization required, then you might not see good speedups, even if you're giving additional resources to the program.


Sorry for being confusing; by overhead I meant the overhead of communication. Your post is still informative though.

The problem, I believe, requires me to synchronise all threads on every iteration (with each loop, threads need the new values updated by the other threads or they won't be using the right values). For the shared-memory implementation I had a barrier at the end of every loop, after which the threads checked whether the required precision was reached.

I need to come up with a way of passing the updated information between the threads with as little overhead as possible

I assume the size of your problem is sufficiently big to even think about spreading it over multiple processors. In that case I would suggest calculating the values that need to be shared first, then starting the communication while simultaneously working on the values that don't need to be shared. Ideally the messages from the other machines will arrive before you have finished all the local calculations, and your communication overhead becomes zero.


The size of the problem is up to me (but yes, I choose sizes large enough for splitting the work to be worth it).
I'll look into whether what you suggested is possible; my current understanding is that whilst sending/receiving a message a thread doesn't progress through its logic?

Also, as MPI requires both the sending thread and the receiving thread to call a send/receive function at send time, will I have to have some sort of master/slave system in place for synchronisation? But having 1 master for 10+ slaves seems like a major bottleneck, as it will have to wait for each thread?

Communications in MPI are done through what they call "communicators", which essentially define groups of processes which can communicate together.

If you wanted to do master/slave, you would just use MPI_COMM_WORLD (global communicator) to have communication which spans all MPI processes in the system. Alternatively, to perform communications between specific subsets of processes, you'd have to manually set up new communicators, either with MPI_Comm_split() or by manually creating groups and then calling MPI_Comm_create(). This might make sense depending on exactly how you split the work. For example, if you have several processes which need to communicate only amongst themselves when doing work, it probably makes sense to create a communicator for them.

Lastly, you can also do non-blocking communication in MPI if that makes more sense for your use case. This is done through the MPI_I* (MPI_Irecv(), MPI_Isend(), etc.) functions. These functions generally return a handle to a communication request which can be checked later with MPI_Wait()/MPI_Test() when you're ready to consume output.
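For reference, the sequential computation being parallelized here (reading the quoted problem as replacing each interior cell with the average of its four neighbours until the grid settles) would look roughly like this single-threaded sketch; the names and the averaging interpretation are my assumptions, not from the actual assignment:

// Jacobi-style relaxation on a square grid; boundary cells stay fixed.
// The copy-back at the end of each sweep is the point the per-iteration
// barrier protects in the shared-memory version.
static void relax(double[][] grid, double precision) {
    int n = grid.length;
    double[][] next = new double[n][n];
    double maxChange;
    do {
        maxChange = 0.0;
        for (int i = 1; i < n - 1; i++) {
            for (int j = 1; j < n - 1; j++) {
                next[i][j] = (grid[i - 1][j] + grid[i + 1][j]
                            + grid[i][j - 1] + grid[i][j + 1]) / 4.0;
                maxChange = Math.max(maxChange, Math.abs(next[i][j] - grid[i][j]));
            }
        }
        for (int i = 1; i < n - 1; i++) {
            System.arraycopy(next[i], 1, grid[i], 1, n - 2);
        }
    } while (maxChange > precision);
}

In the distributed version, the ghost-ring idea from earlier amounts to each node owning a block of this grid plus a one-cell border of neighbouring values that has to be refreshed (that is the communication) before each sweep.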
you gotta dance
Targe
Profile Blog Joined February 2012
United Kingdom14103 Posts
January 01 2017 23:43 GMT
#16450
On January 02 2017 07:43 Mr. Wiggles wrote:
Lastly, you can also do non-blocking communication in MPI if that makes more sense for your use case. This is done through the MPI_I* (MPI_Irecv(), MPI_Isend(), etc.) functions. These functions generally return a handle to a communication request which can be checked later with MPI_Wait()/MPI_Test() when you're ready to consume output.


I'm really unfamiliar with MPI; this will be my first program using it, and this is all super helpful info.
MPI_Isend() seems like it's what I want to use. I can't see whether it works like Java's thread start, where you initialize and then call start later, or whether you just call it and the receiving thread accepts the data when it's ready?
11/5/14 CATACLYSM | The South West's worst Falco main
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2017-01-02 02:00:11
January 02 2017 01:51 GMT
#16451
lol, almost done with my brute force TSP solver... gotta debug this


right now it's outputting a seemingly random solution every time
which looks really neat if I just sit and click it over and over, but isn't what I want haha


edit: well, this is odd. It's finding the correct number of pathways, (n-1)!, but it's not finding the optimal solution for some reason. That, or it's checking the same pathways twice, but that seems unlikely since it's finding the right number of pathways.

edit2: it's not storing the nodes correctly. Well, here I go into the deep dark tunnels of debugging this
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2017-01-02 02:27:21
January 02 2017 02:20 GMT
#16452
AHAHA yes
got it working!

This is probably absolutely nothing to some of you and I am sure my code isn't great (god it looks complicated) but for me this is the hardest thing I have done in programming, lol

It solves anything under 10 cities very fast. 10 cities takes like 10 seconds... well, maybe slightly faster.

anyways, very cool I am happy


Most important new thing I learned: using copy constructors on Java data structures when making copies. Super handy! And necessary to avoid concurrent modification when using recursion.
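For anyone curious, a toy sketch of that copy-constructor trick in the route-building part only (names made up, distances left out); copying the lists before recursing is what keeps the caller's loop from hitting a ConcurrentModificationException:

import java.util.ArrayList;
import java.util.List;

public class RoutePermutations {
    // Builds every ordering of the remaining cities. Each recursive call gets its own
    // copies of the lists, so nothing the callee does can disturb the caller's iteration.
    static void permute(List<Integer> route, List<Integer> remaining, List<List<Integer>> out) {
        if (remaining.isEmpty()) {
            out.add(new ArrayList<>(route));   // store a copy, not the still-mutable route
            return;
        }
        for (Integer city : remaining) {
            List<Integer> nextRoute = new ArrayList<>(route);          // copy constructor
            nextRoute.add(city);
            List<Integer> nextRemaining = new ArrayList<>(remaining);  // copy constructor
            nextRemaining.remove(city);
            permute(nextRoute, nextRemaining, out);
        }
    }

    public static void main(String[] args) {
        List<List<Integer>> out = new ArrayList<>();
        permute(new ArrayList<>(), new ArrayList<>(List.of(1, 2, 3)), out);
        System.out.println(out);   // all 3! = 6 orderings
    }
}

A real brute-force TSP run would fix the start city (hence the (n-1)! routes) and track the cheapest total distance instead of collecting every ordering.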
Silvanel
Profile Blog Joined March 2003
Poland4733 Posts
January 02 2017 15:20 GMT
#16453
What resources would you guys recommend to someone trying to learn C?
Pathetic Greta hater.
Manit0u
Profile Blog Joined August 2004
Poland17433 Posts
January 03 2017 01:03 GMT
#16454
On January 03 2017 00:20 Silvanel wrote:
What resources would you guys recommend to someone trying to learn C?


Get this book; it's by far the least daunting experience. It's also very good and will get you started really nicely: https://en.wikipedia.org/wiki/The_C_Programming_Language

You can complement it with this: https://learncodethehardway.org/c/

Unfortunately ZAS doesn't provide his content for free any more (you can still get it on EvilZone if that's your thing), and some people have criticized it, but I found his introduction to some of the other tools you should be using alongside C (Make and Valgrind primarily) very useful for someone not familiar with them.
Time is precious. Waste it wisely.
Mr. Wiggles
Profile Blog Joined August 2010
Canada5894 Posts
January 03 2017 04:32 GMT
#16455
On January 02 2017 08:43 Targe wrote:
I'm really unfamiliar with MPI; this will be my first program using it, and this is all super helpful info.
MPI_Isend() seems like it's what I want to use. I can't see whether it works like Java's thread start, where you initialize and then call start later, or whether you just call it and the receiving thread accepts the data when it's ready?

I'm not super familiar with Java, but I think it's closer to the second scenario. You would call MPI_Isend() on the sending node, and after that point, you can't touch the buffer containing the sent data until you've checked for completion with one of MPI_Wait()/MPI_Test() or their multi-handle equivalents. Since the send is non-blocking, the send buffer can't be touched until you've verified that the send has completed by calling one of those functions. Modifying the send buffer before the send completes would cause incorrect (undefined?) behaviour.

Similarly on the receiving end, you would call MPI_Irecv(), but have to query the handle to make sure that the receive has completed before being able to consume the output. In a master/slave program, you could have the master perform an MPI_Irecv() for each expected slave message, and then loop using MPI_Waitany() over the returned handles to consume input as it arrives. This might make sense if you expect differing send times for each slave, or if each slave is simply streaming data back to the master, and it makes more sense to consume this data as it arrives than to wait for all of it.
you gotta dance
Biolunar
Profile Joined February 2012
Germany224 Posts
January 03 2017 11:37 GMT
#16456
On January 03 2017 00:20 Silvanel wrote:
What resources would you guys recommend to someone trying to learn C?

This might be too hard if you have no programming experience, but if you do, use this:
http://icube-icps.unistra.fr/index.php/File:ModernC.pdf
This book focuses on how to avoid many of C’s pitfalls and teaches many good habits that are not possible in older versions of the language.
What is best? To crush the Zerg, see them driven before you, and hear the lamentations of the Protoss.
mantequilla
Profile Blog Joined June 2012
Turkey779 Posts
January 03 2017 14:16 GMT
#16457
On January 03 2017 20:37 Biolunar wrote:
On January 03 2017 00:20 Silvanel wrote:
What resources would you guys recommend to someone trying to learn C?

This might be too hard if you have no programming experience, but if you do, use this:
http://icube-icps.unistra.fr/index.php/File:ModernC.pdf
This book focuses on how to avoid many of C’s pitfalls and teaches many good habits that are not possible in older versions of the language.




Thanks for the cool book. Information about C and C++ is so overabundant that you can't find up-to-date, modern knowledge about the languages without running into outdated, complex content. Even basic things like strings have so much information clutter when it comes to C and C++.
Age of Mythology forever!
Manit0u
Profile Blog Joined August 2004
Poland17433 Posts
Last Edited: 2017-01-03 16:40:46
January 03 2017 16:40 GMT
#16458
Fuck material design and fuck DataTables. Fuck front-end. Fuck!
Time is precious. Waste it wisely.
phar
Profile Joined August 2011
United States1080 Posts
January 03 2017 17:15 GMT
#16459
On January 04 2017 01:40 Manit0u wrote:
Fuck material design and fuck DataTables. Fuck front-end. Fuck!

You sound like me haha.
Who after all is today speaking about the destruction of the Armenians?
tofucake
Profile Blog Joined October 2009
Hyrule19152 Posts
January 03 2017 17:27 GMT
#16460
Old DataTables was awful. New DataTables is pretty nice.
asante sana squash banana