Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace Facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
Blisse
Canada3710 Posts
May 14 2015 19:28 GMT
#12621
On May 14 2015 18:28 Khalum wrote: Nope, that's true for all odd levels. [edit] But you could start counting with 0...

FORGOT HOW MODULO WORKS
bangsholt
Denmark138 Posts
May 14 2015 19:30 GMT
#12622
On May 15 2015 01:34 Manit0u wrote: [spoilered code]

On May 15 2015 02:01 spinesheath wrote: [spoilered code]

FizzBuzz is not a very good choice to learn a new language with. Usually I build a couple of the simple Unix tools, then I build a very simple database, and then I go ahead and build something I've built in a different language to get an idea of why the new language is cool.

Secondly, both of you have just failed FizzBuzz - it's from 1 to 100, and both of you made loops that go from 0 to 99.

Bangsholt's FizzBuzz: [spoilered code]

Other things to do to learn a new language: find nice libraries that are written in a pragmatic style, which for Java would be the standard library, Google Guava and the Goldman Sachs Collections.
spinesheath
Germany8679 Posts
May 14 2015 20:35 GMT
#12623
On May 15 2015 04:30 bangsholt wrote: Secondly, both of you have just failed FizzBuzz - it's from 1 to 100, both of you made loops that go from 0 to 99.

I didn't write FizzBuzz, I refactored a piece of code that really could have been anything. No change to semantics during refactoring!
Manit0u
Poland17196 Posts
May 14 2015 21:49 GMT
#12624
On May 15 2015 04:30 bangsholt wrote: Secondly, both of you have just failed FizzBuzz - it's from 1 to 100, both of you made loops that go from 0 to 99.

Actually, my code counts from 1 to 100 (1 to Buzz really) - operator precedence FTW (took me a while to properly use pre- and post-increment operators). Was also thinking of extracting the getMessage method out of it (which is a good idea). I'm also a fan of the single-exit principle and try to adhere to it when possible (unless it would make the code too convoluted), so taking into account spinesheath's refactoring I'd do it like that: [spoilered code]
killa_robot
Canada1884 Posts
May 14 2015 22:40 GMT
#12625
On May 14 2015 20:14 Manit0u wrote:
On May 14 2015 05:49 killa_robot wrote:
On May 13 2015 19:47 Manit0u wrote: People who had Java during their CS courses, please tell me - did they teach you anything about stuff like Maven, Spring, Hibernate, JDBC? Because those are just a few of the tools you'll be using out in the real world, and knowing them well is pretty much a prerequisite for getting a job.
Nope. Used Hadoop, but apart from that didn't use any frameworks. You don't go to school to learn how to work though, you go to school to learn how to learn. One of the greatest disconnects we have right now is between schools and employers.
Well, if you want to be an academic then I believe studying maths or philosophy would be more beneficial. CS should be pretty much purely technical (technical in the "learn how it's done in the real world" sense) studies that prepare you for work in the industry. Also, quite a large number of developers aren't even schooled in it. Here are some interesting resources:
http://stackoverflow.com/research/developer-survey-2015#profile-education
http://blog.codinghorror.com/why-cant-programmers-program/
There was also a great paper showing that people without CS education tend to be better programmers most of the time.

From personal experience, most of the people I went to uni with didn't care for/understand programming. I went to college (for programming), and then to uni, so I was already familiar with programming. Intro level courses seem to be brutal for most just starting, so it puts them off (I was a lab assistant for a while; no one enjoyed the classes). In my uni career, I didn't meet anyone who seemed to enjoy programming, even in programming classes. In college I met a few, but many didn't seem to enjoy it either. It's just something you need the personality for, I suppose.

The issue is that "technical" for programming translates into learning the most current technologies/libraries, which isn't a suitable format for schools. Most of the technologies wouldn't take long to learn once on the job anyway. Never mind the fact that most employers don't even seem to know what the fuck they want. I have no idea why they insist on listing every fucking language they've ever heard of for jobs, and the diversity of libraries I see leads me to believe it'd be a pointless endeavour for schools to attempt to accommodate them all anyway.

Programming is one of the few talents that can easily be self-taught. You likely won't learn the behind-the-scenes theory of what you're doing, but it usually doesn't matter unless you're in a more sciencey position. Self-taught people tend to be more motivated to learn the subject, so it's no surprise they surpass those that go to school for it.

With all this said, I would never recommend anyone become a programmer anyway. Way too much supply for the demand right now, very few entry level positions from what I've seen, and it's one of the few jobs that actively works to make itself obsolete. It's fantastic as a secondary skill for another career though.
BlueRoyaL
United States2493 Posts
May 14 2015 22:57 GMT
#12626
Hey guys, I've begun attempting to learn unit testing and I have several questions. I consider myself a decent developer, although I've only been doing it for a little over a year part-time (I mainly do sysadmin stuff at my work). Regretfully, up to this point, the only testing I've been doing is integration testing.

I'm curious to know how many of you work within the TDD workflow, where you write tests first and then go on to write the code that passes them. I'm still in the infant stages of learning unit testing, but I've found that it's much, much easier to write the tests AFTER writing the production code.

At the moment, it's not difficult to write the tests first if what I'm testing is something very trivial, such as a method's return value or something of that nature. When it gets more complicated, where multiple classes are involved, or when I need to create mocks/stubs for external dependencies, I end up spending so much time trying to think of how the tests should be written out. Is it common to be spending a ton of time writing these tests for non-trivial code? For example, if you're testing the execution of code that spans multiple classes? I almost feel like writing the tests first for these kinds of cases forces you to design the whole system/architecture in the process. Maybe that's a good thing, I'm not sure.

Now granted, I'm just making up random "complex" scenarios and trying to code them using the test-first mantra. I guess in a more realistic scenario, where some planning and design would be involved beforehand, it would be easier to write the tests as you would already have conceptualized what you're supposed to test.

Lastly, does it take a long time to become very proficient in writing unit tests? Is it kind of like programming, where there's always something new to learn and get better at, or is it more like a small set of tools and mindsets that you need to grasp to become a "good" unit tester?
xboi209
United States1173 Posts
May 15 2015 00:41 GMT
#12627
Is anyone here with lots of spare time and who knows C++ willing to solve a memory leak problem? I wrote a description of the problem here: https://github.com/HarpyWar/pvpgn/issues/152#issuecomment-100534880
berated-
United States1134 Posts
May 15 2015 01:24 GMT
#12628
On May 15 2015 07:57 BlueRoyaL wrote: Hey guys, I've begun attempting to learn unit testing and I have several questions. I consider myself a decent developer, although I've only been doing it for a little over a year part-time (I mainly do sysadmin stuff at my work). Regretfully, up to this point, the only testing I've been doing is integration testing. I'm curious to know how many of you work within the TDD workflow, where you write tests first and then go on to write the code that passes them. I'm still in the infant stages of learning unit testing, but I've found that it's much, much easier to write the tests AFTER writing the production code. At the moment, it's not difficult to write the tests first if what I'm testing is something very trivial such as a method's return value or something of that nature.

Pure TDD is quite extreme and I believe it to be a waste of time. It's important to get test coverage and to figure out how to learn to write tests, but following TDD to the letter of the law has felt a bit extreme. However, I also believe that you don't know how far to go until you've taken it too far. I know that I didn't... I took it too far: I went for 100% code coverage on a small project. I think that's too much now. If you can hit 85%+ code coverage and make sure that the tests get written either before or after, I think it's okay. You'll need to learn how to make testing work for you so that you write better code that shows your intent, but also provides value.

On May 15 2015 07:57 BlueRoyaL wrote: When it gets more complicated, where multiple classes are involved, or when I need to create mocks/stubs for external dependencies, I end up spending so much time trying to think of how the tests should be written out. Is it common to be spending a ton of time writing these tests for non-trivial code? For example, if you're testing the execution of code that spans multiple classes? I almost feel like writing the tests first for these kinds of cases forces you to design the whole system/architecture in the process. Maybe that's a good thing, I'm not sure.

Unit tests should cover a single unit. In Java this should _generally_ mean one class. If it's more than that, it should probably be worked into an integration test. This is really hard to pull off. I think the thing that wasn't obvious to me before writing tests is that I hadn't spent time thinking about what it means to write a good interface. What does it mean to expose only the API that you want to expose? What even should my API be? For the first time I became both the author and the consumer of my own API. It really shows things in a new light that wasn't there before testing. I still (shamefully) haven't read the Gang of Four book. However, design patterns started making sense. Why might you want a factory? Why might you want a builder? It's so you can change out only the things that you want to change. When you write your tests it forces you to think about these things. You should have been thinking about them all along.

On May 15 2015 07:57 BlueRoyaL wrote: Now granted, I'm just making up random "complex" scenarios and trying to code it using the test-first mantra. I guess in a more realistic scenario, where some planning and design would be involved beforehand, it would make writing the tests easier as you would already have conceptualized what you're supposed to test. Lastly, does it take a long time to become very proficient in writing unit tests? Is it kind of like programming where there's always something new to learn and get better at, or is it more like a small set of tools and mindsets that you need to grasp to become a "good" unit tester?

It took me a few years to figure out how to become more efficient by writing tests. I think the journey made me a _much_ better developer than I was before I started. As a general rule I've found that I write about 66% more code than I would have written without tests. However, I feel that I'm finally faster now that I've learned how to write tests, because I don't waste time deploying applications (in our case to Tomcat) and running through an entire flow multiple times to make sure that I get a single step right. If you have the time to take the journey, I think you should keep with it.

If I were to offer a few words of advice: Write tests at the layer that they are meant to be used. If you are writing a library that is meant to be used from within other objects, then test it as such. If you are writing a RESTful service, I probably wouldn't unit test that - it's meant to be used from an HTTP call, not a unit test.

Mocks feel awesome at first, but I've iterated to using them quite rarely. If you have an actual integration point that you want to mock, then you probably should still mock it. Otherwise, mocking leads to what I've found to be a completely useless testing strategy: your test case is a reverse implementation of your actual implementation. This is an extreme waste of time because you've written a very brittle test. If you have to change the test every time you change every little detail about your implementation, then you might be testing at the wrong level. You may not have exposed the right interface.

I probably skipped over a lot, but if you ever have any questions or want more specifics, don't hesitate to ask.
netherh
United Kingdom333 Posts
May 15 2015 05:31 GMT
#12629
On May 15 2015 09:41 xboi209 wrote: Is anyone here with lots of spare time and who knows C++ willing to solve a memory leak problem? I wrote a description of the problem here: https://github.com/HarpyWar/pvpgn/issues/152#issuecomment-100534880

Seeing code like that called C++ makes me cry.

Anyway, if my googling is correct, it looks like xstrdup creates a copy of a C string using xmalloc. So to fix the leak, you probably want to call xfree on it after you're done with it. I have no idea what's going on with casting a char* to an unsigned int* and dereferencing... That shit be cray.
_fool
Netherlands673 Posts
May 15 2015 07:27 GMT
#12630
On May 15 2015 10:24 berated- wrote: [full post #12628 quoted]

Good post. I agree that pure TDD sometimes feels like a burden rather than a blessing. It does force you to think through your implementation on an abstract level, which is a good thing. But you would get the same result with a more mixed approach (write a test, write some implementation, write another test, etc.). Writing tests after the implementation isn't bad in my perspective, but I do feel it is important to have each test fail at first (for instance, comment out some implementation code, see your test fail, then put your code back in and check that the test succeeds). That way you know for sure that you're actually testing something useful.

As for "the unit test exactly mirroring the implementation": in bigger projects that sometimes is what you want. Not for today, since it feels trivial indeed, but for - say - 6 months further along the line, when a team member refactors your code. Her changes will break the tests, and force her to be very aware of the changes she made. Breaking tests isn't bad per se. It's only bad if behaviour was supposed to remain unchanged.

Oddly enough I write unit tests for most pet projects, but at my job I sometimes tend to forget. I guess the pet projects have more clearly defined behaviour, so it's easier to think up useful test scenarios.
berated-
United States1134 Posts
May 15 2015 09:32 GMT
#12631
On May 15 2015 16:27 _fool wrote: As for "the unit test exactly mirroring the implementation": in bigger projects that sometimes is what you want. Not for today, since it feels trivial indeed, but for - say - 6 months further along the line, when a team member refactors your code. Her changes will break the tests, and force her to be very aware of the changes she made. Breaking tests isn't bad per se. It's only bad if behaviour was supposed to remain unchanged.

Sometimes, maybe. I think you should still be careful when you realize you are doing this type of testing. I think once you really get the hang of testing you know when it's okay and when it's the wrong decision. I have the privilege (burden) of teaching a lot of our younger guys how to unit test, and I see this actually happen quite a bit. Hopefully for you, and the guy learning testing now, it's just obvious what the right thing is more quickly and you never dealt with this.

For example: a service has a couple of dependencies, and the test case looks like: mock dependency A, add `when` statements for the method calls on dependency A, mock dependency B, add `when` statements for dependency B, and so on and so forth. If done incorrectly you've completely destroyed the entire reason behind adding that abstraction layer in the first place. The test is testing the exact implementation of your class, not the intent of the class. In a super simple case where you are testing add(3,4), I would rather see an assert that it is equal to 7, not an assert that it is equal to (3+4).
xboi209
United States1173 Posts
May 15 2015 14:42 GMT
#12632
On May 15 2015 14:31 netherh wrote: Seeing code like that called C++ makes me cry. Anyway, if my googling is correct, it looks like xstrdup creates a copy of a C string using xmalloc. So to fix the leak, you probably want to call xfree on it after you're done with it. I have no idea what's going on with casting a char* to an unsigned int* and dereferencing... That shit be cray.

Sorry, I meant C++ is allowed; the whole project is/was in the middle of moving towards C++.
spinesheath
Germany8679 Posts
May 15 2015 16:37 GMT
#12633
On May 15 2015 06:49 Manit0u wrote: Actually, my code counts from 1 to 100 (1 to Buzz really) - operator precedence FTW (took me a while to properly use pre- and post-increment operators). Was also thinking of extracting the getMessage method out of it (which is a good idea). I'm also a fan of single-exit principle and try to adhere to it when possible (unless it would make the code too convoluted), so taking into account spinesheath's refactoring I'd do it like that:

Oh wow, I DID change the semantics. That's exactly why I recommend you split the increment and the compare instructions. If you have to think about operator precedence, it's too complicated. Actually, I recommend avoiding the increment operators in all cases except the for statement. I also recommend using the for statement only in its idiomatic form, which is counting from a to b in steps of size 1. I admit that code like while(i++ < 100) has a draw to it, but at the end of the day you or I or someone else is going to mess up like I did.
Shield
Bulgaria4824 Posts
May 15 2015 18:24 GMT
#12634
So when I program in C++, I usually make my member variables on the stack. Is there ever a reason to prefer heap? I hardly find any necessary place for heap objects since the usual advice is to allocate on the stack. Heap isn't too much of a problem since I can either deallocate from the destructor or use a smart pointer, but I'm just wondering.

Edit: I've also read that the potential for stack overflow rises with that, but how can you tell if you overdo it...? It's also suggested to prefer heap for large objects, but how do you define 'large'?
Ropid
Germany3557 Posts
May 15 2015 18:30 GMT
#12635
What do you mean with "member variables" and the stack? Did you want to say "local variables"?
Nesserev
Belgium2760 Posts
May 15 2015 18:31 GMT
#12636
On May 16 2015 03:24 darkness wrote: So when I program in C++, I usually make my member variables on the stack. Is there ever a reason to prefer heap? I hardly find any necessary place for heap objects since the usual advice is to allocate on the stack. Heap isn't too much of a problem since I can either deallocate from destructor or use a smart-pointer but I'm just wondering.

The only time when you really should use the heap instead of the stack is when you're dealing with (potentially) very large objects. Because the size of an object on the stack is known at compilation time, and so is where a certain member is stored relative to said object, accessing a member stored on the stack is faster; but in general, you shouldn't really keep these things in mind when coding, except for the case mentioned above.
Shield
Bulgaria4824 Posts
May 15 2015 18:33 GMT
#12637
On May 16 2015 03:30 Ropid wrote: What do you mean with "member variables" and the stack? Did you want to say "local variables"?

And then initialise them without using 'new' at all.

On May 16 2015 03:31 Nesserev wrote: The only time when you really should use the heap instead of the stack, is when you're dealing with (potentially) very large objects. Because the size of an object on the stack is known at compilation time, and where a certain member is relatively stored in said object, accessing a member stored on the stack is faster; but in general, you shouldn't really keep these things in mind when coding, except for the case mentioned above.

Alright, thanks! That's what I've read as well, if I ever get large objects. However, 'large' seems a bit loose / not well-defined.
Ropid
Germany3557 Posts
May 15 2015 18:44 GMT
#12638
Btw., about what's large or not: the stack size limit for a program is 8192 kB (8 MB) by default over here for me, or I'm understanding this here wrong: $ ulimit -s

EDIT: That's just the default soft limit, and programs can change it by themselves if they want to. The hard limit for a normal user account on my machine here is unlimited: $ ulimit -H -s

No idea how normal this is nowadays on Unix-like stuff.
RoyGBiv_13
United States1275 Posts
May 15 2015 19:00 GMT
#12639
On May 16 2015 03:31 Nesserev wrote:
On May 16 2015 03:24 darkness wrote:
So when I program in C++, I usually make my member variables on the stack. Is there ever a reason to prefer heap? I hardly find any necessary place for heap objects since the usual advice is to allocate on the stack. Heap isn't too much of a problem since I can either deallocate from destructor or use a smart-pointer but I'm just wondering.

The only time when you really should use the heap instead of the stack, is when you're dealing with (potentially) very large objects. Because the size of an object on the stack is known at compilation time, and where a certain member is relatively stored in said object, accessing a member stored on the stack is faster; but in general, you shouldn't really keep these things in mind when coding, except for the case mentioned above.

*cringe*

Accessing the stack versus the heap has the same latency: both are in RAM, and usually both are in the same RAM. In the trivial case of a mostly empty heap, malloc is very fast. Not quite as fast as initiating a stack frame, but fast nonetheless.

There isn't a singular set of rules that governs where to put objects in memory. Below are some pointers:

Keeping objects or buffers on the stack is dangerous if you accidentally pass a pointer to the buffer somewhere in the program that uses it after the stack frame containing the buffer is gone. This happens often when data structures store pointers to things. Big or small, things in data structures should be allocated on the heap, not the stack.

If the usefulness of a buffer or object is limited in scope, then it's safe to use the stack.

Overusing either the stack or the heap is dangerous. Overusing the stack will segfault; overusing malloc will return a useful error code that can be recovered from.

Constantly allocating and freeing memory on the heap will cause heap fragmentation, but constantly pushing and popping buffers on the stack has no lasting effect (unless you have a buffer overflow).

Dynamically sized buffers should go on the heap; putting them on the stack may allow for potential stack overflow exploits.

Many programmers will avoid malloc and free in their programs as a way to ensure there are no memory leaks. This is a valid way to program safely. Many library functions use the heap (printf, for example), and if you accidentally fill up the heap, your program may fail in unexpected ways.

Another way to program safely is to use a tool that performs run-time memory checking to ensure you don't have memory leaks. Becoming familiar with these tools (see valgrind) will pay dividends later by catching bugs before they happen. | ||
Shield
Bulgaria4824 Posts
May 15 2015 19:09 GMT
#12640
On May 16 2015 04:00 RoyGBiv_13 wrote:
+ Show Spoiler +

Well, at least we know allocation on the stack is faster.

Quick example from stackoverflow (link):
Output:
stack allocation took 0 clock ticks
heap allocation took 78 clock ticks | ||