|
Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
|
On May 16 2015 04:00 RoyGBiv_13 wrote:Show nested quote +On May 16 2015 03:31 Nesserev wrote:On May 16 2015 03:24 darkness wrote: So when I program in C++, I usually make my member variables on the stack. Is there ever a reason to prefer heap? I hardly find any necessary place for heap objects since the usual advice is to allocate on the stack.
Heap isn't too much of a problem since I can either deallocate from the destructor or use a smart pointer, but I'm just wondering. The only time you really should use the heap instead of the stack is when you're dealing with (potentially) very large objects. Because the size of an object on the stack is known at compile time, and so is where a certain member is stored relative to said object, accessing a member stored on the stack is faster; but in general, you shouldn't really keep these things in mind when coding, except for the case mentioned above. *cringe* Accessing the stack versus the heap has the same latency; both are in RAM, and usually both are in the same RAM. In the trivial case of a mostly empty heap, malloc is very fast. Not quite as fast as initiating a stack frame, but fast nonetheless. There isn't a singular set of rules that governs where to put objects in memory. Below are some pointers:
- Keeping objects or buffers on the stack is dangerous if you accidentally pass a pointer to the buffer to somewhere in the program that uses it after the stack frame containing the buffer is gone. This happens often with data structures that store pointers to things. Big or small, things referenced from data structures should be allocated on the heap, not the stack.
- If the usefulness of a buffer or object is limited in scope, then it's safe to use the stack.
- Overusing either the stack or the heap is dangerous. Overusing the stack will segfault. Overusing malloc will return a useful error code that can be recovered from.
- Constantly allocating and freeing memory on the heap will cause heap fragmentation, but constantly pushing and popping buffers on the stack has no lasting effect (unless you have a buffer overflow).
- Dynamically sized buffers should go on the heap; putting them on the stack may allow for potential stack overflow exploits.
- Many programmers avoid malloc and free in their programs entirely as a way to ensure there are no memory leaks.
This is a valid way to program safely. Many library functions use the heap (printf, for example), and if you accidentally fill up the heap, your program may fail in unexpected ways. Another way to program safely is to use a tool that performs run-time memory checking to ensure you don't have memory leaks. Becoming familiar with these tools (see valgrind) will pay dividends later by catching bugs before they happen.
For my response to the stack overflow claim: malloc is indeed a function call with its own stack frame that it has to set up. It will also have a few more calls underneath it (eventually to sbrk, which does the actual work) and it has to mark in a data structure what memory is being used, all of which takes time. It should be no surprise that it isn't free, but it is still very fast (measured in instructions) and O(1). If you are doing ten million allocations in a program, you probably don't want to use the heap anyway because of the heap fragmentation issue. + Show Spoiler +I did mention that it wasn't *quite* as fast, though
Additionally, while the stack and the heap can be used for dynamic data allocation, there are several other statically allocated sections which can be used if you know the size of your object at compile time.
global initialized variables are placed in .data
uninitialized variables go in .bss
small uninitialized variables go in .sbss (if available)
constant initialized variables go in .rodata
foo.cpp

#include <cstdlib> // for malloc

class foo {
public:
    int a;
    int b;
};

char hello[] = "hello world!\n";      // this goes in the .data section
foo bar;                              // this POD goes in the global .bss section
const unsigned int key = 0xFFAB3411;  // this goes in the .rodata section
int biz;                              // this goes in .sbss

int main() {
    int c = 4;                        // this goes on the stack
    char* d = (char*)malloc(10);      // this is taken from the heap
}
Additionally, many operating systems let you create your own dynamically allocated regions of memory by allocating free pages; see get_free_page() in the Linux kernel. This can, for example, be used to build your own implementation of a heap or stack.
|
Hi everyone. I recently started teaching myself to code and got the book The C Programming Language, second edition. I am having a huge problem with chapter 1.5 where it has character input and output. It keeps giving me code on how to count chars and it says it returns them as output. But every time I run this code nothing actually comes out. Nowhere do they tell me to add input, and I have no clue how to add input. I am just so god damn confused.
They have the code

#include <stdio.h>

int main() {
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}

I get that this code is supposed to give me an output, I just have no clue where I put the input. I get how the code works, just not how to make it work. Can anyone help? I am really confused
|
On May 16 2015 04:25 Deathmanbob wrote: Hi everyone. I recently started teaching myself to code and got the book The C Programming Language, second edition. I am having a huge problem with chapter 1.5 where it has character input and output. It keeps giving me code on how to count chars and it says it returns them as output. But every time I run this code nothing actually comes out. Nowhere do they tell me to add input, and I have no clue how to add input. I am just so god damn confused.
They have the code

#include <stdio.h>

int main() {
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}

I get that this code is supposed to give me an output, I just have no clue where I put the input. I get how the code works, just not how to make it work. Can anyone help? I am really confused
After you compile the code, you'll have a program (a.out, for example). If you run the program from the terminal, it will look something like
$ ./a.out
After this will be a blank line awaiting your input. Start typing and see what happens.
If you're using an IDE like Visual Studio, then when you run the program, it should open up a blank terminal. Type in there.
|
On May 16 2015 05:20 Nesserev wrote:Show nested quote +On May 16 2015 03:33 darkness wrote:Alright, thanks! That's what I've read as well, if I ever get large objects. However, 'large' seems a bit loose / not well-defined.  Well, in practice, you almost never directly push a large block of data onto the stack. Instead, you will generally push a (relatively) small object that internally manages larger objects through pointers (well, it should), and thus said object will be responsible for putting them on the heap... and it probably will. It's actually very difficult, or at least very rare, to push a large object onto the stack directly. I'm pretty sure that the only (potentially) large object you can push onto the stack is an array. So, someone might have the genuinely awful idea that the following is fine: int mArray[1028][2048];
Show nested quote +On May 16 2015 04:00 RoyGBiv_13 wrote:On May 16 2015 03:31 Nesserev wrote:On May 16 2015 03:24 darkness wrote: So when I program in C++, I usually make my member variables on the stack. Is there ever a reason to prefer heap? I hardly find any necessary place for heap objects since the usual advice is to allocate on the stack.
Heap isn't too much of a problem since I can either deallocate from the destructor or use a smart pointer, but I'm just wondering. The only time you really should use the heap instead of the stack is when you're dealing with (potentially) very large objects. Because the size of an object on the stack is known at compile time, and so is where a certain member is stored relative to said object, accessing a member stored on the stack is faster; but in general, you shouldn't really keep these things in mind when coding, except for the case mentioned above. *cringe* Accessing the stack versus the heap has the same latency; both are in RAM, and usually both are in the same RAM. In the trivial case of a mostly empty heap, malloc is very fast. Not quite as fast as initiating a stack frame, but fast nonetheless. *cringe*  Well, I know what you're getting at. Both the program's stack and heap are stored in main memory, so if you were to look at loading something into a register, it'll be the same regardless. There is no magical fairy dust that gives the part where the stack happens to be stored 10x faster access, but I also didn't say that. (Maybe I did say that, but that's not what I meant... *(ab)uses 'English is not my first language'-card*) What I was getting at was that, when a program is executed, fewer instructions are needed to "use" the object on the stack. You always need at least that one extra instruction for the object stored on the heap, because you're using an 'indirect address'. Or am I wrong about this...
There is no indirect access (dereferencing multiple pointers). A variable stored on the stack or in the heap is just a place in memory accessed via a load instruction. malloc returns the address of the location in the heap that has been allocated; similarly, pushing a variable onto the stack just makes room in the stack.
This did remind me that functions with locally used variables declared on the stack may have them placed in registers instead of memory, which would indeed be a great speed benefit when accessing things stored on the stack. This is much less true for x86 with its tiny register set.
|
On May 15 2015 16:27 _fool wrote:Show nested quote +On May 15 2015 10:24 berated- wrote:On May 15 2015 07:57 BlueRoyaL wrote:Hey guys, I've begun attempting to learn unit testing and I have several questions. I consider myself a decent developer, although I've only been doing it for a little over a year part-time (I mainly do sysadmin stuff at my work). Regretfully, up to this point, the only testing I've been doing is integration testing. I'm curious to know how many of you work within the TDD workflow, where you write tests first and then go on to write the code that passes them. I'm still in the infant stages of learning unit testing, but I've found that it's much, much easier to write the tests AFTER writing the production code. At the moment, it's not difficult to write the tests first if what I'm testing is something very trivial such as a method's return value or something of that nature. Pure TDD is quite extreme and I believe it to be a waste of time. It's important to get test coverage and to figure out how to write tests, but following TDD to the letter of the law has felt a bit extreme. However, I also believe that you don't know how far to go until you've taken it too far. I know that I didn't... I took it too far; I went for 100% code coverage on a small project. I think that it's too much now. If you can hit 85%+ code coverage and make sure that the tests get written either before or after, I think it's okay. You'll need to learn how to make testing work for you so that you write better code that shows your intent, but also provides value. When it gets more complicated, where multiple classes are involved, or when I need to create mocks/stubs for external dependencies, I end up spending so much time trying to think of how the tests should be written out.
Is it common to be spending a ton of time writing these tests for non-trivial code? For example, if you're testing the execution of code that spans multiple classes?
I almost feel like writing the tests first for these kinds of cases forces you to design the whole system/architecture in the process. Maybe that's a good thing, I'm not sure.
Unit tests should cover a single unit. In Java this should _generally_ mean one class. If it's more than that, it should probably be worked into an integration test. This is really hard to pull off. I think the thing that wasn't obvious to me before writing tests is that I didn't spend time thinking about what it means to write a good interface. What does it mean to expose only the API that you want to expose? What should my API even be? For the first time I became both the author and the consumer of my own API. It really shows things in a new light that wasn't there before testing. I still (shamefully) haven't read the Gang of Four book. However, design patterns started making sense. Why might you want a factory? Why might you want a builder? It's so you can change out only the things that you want to change. When you write your tests it forces you to think about these things. You should have been thinking about them all along. Now granted, I'm just making up random "complex" scenarios and trying to code them using the test-first mantra. I guess in a more realistic scenario, where some planning and design would be involved beforehand, writing the tests would be easier as you would already have conceptualized what you're supposed to test.
Lastly, does it take a long time to become very proficient in writing unit tests? Is it kind of like programming where there's always something new to learn and get better at, or is it more like a small set of tools and mindsets that you need to grasp to become a "good" unit tester?
It took me a few years to figure out how to become more efficient by writing tests. I think the journey made me a _much_ better developer than I was before I started. As a general rule I've found that I write about 66% more code than I would have written without tests. However, I feel that I'm finally faster now that I've learned how to write tests, because I don't waste time deploying applications (in our case to Tomcat) and running through an entire flow multiple times to make sure that I get a single step right. If you have the time to take the journey, I think you should stick with it. If I were to offer a few words of advice... Write tests at the layer that they are meant to be used. If you are writing a library that is meant to be used from within other objects, then test it as such. If you are writing a RESTful service, I probably wouldn't unit test that; it's meant to be used from an HTTP call, not a unit test. Mocks feel awesome at first, but I've iterated to using them quite rarely. If you have an actual integration point that you want to mock, then you probably should still mock it. Overuse of mocking leads to what I've found to be a completely useless testing strategy: your test case is a reverse implementation of your actual implementation. This is an extreme waste of time because you've written a very brittle test. If you have to change the test every time you change every little detail about your implementation, then you might be testing at the wrong level. You may not have exposed the right interface. I probably skipped over a lot, but if you ever have any questions, or more specific ones, don't hesitate to ask. Good post. I agree that pure TDD sometimes feels like a burden rather than a blessing. It does force you to think through your implementation on an abstract level, which is a good thing. But you would get the same result with a more mixed approach (write a test, write some impl, write another test, etc).
Writing tests after the implementation isn't bad in my perspective, but I do feel it is important to have each test fail at first (for instance, comment out some implementation code and see your test fail; then put your code back in and check that the test succeeds). That way you know for sure that you're actually testing something useful. As for "the unit test exactly mirroring the implementation": in bigger projects that sometimes is what you want. Not for today, since it feels trivial indeed. But for, say, 6 months further down the line, when a team member refactors your code. Her changes will break the tests, and force her to be very aware of the changes she made. Breaking tests isn't bad per se. It's only bad if behaviour was supposed to remain unchanged. Oddly enough I write unit tests for most pet projects, but at my job I sometimes tend to forget. I guess the pet projects have a more clearly defined behaviour, so it's easier to think up useful test scenarios. If by pure TDD you also mean testing the data structures, I agree; if not, I don't see a reason to EVER not do pure TDD
For me, following the three rules of TDD puts you into another state of mind, and it guarantees that: 1. the tests won't pass when they shouldn't; 2. you write only the bare minimum of test code needed to make a test fail; 3. you write only the minimum amount of production code needed to make it pass.
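A minimal red-green sketch of that cycle in plain Java (no test framework, and the Invoice/gross names are made up for illustration):

```java
// Hypothetical domain class: in strict TDD, gross() would only have been
// written after the test below existed and failed.
class Invoice {
    private final double net;
    private final double vatRate;

    Invoice(double net, double vatRate) {
        this.net = net;
        this.vatRate = vatRate;
    }

    double gross() {
        return net * (1 + vatRate);
    }
}

public class InvoiceTest {
    public static void main(String[] args) {
        // Rule 2: this assertion is the bare minimum test; it fails (red)
        // until gross() is implemented (rule 3: minimal code to go green).
        Invoice inv = new Invoice(100.0, 0.23);
        if (Math.abs(inv.gross() - 123.0) > 1e-9)
            throw new AssertionError("gross() should be 123.0");
        System.out.println("ok");
    }
}
```

In a real project you'd use JUnit rather than a main method, but the loop is the same: failing test first, then just enough production code to pass.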
|
On May 16 2015 07:32 RoyGBiv_13 wrote: [...] x86 with its tiny register set. x86_64 introduced eight additional general-purpose registers, so it's a decent amount of registers now.
|
On May 16 2015 04:00 RoyGBiv_13 wrote: Accessing the stack versus the heap has the same latency; both are in RAM, and usually both are in the same RAM. In the trivial case of a mostly empty heap, malloc is very fast. Not quite as fast as initiating a stack frame, but fast nonetheless.
Same latency to pull over the memory bus but which is more likely to result in cache hits? Data on the stack by its very nature has the best locality. Obviously a vector<> has an inherent heap allocation and extra level of indirection but you're doubling down with a *vector<>, for example.
On May 16 2015 04:00 RoyGBiv_13 wrote: Keeping objects or buffers on the stack is dangerous if you accidentally pass the pointer to the buffer to somewhere in the program that uses it after the stack frame containing the buffer is lost. This can happen often if you have data structures where you store pointers to things. Big or small, things in data structures should be allocated on the heap, not the stack.
I think your comment on 'danger' is an indictment of program and system design rather than of keeping a buffer on the stack. If you design systems that store pointers without being mindful of their scope, then you'll likely need to heap-allocate your objects Java/C# style, count references, and collect your garbage. C++ is more flexible than this. Object lifetimes can be completely hierarchical and thus their data validity utterly predictable. You have the option to do away with the concept of 'garbage' because fundamentally...
On May 16 2015 04:00 RoyGBiv_13 wrote: If the usefulness of a buffer or object is limited in scope, then it's safe to use the stack.
...all objects are "limited in scope". A pointer to an object on the stack in main() will be valid for essentially your entire program. Use references instead of pointers (in addition to keeping things on the stack) and you have to go out of your way to end up addressing something that has gone out of scope.
|
On May 16 2015 07:32 RoyGBiv_13 wrote: There is no indirect access (dereferencing multiple pointers). A variable stored on the stack or in the heap are just places in memory accessed via a load instruction. malloc returns the address of the location in the heap that has been allocated, similarly, pushing a variable on the stack pointer just makes room in the stack.
Malloc returns an address that must be stored somewhere (usually in a stack variable). To retrieve the data at that address you must first retrieve the address. The compiler knows at which offset from the stack base pointer your stack objects live. It doesn't and can't know where your heap objects are, only where to find their address. That's the indirect access.
|
On May 16 2015 07:51 Mstring wrote:Show nested quote +On May 16 2015 07:32 RoyGBiv_13 wrote: There is no indirect access (dereferencing multiple pointers). A variable stored on the stack or in the heap are just places in memory accessed via a load instruction. malloc returns the address of the location in the heap that has been allocated, similarly, pushing a variable on the stack pointer just makes room in the stack.
Malloc returns an address that must be stored somewhere (usually a stack variable). To retrieve the data from that address you must first retrieve the address. The compiler knows at which stack base pointer offset your stack objects exist at. It doesn't and can't know where your heap objects are, only where to find their address. That's the indirect access. That's one added step of preparation to get the address into a register. After that, the following code should look similar for both.
|
On May 16 2015 07:59 Ropid wrote:Show nested quote +On May 16 2015 07:51 Mstring wrote:On May 16 2015 07:32 RoyGBiv_13 wrote: There is no indirect access (dereferencing multiple pointers). A variable stored on the stack or in the heap are just places in memory accessed via a load instruction. malloc returns the address of the location in the heap that has been allocated, similarly, pushing a variable on the stack pointer just makes room in the stack.
Malloc returns an address that must be stored somewhere (usually a stack variable). To retrieve the data from that address you must first retrieve the address. The compiler knows at which stack base pointer offset your stack objects exist at. It doesn't and can't know where your heap objects are, only where to find their address. That's the indirect access. That's one added step of preparation to get the address into a register. After that, the following code should look similar for both. The original comment stated that "you always need at least that one extra instruction". It seems you agree =)
|
Since I want to learn some Java and I'm also starting my own one-man company, I thought I'd use this opportunity to write a system in Spring for myself to manage invoices and stuff. Sounds like a project that'll teach me the basics of creating enterprise applications in Java and at the same time won't be something I do just for learning (more motivation to actually complete it). Do you think it's a good idea?
|
edit: drunk posting = bad idea.
|
On May 17 2015 00:14 Manit0u wrote: Since I want to learn some Java and I'm also starting my own one-man company, I thought I'd use this opportunity to write a system in Spring for myself to manage invoices and stuff. Sounds like a project that'll teach me the basics of creating enterprise applications in Java and at the same time won't be something I do just for learning (more motivation to actually complete it). Do you think it's a good idea?
For learning purposes, it's as good as any project.
A note on the business side of things though... Spend time on your core competences, buy the rest.
|
On May 17 2015 09:51 Khalum wrote: edit: drunk posting = bad idea. Haha, your original post gave me a good laugh :D.
|
So do any of you still play StarCraft 2? If yes, please PM me so I can have fellow programmers on my friends list.
|
On May 18 2015 09:16 darkness wrote:So do any of you still play StarCraft 2? If yes, please PM me so I can have fellow programmers on my friends list. 
No, but I guess some of us are still playing BroodWar and other good games
|
If anybody wants to make a few dollars (PayPal?), I have a Java project due tomorrow. I've put it off for so long and am definitely not good enough at this to get it done tonight.
hit me up
User was warned for this post
|
On May 18 2015 09:39 Manit0u wrote:Show nested quote +On May 18 2015 09:16 darkness wrote:So do any of you still play StarCraft 2? If yes, please PM me so I can have fellow programmers on my friends list.  No, but I guess some of us are still playing BroodWar and other good games  Implying that StarCraft 2 isn't a good game? + Show Spoiler +
Anyway, I do still play StarCraft 2 sometimes.
|
Just finished the first week of my internship and holy fuck, I've never experienced something this mentally draining in my life.
I feel like just going to sleep after coming back home and doing nothing else. Anybody else experienced this?
|