|
Thread Rules 1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution. 2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20) 3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible. 4. Use [code] tags to format code blocks. |
On September 18 2013 13:22 Manit0u wrote:Show nested quote +On September 17 2013 15:08 rasnj wrote:On September 17 2013 13:34 Amnesty wrote:Anyone know of a way to put something into a input stream. So there is a default value there. So instead of cout >> "Input a number :"; int x; cin << x;
printing Input a number : it would print Input a number :150_<cursor right here> For example. I do not believe there is a standard C++ way of doing this. You can inject data into cin as follows: std::cin.putback('1'); std::cin.putback('5'); std::cin.putback('0'); But it will not show up on the console and you cannot erase those characters. Generally standard C++ is very limited in what you can do at the console. If you are just using the console because you are still learning and it is more convenient than coding a GUI interface, then I would just settle for something simpler like: Input a number (default: 150): _ If you actually want to do it properly you will need to do something platform dependent or use a third-party library that abstracts away the platform dependence. On Win32 you could do (this has no error-handling and can only handle digits, but the idea can easily be generalized if you wish): #include <iostream> #include <windows.h>
void write_digit_to_stdin(char digit) { // Calculate virtual-key code. // Look up on MSDN for codes for other keys than the digits. int virtual_key_code = static_cast<int>(digit) + 0x30 - static_cast<int>('0');
// Fill out an input record that simulates pressing the desired key followed // by releasing the same key. INPUT_RECORD input_record[2]; for(int i = 0; i < 2; ++i) { input_record[i].EventType = KEY_EVENT; input_record[i].Event.KeyEvent.dwControlKeyState = 0; input_record[i].Event.KeyEvent.uChar.UnicodeChar = digit; input_record[i].Event.KeyEvent.wRepeatCount = 1; input_record[i].Event.KeyEvent.wVirtualKeyCode = virtual_key_code; input_record[i].Event.KeyEvent.wVirtualScanCode = MapVirtualKey(virtual_key_code, MAPVK_VK_TO_VSC); } input_record[0].Event.KeyEvent.bKeyDown = TRUE; input_record[1].Event.KeyEvent.bKeyDown = FALSE;
DWORD tmp = 0; WriteConsoleInput( GetStdHandle(STD_INPUT_HANDLE) ,input_record,2,&tmp); }
int main() { int x; std::cout << "Input a number : "; write_digit_to_stdin('1'); write_digit_to_stdin('5'); write_digit_to_stdin('0'); std::cin >> x; std::cout << "Wrote: " << x << std::endl;
return 0; }
Why make it so complicated? #include <iostream> #include <string> #include <sstream> using namespace std;
int main() { int x = 150; string str;
cout << "Input a number (default 150): "; getline(cin, str); if(strlen str) { stringstream(str) >> x; }
cout << "Your chosen number is: " << x << endl;
return 0; }
Well, because the person I was replying to asked for a prompt of the form "Input a number: 150" where the "150" can be edited, so if I typed <backspace>23<enter> the console would look like: Input a number: 1523 Wrote: 1523 What you wrote is similar to the simpler alternative I suggested.
|
On September 18 2013 14:11 rasnj wrote:Show nested quote +On September 18 2013 13:22 Manit0u wrote:On September 17 2013 15:08 rasnj wrote:On September 17 2013 13:34 Amnesty wrote:Anyone know of a way to put something into a input stream. So there is a default value there. So instead of cout >> "Input a number :"; int x; cin << x;
printing Input a number : it would print Input a number :150_<cursor right here> For example. I do not believe there is a standard C++ way of doing this. You can inject data into cin as follows: std::cin.putback('1'); std::cin.putback('5'); std::cin.putback('0'); But it will not show up on the console and you cannot erase those characters. Generally standard C++ is very limited in what you can do at the console. If you are just using the console because you are still learning and it is more convenient than coding a GUI interface, then I would just settle for something simpler like: Input a number (default: 150): _ If you actually want to do it properly you will need to do something platform dependent or use a third-party library that abstracts away the platform dependence. On Win32 you could do (this has no error-handling and can only handle digits, but the idea can easily be generalized if you wish): #include <iostream> #include <windows.h>
void write_digit_to_stdin(char digit) { // Calculate virtual-key code. // Look up on MSDN for codes for other keys than the digits. int virtual_key_code = static_cast<int>(digit) + 0x30 - static_cast<int>('0');
// Fill out an input record that simulates pressing the desired key followed // by releasing the same key. INPUT_RECORD input_record[2]; for(int i = 0; i < 2; ++i) { input_record[i].EventType = KEY_EVENT; input_record[i].Event.KeyEvent.dwControlKeyState = 0; input_record[i].Event.KeyEvent.uChar.UnicodeChar = digit; input_record[i].Event.KeyEvent.wRepeatCount = 1; input_record[i].Event.KeyEvent.wVirtualKeyCode = virtual_key_code; input_record[i].Event.KeyEvent.wVirtualScanCode = MapVirtualKey(virtual_key_code, MAPVK_VK_TO_VSC); } input_record[0].Event.KeyEvent.bKeyDown = TRUE; input_record[1].Event.KeyEvent.bKeyDown = FALSE;
DWORD tmp = 0; WriteConsoleInput( GetStdHandle(STD_INPUT_HANDLE) ,input_record,2,&tmp); }
int main() { int x; std::cout << "Input a number : "; write_digit_to_stdin('1'); write_digit_to_stdin('5'); write_digit_to_stdin('0'); std::cin >> x; std::cout << "Wrote: " << x << std::endl;
return 0; }
Why make it so complicated? #include <iostream> #include <string> #include <sstream> using namespace std;
int main() { int x = 150; string str;
cout << "Input a number (default 150): "; getline(cin, str); if(strlen str) { stringstream(str) >> x; }
cout << "Your chosen number is: " << x << endl;
return 0; }
Well because the person I was replying to asked for it to have a prompt of the form: "Input a number: 150" where the "150" can be edited, so I may type <backspace>23<enter> and the console would look like Input a number: 1523 Wrote: 1523 What you wrote is similar to the simple alternative I suggested.
You may want to look here then: http://www.cplusplus.com/reference/vector/vector/push_back/
|
On September 18 2013 14:36 Manit0u wrote:Show nested quote +On September 18 2013 14:11 rasnj wrote:On September 18 2013 13:22 Manit0u wrote:On September 17 2013 15:08 rasnj wrote:On September 17 2013 13:34 Amnesty wrote:Anyone know of a way to put something into a input stream. So there is a default value there. So instead of cout >> "Input a number :"; int x; cin << x;
printing Input a number : it would print Input a number :150_<cursor right here> For example. I do not believe there is a standard C++ way of doing this. You can inject data into cin as follows: std::cin.putback('1'); std::cin.putback('5'); std::cin.putback('0'); But it will not show up on the console and you cannot erase those characters. Generally standard C++ is very limited in what you can do at the console. If you are just using the console because you are still learning and it is more convenient than coding a GUI interface, then I would just settle for something simpler like: Input a number (default: 150): _ If you actually want to do it properly you will need to do something platform dependent or use a third-party library that abstracts away the platform dependence. On Win32 you could do (this has no error-handling and can only handle digits, but the idea can easily be generalized if you wish): #include <iostream> #include <windows.h>
void write_digit_to_stdin(char digit) { // Calculate virtual-key code. // Look up on MSDN for codes for other keys than the digits. int virtual_key_code = static_cast<int>(digit) + 0x30 - static_cast<int>('0');
// Fill out an input record that simulates pressing the desired key followed // by releasing the same key. INPUT_RECORD input_record[2]; for(int i = 0; i < 2; ++i) { input_record[i].EventType = KEY_EVENT; input_record[i].Event.KeyEvent.dwControlKeyState = 0; input_record[i].Event.KeyEvent.uChar.UnicodeChar = digit; input_record[i].Event.KeyEvent.wRepeatCount = 1; input_record[i].Event.KeyEvent.wVirtualKeyCode = virtual_key_code; input_record[i].Event.KeyEvent.wVirtualScanCode = MapVirtualKey(virtual_key_code, MAPVK_VK_TO_VSC); } input_record[0].Event.KeyEvent.bKeyDown = TRUE; input_record[1].Event.KeyEvent.bKeyDown = FALSE;
DWORD tmp = 0; WriteConsoleInput( GetStdHandle(STD_INPUT_HANDLE) ,input_record,2,&tmp); }
int main() { int x; std::cout << "Input a number : "; write_digit_to_stdin('1'); write_digit_to_stdin('5'); write_digit_to_stdin('0'); std::cin >> x; std::cout << "Wrote: " << x << std::endl;
return 0; }
Why make it so complicated? #include <iostream> #include <string> #include <sstream> using namespace std;
int main() { int x = 150; string str;
cout << "Input a number (default 150): "; getline(cin, str); if(strlen str) { stringstream(str) >> x; }
cout << "Your chosen number is: " << x << endl;
return 0; }
Well because the person I was replying to asked for it to have a prompt of the form: "Input a number: 150" where the "150" can be edited, so I may type <backspace>23<enter> and the console would look like Input a number: 1523 Wrote: 1523 What you wrote is similar to the simple alternative I suggested. You may want to look here then: http://www.cplusplus.com/reference/vector/vector/push_back/ I think you are misunderstanding what is expected. std::cin is not a vector it is an std::istream, so the relevant instruction is std::istream::putback as I remarked in my post. However this does not actually show up on the console, only in cin's internal buffer, and the user cannot delete it.
What we want is that when we launch the program the prompt looks as follows: > Input a number: 150 If we press backspace we get: > Input a number: 15 If we press 2 at this point we get: > Input a number: 152 And we can now press enter, at which point cin sends "152" to str.
|
On September 18 2013 10:17 waxypants wrote:Show nested quote +On September 17 2013 16:20 WindWolf wrote:On September 17 2013 15:06 sluggaslamoo wrote:On September 17 2013 14:56 WindWolf wrote: One reason I'm wondering why people hate setters and getters:
Lets assume that you've written a RNG that has complex internal code (just as an example) and requires lots of code just to set a seed (once again, just as an example). In that case, I would find it easier to read setSeed() than reading all of the code required to set the seed itself There are plenty of reasons. In this case it could make the code hard to debug. Someone could easily already be using setSeed() deeply nested within some code file. Then when your code doesn't randomize differently, you are sitting their scratching your head why your RNG code didn't work when really it was setSeed causing the issue. The seed should be set in the initialiser or as a config variable or both. You aren't going to need to set the seed on the fly for RNG in a game. When the game starts you initialise the RNG object with the seed using the game's config table, after that it doesn't need to change until the next game. This allows you to isolate the issue and make debugging much easier. IMO it is also simpler and much more intuitive than using setSeed. I didn't meant about RNG's specifically, I was just the principal of using a set method for a long piece of code instead of writing all of that code again. And even is setSeed isn't a good name, what would make for a good name (we can take the debate on whenever you should be able to change the seed of a RGN or not another time) What you described is not a "setter" just because it has name "set" in it. Of course almost every function or even single statement in programming involves something getting "set" to something else. Doesn't make it a "setter". edit: Although I think you inadvertently brought up a good question, which is "how much work can a method do before it's not really a 'setter' or 'getter' anymore?" I think you brought up an example where it is clearly doing too much work to be a "setter". I'd say it's a setter just fine. While it's true that a setter will usually not have all that much code it in, and definitely not a ton of side-effects, from an interface standpoint, it's still a setter since you would definitely describe the Random class as something containing a seed property, and this setter sets that value. From an interface standpoint, you don't know whether it's implemented in a complex way.
At least IMO, a setter or getter is defined by their purpose in the interface, not by their actual implementations.
|
On September 18 2013 14:44 rasnj wrote:Show nested quote +On September 18 2013 14:36 Manit0u wrote:On September 18 2013 14:11 rasnj wrote:On September 18 2013 13:22 Manit0u wrote:On September 17 2013 15:08 rasnj wrote:On September 17 2013 13:34 Amnesty wrote:Anyone know of a way to put something into a input stream. So there is a default value there. So instead of cout >> "Input a number :"; int x; cin << x;
printing Input a number : it would print Input a number :150_<cursor right here> For example. I do not believe there is a standard C++ way of doing this. You can inject data into cin as follows: std::cin.putback('1'); std::cin.putback('5'); std::cin.putback('0'); But it will not show up on the console and you cannot erase those characters. Generally standard C++ is very limited in what you can do at the console. If you are just using the console because you are still learning and it is more convenient than coding a GUI interface, then I would just settle for something simpler like: Input a number (default: 150): _ If you actually want to do it properly you will need to do something platform dependent or use a third-party library that abstracts away the platform dependence. On Win32 you could do (this has no error-handling and can only handle digits, but the idea can easily be generalized if you wish): #include <iostream> #include <windows.h>
void write_digit_to_stdin(char digit) { // Calculate virtual-key code. // Look up on MSDN for codes for other keys than the digits. int virtual_key_code = static_cast<int>(digit) + 0x30 - static_cast<int>('0');
// Fill out an input record that simulates pressing the desired key followed // by releasing the same key. INPUT_RECORD input_record[2]; for(int i = 0; i < 2; ++i) { input_record[i].EventType = KEY_EVENT; input_record[i].Event.KeyEvent.dwControlKeyState = 0; input_record[i].Event.KeyEvent.uChar.UnicodeChar = digit; input_record[i].Event.KeyEvent.wRepeatCount = 1; input_record[i].Event.KeyEvent.wVirtualKeyCode = virtual_key_code; input_record[i].Event.KeyEvent.wVirtualScanCode = MapVirtualKey(virtual_key_code, MAPVK_VK_TO_VSC); } input_record[0].Event.KeyEvent.bKeyDown = TRUE; input_record[1].Event.KeyEvent.bKeyDown = FALSE;
DWORD tmp = 0; WriteConsoleInput( GetStdHandle(STD_INPUT_HANDLE) ,input_record,2,&tmp); }
int main() { int x; std::cout << "Input a number : "; write_digit_to_stdin('1'); write_digit_to_stdin('5'); write_digit_to_stdin('0'); std::cin >> x; std::cout << "Wrote: " << x << std::endl;
return 0; }
Why make it so complicated? #include <iostream> #include <string> #include <sstream> using namespace std;
int main() { int x = 150; string str;
cout << "Input a number (default 150): "; getline(cin, str); if(strlen str) { stringstream(str) >> x; }
cout << "Your chosen number is: " << x << endl;
return 0; }
Well because the person I was replying to asked for it to have a prompt of the form: "Input a number: 150" where the "150" can be edited, so I may type <backspace>23<enter> and the console would look like Input a number: 1523 Wrote: 1523 What you wrote is similar to the simple alternative I suggested. You may want to look here then: http://www.cplusplus.com/reference/vector/vector/push_back/ I think you are misunderstanding what is expected. std::cin is not a vector it is an std::istream, so the relevant instruction is std::istream::putbackas I remarked in my post. However this does not actually show up on the console, only in cin's internal buffer, and the user cannot delete it. What we want is that when we launch the prompt looks as follows: > Input a number: 150 If we press backspace we get > Input a number: 15 If we press 2 at this point we get: > Input a number 152 And we can now press enter at which point cin appends sends "152" to str.
How about this?
class mystream { stringstream stream;
public: template<typename T> istream& operator >> (T& x) { return stream.str().empty() ? cin >> x : stream >> x; }
template<typename T> ostream& operator << (const T& x) { return stream << x; } } myin;
myin << x will print x into the buffer. myin >> x will read from the cin.
Damn it's hard... How about just using the GNU readline library?
|
It's far from pretty...
#include <conio.h>    // _getch
#include <windows.h>  // console API
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>

template<typename Target, typename Source> Target lexical_cast(Source arg) { std::stringstream interpreter; Target result; if(!(interpreter << arg) || !(interpreter >> result) || !(interpreter >> std::ws).eof()) throw std::runtime_error("lexical_cast - bad cast");
return result; } template<typename Type> Type default_value(Type val) { std::string defaultval = lexical_cast<std::string>(val); auto hConsole = GetStdHandle(STD_OUTPUT_HANDLE); CONSOLE_SCREEN_BUFFER_INFO csbi; GetConsoleScreenBufferInfo(hConsole, &csbi); auto start_position = csbi.dwCursorPosition; std::cout << defaultval;
while(true) { auto input = _getch(); if(input == 13 || input == 230) { std::cout << std::endl; break; } else if( input == 8) // backspace { if(defaultval.length() > 0) { defaultval.pop_back(); COORD c(start_position); c.X += defaultval.length(); SetConsoleCursorPosition(hConsole, c); std::cout << ' '; SetConsoleCursorPosition(hConsole, c); } } else { defaultval.push_back(char(input)); std::cout << (char)input; } } return lexical_cast<Type>(defaultval); }
int main() { std::cout << "Input a filename : "; std::string x = default_value<std::string>("autosave.dat"); cout << x << std::endl; std::cout << "Input a threshold : "; float f = default_value<float>(50.0f); cout << f << std::endl;
return 0; }
But it works OK, I guess. I'm not even using it anymore, however. =\
|
On September 18 2013 17:41 Manit0u wrote:Show nested quote +On September 18 2013 14:44 rasnj wrote:On September 18 2013 14:36 Manit0u wrote:On September 18 2013 14:11 rasnj wrote:On September 18 2013 13:22 Manit0u wrote:On September 17 2013 15:08 rasnj wrote:On September 17 2013 13:34 Amnesty wrote:Anyone know of a way to put something into a input stream. So there is a default value there. So instead of cout >> "Input a number :"; int x; cin << x;
printing Input a number : it would print Input a number :150_<cursor right here> For example. I do not believe there is a standard C++ way of doing this. You can inject data into cin as follows: std::cin.putback('1'); std::cin.putback('5'); std::cin.putback('0'); But it will not show up on the console and you cannot erase those characters. Generally standard C++ is very limited in what you can do at the console. If you are just using the console because you are still learning and it is more convenient than coding a GUI interface, then I would just settle for something simpler like: Input a number (default: 150): _ If you actually want to do it properly you will need to do something platform dependent or use a third-party library that abstracts away the platform dependence. On Win32 you could do (this has no error-handling and can only handle digits, but the idea can easily be generalized if you wish): #include <iostream> #include <windows.h>
void write_digit_to_stdin(char digit) { // Calculate virtual-key code. // Look up on MSDN for codes for other keys than the digits. int virtual_key_code = static_cast<int>(digit) + 0x30 - static_cast<int>('0');
// Fill out an input record that simulates pressing the desired key followed // by releasing the same key. INPUT_RECORD input_record[2]; for(int i = 0; i < 2; ++i) { input_record[i].EventType = KEY_EVENT; input_record[i].Event.KeyEvent.dwControlKeyState = 0; input_record[i].Event.KeyEvent.uChar.UnicodeChar = digit; input_record[i].Event.KeyEvent.wRepeatCount = 1; input_record[i].Event.KeyEvent.wVirtualKeyCode = virtual_key_code; input_record[i].Event.KeyEvent.wVirtualScanCode = MapVirtualKey(virtual_key_code, MAPVK_VK_TO_VSC); } input_record[0].Event.KeyEvent.bKeyDown = TRUE; input_record[1].Event.KeyEvent.bKeyDown = FALSE;
DWORD tmp = 0; WriteConsoleInput( GetStdHandle(STD_INPUT_HANDLE) ,input_record,2,&tmp); }
int main() { int x; std::cout << "Input a number : "; write_digit_to_stdin('1'); write_digit_to_stdin('5'); write_digit_to_stdin('0'); std::cin >> x; std::cout << "Wrote: " << x << std::endl;
return 0; }
Why make it so complicated? #include <iostream> #include <string> #include <sstream> using namespace std;
int main() { int x = 150; string str;
cout << "Input a number (default 150): "; getline(cin, str); if(strlen str) { stringstream(str) >> x; }
cout << "Your chosen number is: " << x << endl;
return 0; }
Well because the person I was replying to asked for it to have a prompt of the form: "Input a number: 150" where the "150" can be edited, so I may type <backspace>23<enter> and the console would look like Input a number: 1523 Wrote: 1523 What you wrote is similar to the simple alternative I suggested. You may want to look here then: http://www.cplusplus.com/reference/vector/vector/push_back/ I think you are misunderstanding what is expected. std::cin is not a vector it is an std::istream, so the relevant instruction is std::istream::putbackas I remarked in my post. However this does not actually show up on the console, only in cin's internal buffer, and the user cannot delete it. What we want is that when we launch the prompt looks as follows: > Input a number: 150 If we press backspace we get > Input a number: 15 If we press 2 at this point we get: > Input a number 152 And we can now press enter at which point cin appends sends "152" to str. How about this? class mystream { stringstream stream;
public: template<typename T> istream& operator >> (T& x) { return stream.str().empty() ? cin >> x : stream >> x; }
template<typename T> ostream& operator << (const T& x) { return stream << x; } } myin;
myin << x will print x into the buffer. myin >> x will read from the cin. Damn it's hard... How about just using the GNU readline library? I don't see how you could use that to reproduce what I typed. You still don't have a way to type "150" in code to the console and being able to erase that with backspace once the program is running In any case I am pretty sure that it is actually impossible in standard C++. There is no utility to write to the buffer that the console has prior to pressing enter. In fact there is no notion of console, there is just "standard input" and "standard output". This is why I'm fairly certain it is technically impossible (not just hard) in standard C++ to accomplish this.
However, if you are willing to use platform-specific techniques it can usually be done, either by writing characters into STDIN (making it as though you had typed them on the keyboard), which I provided Win32 code for, or by changing the cursor position and deleting characters appropriately, as Amnesty himself just posted a sample of.
From what I've heard, the GNU readline library could probably accomplish this, but I've never used it, and since he is on a specific platform anyway I thought I would recommend the native method rather than pulling in a library.
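For anyone curious, here is roughly what the readline route could look like — a minimal, untested sketch assuming GNU readline is available (link with -lreadline). prompt_with_default and insert_default are names made up for this example; rl_startup_hook and rl_insert_text are the readline facilities commonly used to pre-fill an editable line:
[code]
// Minimal, untested sketch: pre-filling a readline prompt with an editable
// default. Assumes GNU readline is installed; build with -lreadline.
#include <cstdlib>
#include <iostream>
#include <string>
#include <readline/readline.h>

namespace {
    const char* g_default_text = "";

    // Startup hook: readline calls this before it starts reading input, so the
    // inserted text shows up on the line, already editable with backspace.
    int insert_default() {
        rl_insert_text(g_default_text);
        return 0;
    }
}

std::string prompt_with_default(const std::string& prompt, const std::string& def) {
    g_default_text = def.c_str();
    rl_startup_hook = insert_default;
    char* line = readline(prompt.c_str());   // user sees: Input a number : 150_
    rl_startup_hook = nullptr;
    std::string result = line ? line : "";
    std::free(line);                         // readline() returns malloc'd memory
    return result;
}

int main() {
    std::string s = prompt_with_default("Input a number : ", "150");
    std::cout << "Wrote: " << s << '\n';
    return 0;
}
[/code]
The pre-filled "150" behaves like ordinary typed input: backspace erases it, and pressing enter returns whatever is left on the line.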
|
I've been wondering, what's the reason for C style pointers missing in languages such as Java? Is object reference the better replacement?
|
On September 18 2013 23:26 darkness wrote: I've been wondering, what's the reason for C style pointers missing in languages such as Java? Is object reference the better replacement? It comes down to: 1) It's hard to manage memory, and even if most programmers believe they are superstars they often make mistakes so we don't allow them to. 2) It is much easier to make an efficient garbage collector if you control all of memory allocation. It's hard to say when to destroy an object when at any time a programmer might pull off some fancy pointer arithmetic that points to an object you thought no one was referencing. 3) Why would you want to?
If you can come up with an example where you think they might help you, then we can comment more concretely on that.
EDIT: And no, object reference is not "better". For the sake of just passing objects by reference, object reference is likely better, but pointers give you more flexibility. They are different concepts. It is like asking whether monitors or printers are better. It depends on your application.
|
On September 18 2013 12:57 sluggaslamoo wrote:Show nested quote +On September 18 2013 08:45 berated- wrote:On September 18 2013 08:33 sluggaslamoo wrote:On September 17 2013 16:20 WindWolf wrote:On September 17 2013 15:06 sluggaslamoo wrote:On September 17 2013 14:56 WindWolf wrote: One reason I'm wondering why people hate setters and getters:
Lets assume that you've written a RNG that has complex internal code (just as an example) and requires lots of code just to set a seed (once again, just as an example). In that case, I would find it easier to read setSeed() than reading all of the code required to set the seed itself There are plenty of reasons. In this case it could make the code hard to debug. Someone could easily already be using setSeed() deeply nested within some code file. Then when your code doesn't randomize differently, you are sitting their scratching your head why your RNG code didn't work when really it was setSeed causing the issue. The seed should be set in the initialiser or as a config variable or both. You aren't going to need to set the seed on the fly for RNG in a game. When the game starts you initialise the RNG object with the seed using the game's config table, after that it doesn't need to change until the next game. This allows you to isolate the issue and make debugging much easier. IMO it is also simpler and much more intuitive than using setSeed. I didn't meant about RNG's specifically, I was just the principal of using a set method for a long piece of code instead of writing all of that code again. And even is setSeed isn't a good name, what would make for a good name (we can take the debate on whenever you should be able to change the seed of a RGN or not another time) The same principle still applies for lots of things. You want to reduce the amount of side effects to a minimum in order to make maintenance easier. I can't even remember the last time I have used a setter. Can you make some clarifications, for those of us in the peanut gallery. What language do you do most of your programming in? And what are the size of the projects you work on? Also, are you excluding domain objects from outside of your discussions of getters and setters are bad. I use Ruby mostly, but in certain companies I have had to deal with other languages, and I've worked on both small and behemoth monolithic Java projects. Basically what phar said. For every time you allow something to change, it means one extra thing you have to think of when debugging. Setting is best left to initialisers or factories, then the object performs services using those parameters and keeps everything self contained. Instead of using setters, you can create a new object with new parameters. Think of it as people working within an organisation, you don't set attributes to people, you have different people who are capable of doing different things. "Hire" the people you need, then hand off the work to each of these people to get the work done. In Ruby I also tend to do a lot of things functionally, which removes the issue of side effects, but you also don't have these hand offs between objects when really you just want something short and sweet. Its very hard to do right and will take years to figure out how to be able to do it in a way that makes the code look better not worse, let alone figure it out quickly. Just realise that in the long term you are going to be much better off. The reason people write setters is because they don't realise other patterns exist to allow you to perform operations without them.
While that sounds nice in theory, not using setters at all often produces needless overhead.
Take, for example, a simple forum with user management. You have users, users have passwords, and passwords are encrypted and occasionally get changed. In your example, every time a user changes his password, you would clone the old user object with a different password parameter, which is a lot more overhead than taking an existing object and simply changing a property of it through a setter.
A simple "SetPassword(new Password)" method is easier to read, write and understand with exactly zero useless overhead. You can even hide the password encrypting, salting, spicing and garnishing behind it and not use a getter because passwords are 1-way encrypted and reading it doesn't make sense.
Yes, there are lots of getters/setters that are bad and I'm usually in favour of only using what is really necessary and not going by the internal data structure of the object, but there are also lots of legitimate reasons to use getters/setters, even in semi-functional languages like Ruby. You just always have to think before blindly creating getters and setters, which admittedly is something not a lot of programmers do. Setters should never be a replacement for a proper constructor nor be required to initialize an object; that should always happen in the constructor so the object always has a valid state.
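As a rough illustration of the kind of setter described above, here is a minimal C++ sketch. The User class is invented for the example, and std::hash merely stands in for a real password-hashing scheme (it is nowhere near suitable for actual credentials):
[code]
// Illustrative only: a setter that hides the hashing step. The User class is
// made up, and std::hash is a stand-in -- real credentials need bcrypt/scrypt/
// argon2 or similar, never std::hash.
#include <cstddef>
#include <functional>
#include <string>

class User {
public:
    User(const std::string& name, const std::string& password)
        : name_(name) { setPassword(password); }

    // The caller hands over plaintext; only the hash is ever stored.
    void setPassword(const std::string& plaintext) {
        password_hash_ = std::hash<std::string>{}(plaintext);
    }

    // No getter for the password: it is write-only from the outside,
    // but we can still answer "does this guess match?".
    bool checkPassword(const std::string& guess) const {
        return std::hash<std::string>{}(guess) == password_hash_;
    }

private:
    std::string name_;
    std::size_t password_hash_ = 0;
};

int main() {
    User u("alice", "hunter2");
    return u.checkPassword("hunter2") ? 0 : 1;   // usage example
}
[/code]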
|
There is nothing inherently wrong with getters and setters, but like many things in programming, they are overused in the industry.
There are some easy rules to follow for the most common languages. In Java, for example, never use public fields. You want to use getters and setters because you don't want to refactor all your code later if you decide to change something from a public field to a method. If you want to expose a field, use a getter every time. Think really carefully before making a setter for anything. Do not just use Eclipse to generate them for every field you have; this is the worst. Every setter makes your class that much more mutable and more complicated. Always use final for fields that won't change. Managing what is mutable and what isn't is a key point of proper class design.
It's easier in a language like C#, because there you have Properties. It's a uniform and standard way to abstract access to your class fields. If you want to expose a private field, just pass it along through a Property of the same name. If you decide later that there should be some computation as well during getting and setting, just add it to the Property. Nice and easy, although the name is confusing and overloaded, but you get used to that.
This is called the Uniform access principle: http://en.wikipedia.org/wiki/Uniform_access_principle
It's even easier in Ruby because like the Eiffel programming language that originated the principle, it has absolutely uniform access to its fields and computations. So in Ruby your method for getting field x of a class will be called simply x. So when you call it it looks like you are using public fields, like SomeObject.x. That is not the case however, and that is the point of the Uniform access principle. It's just a method passing along a private field named x, or it could be any kind of computation. You can't tell just from the method name. This lack of expressiveness bothers some people, but it's more abstract for sure, and is slowly becoming the norm.
You can't have this in a language like Java. For example, if in a class you have a field: private int number;
You can then have getters and setters like this: public int number() {return this.number;} public void number(int input) {this.number = input;}
And call it like this: System.out.print(SomeObject.number()); //get SomeObject.number(5); //set But in Java the parentheses are always required: there is no property syntax, so a method call can never look like plain field access the way it can in Ruby or C#. Just one of the many limitations of Java.
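For comparison, the same get/set pair sharing the field's name is ordinary overloading in C++ as well — a small sketch, with the Counter class invented purely for illustration:
[code]
// Sketch: a getter and setter sharing the field's name in C++. The two
// overloads differ in parameter list, so this is plain overloading.
#include <iostream>

class Counter {
public:
    int number() const { return number_; }        // "get"
    void number(int value) { number_ = value; }   // "set"
private:
    int number_ = 0;
};

int main() {
    Counter c;
    c.number(5);                       // set
    std::cout << c.number() << '\n';   // get -- prints 5
    return 0;
}
[/code]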
I hope that clears it all up.
|
On September 18 2013 23:36 rasnj wrote:Show nested quote +On September 18 2013 23:26 darkness wrote: I've been wondering, what's the reason for C style pointers missing in languages such as Java? Is object reference the better replacement? It comes down to: 1) It's hard to manage memory, and even if most programmers believe they are superstars they often make mistakes so we don't allow them to. 2) It is much easier to make an efficient garbage collector if you control all of memory allocation. It's hard to say when to destroy an object when at any time a programmer might pull of some fancy pointer arithmetic that points to an object you thought no one was referencing. 3) Why would you want to? If you can come up with an example where you think they might help you, then we can comment more concretely on that. EDIT: And no, object reference is not "better". For the sake of just passing objects by reference, object reference is likely better, but pointers give you more flexibility. They are different concepts. It is like asking whether monitors or printers are better. It depends on your application.
Make sure to distinguish the pointers-vs.-references question from manual memory management. C is a language that contains both explicit pointers and manual memory management, but the two concepts need not be intertwined, i.e., you can have a language with manual memory management but no notion of a C-like pointer. In particular, references are merely a restricted version of pointers. That is, a reference is a pointer that cannot be re-seated, i.e., made to point to another object/chunk of memory, once it is set initially.
Manual memory management aside, the behavior of being able to re-assign a pointer is seldom necessary in most domains. When it is necessary, e.g., iterating over an array, specialized constructs such as array indexing (distinct from pointer manipulation) covers those situations in a safer, more direct manner than pointers can.
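A tiny generic illustration of that distinction (not taken from the post above): a pointer can be re-seated, a reference cannot, and array indexing covers the common iteration case without any pointer arithmetic:
[code]
// Generic illustration: re-seating a pointer vs. a reference, and array
// indexing instead of pointer arithmetic.
#include <iostream>

int main() {
    int a = 1, b = 2;

    int* p = &a;   // pointer: currently points at a
    p = &b;        // re-seated: now points at b

    int& r = a;    // reference: bound to a for its entire lifetime
    r = b;         // does NOT re-seat r; it assigns b's value (2) into a

    std::cout << a << ' ' << *p << '\n';   // prints: 2 2

    int arr[3] = {10, 20, 30};
    int sum = 0;
    for (int i = 0; i < 3; ++i) {
        sum += arr[i];                     // indexing, no pointer manipulation
    }
    std::cout << sum << '\n';              // prints: 60
    return 0;
}
[/code]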
|
And also, the conversation about getters and setters is confusing two distinct concepts:
(1) Getters/setters and properties. Setters and getters are methods that wrap properties of an object, typically abstracting away the fact that they are "backed" by an underlying instance variable/field. This is useful when you want to enforce invariants about those properties (e.g., a setter that ensures that a person's age should always be positive) or control the behavior of an object with respect to those properties (e.g., making a getter private, effectively making the corresponding property "write only").
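A minimal sketch of such an invariant-enforcing setter, using the age example just mentioned; the Person class and the choice to throw are illustrative assumptions, not a prescribed design:
[code]
// Sketch of an invariant-enforcing setter (a person's age must not be
// negative, mirroring the example above). Throwing is one possible policy.
#include <stdexcept>

class Person {
public:
    int age() const { return age_; }

    void setAge(int age) {
        if (age < 0) {
            throw std::invalid_argument("age must be non-negative");
        }
        age_ = age;
    }

private:
    int age_ = 0;   // the backing field is never exposed directly
};

int main() {
    Person p;
    p.setAge(30);      // fine
    // p.setAge(-5);   // would throw std::invalid_argument
    return p.age() == 30 ? 0 : 1;
}
[/code]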
People deride getters/setters (in the "post-Java" world) because they are emblematic of the over-engineering mentality that they believe characterizes many object-oriented programs. Trivial getters/setters, i.e., those that simply get and set some underlying instance variable/field, do not add immediate value and most likely will never be changed. Or, if they are changed, they will require invasive enough changes elsewhere that the benefit of the abstraction is negligible.
(2) Mutation. Simply put, the ability to change the value of variables in a program can ruin your ability to locally reason about the correctness of your programs. Furthermore, mutation of data in many cases can be considered a low-level concern relative to the high-level concern of specifying, in a declarative manner, how a computation ought to be carried out. In virtually all programming languages and situations, you want to limit your use of mutation as much as possible in order to preserve local reasoning and be able to state clearly and concisely "what" your code does rather than "how" it should operate.
|
So from all this discussion about getters and setters I have gathered that I should generally avoid them except in very specific situations where their use will not cause problems later?
|
On September 19 2013 08:28 Azerbaijan wrote: So from all this discussion about getters and setters I have gathered that I should generally avoid them except in very specific situations where their use will not cause problems later?
You should use them in most situations, but it depends on the language and what you're using them for. Programmers are very nitpicky when it comes to certain things, and if something doesn't work in certain situations, it becomes their mission in life to make sure that no one ever uses it, even if it would be useful.
|
On September 19 2013 08:36 zzdd wrote:Show nested quote +On September 19 2013 08:28 Azerbaijan wrote: So from all this discussion about getters and setters I have gathered that I should generally avoid them except in very specific situations where their use will not cause problems later? You should use them in most situations but it depends on the language and what you're using them for. Programmer are very nitpicky when it comes to certain things and if it doesn't work in certain situations, it becomes their mission in life to make sure that no one ever uses it even if it will be useful.
The higher-order bit here is minimizing state in your program. Try to avoid extraneous state in your programs (e.g., using iterators/algorithms in C++ or comprehensions in Python over for-loops), and when you need to use state, ensure that you limit the scope of who can observe and modify it (e.g., using a getter with a private setter). Getters and setters in this light become a language-dependent implementation detail whose use depends on the context you are programming in.
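A small sketch of the iterators/algorithms point in C++ (the numbers are arbitrary): the std::accumulate call keeps the running total out of sight, while the hand-written loop exposes two pieces of mutable state:
[code]
// The same sum written twice: the raw loop exposes two mutable variables,
// the algorithm keeps its state internal.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    const std::vector<int> values = {3, 1, 4, 1, 5, 9};

    // Hand-rolled loop: mutable index and mutable accumulator.
    int total = 0;
    for (std::size_t i = 0; i < values.size(); ++i) {
        total += values[i];
    }

    // Algorithm: the iteration state lives inside std::accumulate.
    const int total2 = std::accumulate(values.begin(), values.end(), 0);

    std::cout << total << ' ' << total2 << '\n';   // prints: 23 23
    return 0;
}
[/code]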
|
On September 19 2013 01:58 Morfildur wrote:Show nested quote +On September 18 2013 12:57 sluggaslamoo wrote:On September 18 2013 08:45 berated- wrote:On September 18 2013 08:33 sluggaslamoo wrote:On September 17 2013 16:20 WindWolf wrote:On September 17 2013 15:06 sluggaslamoo wrote:On September 17 2013 14:56 WindWolf wrote: One reason I'm wondering why people hate setters and getters:
Lets assume that you've written a RNG that has complex internal code (just as an example) and requires lots of code just to set a seed (once again, just as an example). In that case, I would find it easier to read setSeed() than reading all of the code required to set the seed itself There are plenty of reasons. In this case it could make the code hard to debug. Someone could easily already be using setSeed() deeply nested within some code file. Then when your code doesn't randomize differently, you are sitting their scratching your head why your RNG code didn't work when really it was setSeed causing the issue. The seed should be set in the initialiser or as a config variable or both. You aren't going to need to set the seed on the fly for RNG in a game. When the game starts you initialise the RNG object with the seed using the game's config table, after that it doesn't need to change until the next game. This allows you to isolate the issue and make debugging much easier. IMO it is also simpler and much more intuitive than using setSeed. I didn't meant about RNG's specifically, I was just the principal of using a set method for a long piece of code instead of writing all of that code again. And even is setSeed isn't a good name, what would make for a good name (we can take the debate on whenever you should be able to change the seed of a RGN or not another time) The same principle still applies for lots of things. You want to reduce the amount of side effects to a minimum in order to make maintenance easier. I can't even remember the last time I have used a setter. Can you make some clarifications, for those of us in the peanut gallery. What language do you do most of your programming in? And what are the size of the projects you work on? Also, are you excluding domain objects from outside of your discussions of getters and setters are bad. I use Ruby mostly, but in certain companies I have had to deal with other languages, and I've worked on both small and behemoth monolithic Java projects. Basically what phar said. For every time you allow something to change, it means one extra thing you have to think of when debugging. Setting is best left to initialisers or factories, then the object performs services using those parameters and keeps everything self contained. Instead of using setters, you can create a new object with new parameters. Think of it as people working within an organisation, you don't set attributes to people, you have different people who are capable of doing different things. "Hire" the people you need, then hand off the work to each of these people to get the work done. In Ruby I also tend to do a lot of things functionally, which removes the issue of side effects, but you also don't have these hand offs between objects when really you just want something short and sweet. Its very hard to do right and will take years to figure out how to be able to do it in a way that makes the code look better not worse, let alone figure it out quickly. Just realise that in the long term you are going to be much better off. The reason people write setters is because they don't realise other patterns exist to allow you to perform operations without them. While that sounds nice in theory, not using setters at all often produces needless overhead. Take for an example a simple forum with user management. You have users, users have passwords and passwords are encrypted and occasionally get changed. 
In your example, everytime a user changes his password, you would clone the old user object with a different password parameter, which is a lot more overhead than taking an existing object and simply changing a property of it through a setter. A simple "SetPassword(new Password)" method is easier to read, write and understand with exactly zero useless overhead. You can even hide the password encrypting, salting, spicing and garnishing behind it and not use a getter because passwords are 1-way encrypted and reading it doesn't make sense. Yes, there are lots of getters/setters that are bad and i'm usually in favour of only using what is really neccessary and not go by the internal datastructure of the object, but there are also lots of legitimate reasons to use getters/setters, even in semi-functional languages like ruby. You just always have to think before blindly creating getters and setters, which admittedly is something not a lot of programmers do. Setters should never be a replacement for a proper constructor nor be required to initialize an object; That should always happen in the constructor so the object always has a valid state.
Not trying to sound arrogant or anything, but I think it's funny to say "it sounds nice in theory" when it is actually something I do in my profession all the time.
For database work you should have a library which prevents you from having to use or write setters. Good libraries will provide an abstraction which takes you away from the process of having to manage data manually, mitigating the risk of human error. You then declare your database strategy, your validations, and your data-processing methods, and your library should handle the rest.
Sure, in the end you are still "setting" data, but because it is all automated and declarative you are reducing side effects to a minimum.
For example, you will first be encapsulating the process in a transaction, which gives you fail-fast and rollback capabilities. Second, instead of using setPassword, you would be using a system which takes form data and processes it automatically.
Any fields that are changed are processed within a transaction; if there are any errors it is rolled back, and if not the transaction is committed and any processing required (e.g. creating an encrypted password) is executed without procedural programming involved.
Feel free to keep shooting me examples. Keep in mind we are talking in the context of using OOP, which is mostly useful for websites and business applications. The only thing I can think of where setters will be used extensively is an application where full OOP is not ideal, like traditional games.
EDIT: I think I should clarify and answer your question more specifically.
Sometimes it's impossible to avoid side effects or setting data, and your example is a good one. When you have to save user data, it's physically impossible not to "set" data; it wouldn't make sense not to.
But as I said earlier, the only reason a "setter" seems unavoidable in this example is that you haven't learned a pattern to avoid one. There is always more overhead in writing and using setters.
To provide a solution to your example specifically:
- I would use a database library which handles my form data.
- I would write a callback in my database layer which automatically encrypts and saves the password whenever it changes.
- When the user submits form data which includes a password, the database library executes that callback, which encrypts the password.
This is much more elegant than using setPassword(), and I'm sure you understand the headaches involved when it comes to maintaining a very large program. You would have to make sure setPassword() is called in all the appropriate user-save methods, finding those call sites could take you hours in a corporate business application, and you would have to make sure that every other programmer knows to use it as well.
Having an on_change callback solves all these problems: if a person puts a password field in a new page that I don't know of, it will still take effect. Now I am completely abstracted away from the database management process. I don't need to use set() or save() anywhere in my code, which makes my life much easier when I run into any problems.
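Outside a full framework like ActiveRecord, the same idea can be sketched generically as an on-change hook. Everything below (Record, on_change, set_field) is invented for illustration — a real ORM provides this wiring for you:
[code]
// Invented-for-illustration sketch of an "on change" hook; a real ORM
// (e.g. ActiveRecord) supplies this machinery for you.
#include <functional>
#include <map>
#include <string>
#include <utility>

class Record {
public:
    // Register a callback that runs whenever a field is assigned.
    void on_change(std::function<void(const std::string&, std::string&)> cb) {
        on_change_ = std::move(cb);
    }

    // Every write funnels through one place, so the hook cannot be
    // forgotten the way a hand-called setPassword() can.
    void set_field(const std::string& name, std::string value) {
        if (on_change_) on_change_(name, value);
        fields_[name] = std::move(value);   // stand-in for the actual DB write
    }

private:
    std::function<void(const std::string&, std::string&)> on_change_;
    std::map<std::string, std::string> fields_;
};

int main() {
    Record user;
    user.on_change([](const std::string& field, std::string& value) {
        if (field == "password") {
            value = "hashed(" + value + ")";   // placeholder for real hashing
        }
    });
    user.set_field("password", "hunter2");     // the callback fires automatically
    return 0;
}
[/code]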
|
On September 19 2013 10:16 sluggaslamoo wrote:Show nested quote +On September 19 2013 01:58 Morfildur wrote:On September 18 2013 12:57 sluggaslamoo wrote:On September 18 2013 08:45 berated- wrote:On September 18 2013 08:33 sluggaslamoo wrote:On September 17 2013 16:20 WindWolf wrote:On September 17 2013 15:06 sluggaslamoo wrote:On September 17 2013 14:56 WindWolf wrote: One reason I'm wondering why people hate setters and getters:
Lets assume that you've written a RNG that has complex internal code (just as an example) and requires lots of code just to set a seed (once again, just as an example). In that case, I would find it easier to read setSeed() than reading all of the code required to set the seed itself There are plenty of reasons. In this case it could make the code hard to debug. Someone could easily already be using setSeed() deeply nested within some code file. Then when your code doesn't randomize differently, you are sitting their scratching your head why your RNG code didn't work when really it was setSeed causing the issue. The seed should be set in the initialiser or as a config variable or both. You aren't going to need to set the seed on the fly for RNG in a game. When the game starts you initialise the RNG object with the seed using the game's config table, after that it doesn't need to change until the next game. This allows you to isolate the issue and make debugging much easier. IMO it is also simpler and much more intuitive than using setSeed. I didn't meant about RNG's specifically, I was just the principal of using a set method for a long piece of code instead of writing all of that code again. And even is setSeed isn't a good name, what would make for a good name (we can take the debate on whenever you should be able to change the seed of a RGN or not another time) The same principle still applies for lots of things. You want to reduce the amount of side effects to a minimum in order to make maintenance easier. I can't even remember the last time I have used a setter. Can you make some clarifications, for those of us in the peanut gallery. What language do you do most of your programming in? And what are the size of the projects you work on? Also, are you excluding domain objects from outside of your discussions of getters and setters are bad. I use Ruby mostly, but in certain companies I have had to deal with other languages, and I've worked on both small and behemoth monolithic Java projects. Basically what phar said. For every time you allow something to change, it means one extra thing you have to think of when debugging. Setting is best left to initialisers or factories, then the object performs services using those parameters and keeps everything self contained. Instead of using setters, you can create a new object with new parameters. Think of it as people working within an organisation, you don't set attributes to people, you have different people who are capable of doing different things. "Hire" the people you need, then hand off the work to each of these people to get the work done. In Ruby I also tend to do a lot of things functionally, which removes the issue of side effects, but you also don't have these hand offs between objects when really you just want something short and sweet. Its very hard to do right and will take years to figure out how to be able to do it in a way that makes the code look better not worse, let alone figure it out quickly. Just realise that in the long term you are going to be much better off. The reason people write setters is because they don't realise other patterns exist to allow you to perform operations without them. While that sounds nice in theory, not using setters at all often produces needless overhead. Take for an example a simple forum with user management. You have users, users have passwords and passwords are encrypted and occasionally get changed. 
In your example, everytime a user changes his password, you would clone the old user object with a different password parameter, which is a lot more overhead than taking an existing object and simply changing a property of it through a setter. A simple "SetPassword(new Password)" method is easier to read, write and understand with exactly zero useless overhead. You can even hide the password encrypting, salting, spicing and garnishing behind it and not use a getter because passwords are 1-way encrypted and reading it doesn't make sense. Yes, there are lots of getters/setters that are bad and i'm usually in favour of only using what is really neccessary and not go by the internal datastructure of the object, but there are also lots of legitimate reasons to use getters/setters, even in semi-functional languages like ruby. You just always have to think before blindly creating getters and setters, which admittedly is something not a lot of programmers do. Setters should never be a replacement for a proper constructor nor be required to initialize an object; That should always happen in the constructor so the object always has a valid state. Not trying to sound arrogant or anything, but I think its funny to say "it sounds nice in theory" when it is actually something I do in my profession all the time. For databasing you should have a library which prevents you from having to use or write setters. Good libraries will provide an abstraction which takes you away from the process of having to manage data manually, mitigating the risk of human error. You then declare you database strategy, your validations, and data processing methods, and your library should handle the rest. Sure in the end, you are still "setting" data but because it is all automated and declarative you are reducing side-effects to a minimum. For example you will first be encapsulating the process in a transaction which gives you fail fast and rollback capabilities. Second of all instead of using setPassword, you would be using a system which takes form data and processes it automatically. Any fields that are changed are processed within a transaction, if there are any errors it is rolled back, if not the transaction is committed and any processing required, e.g creating an encrypted password, is executed without procedural programming involved. Feel free to keep shooting me examples. Keep in mind we are talking in the context of using OOP, which mostly useful for websites and business applications. The only thing I can think of where setters will be used extensively will be in an application where full OOP is not ideal, like traditional games. EDIT: I think I should clarify and answer your question more specifically. Sometimes its impossible to avoid side-effects or setting data, your example is a good one. When you have to save user data, its physically impossible not to "set" data, it wouldn't make sense not to. But as I said earlier, the only reason it doesn't make sense not to use a "setter" in this example is because you haven't learned a pattern to avoid doing so. There is always more overhead in writing and using setters. To provide a solution to your example specifically. - I would use a database library which handles my form data. - I write a callback in my database which automatically encrypts and save the password whenever it changes. 
- When the user submits form data which includes a password, the database library executes the callback which encrypts the password This is much more elegant to using setPassword(), which I'm sure you would understand the headaches involved when it would come to maintaining very large program. You would have to make sure setPassword() is called in all the appropriate user save methods, finding those methods would take you hours in a corporate business application, and you would have to make sure that every other programmer knows to use it as well. Having an on_change callback solves all these problems, if a person puts a password field in a new page that I don't know of, it will still take effect. Now I am completely abstracted away from the database management process. I don't need to use set() or save() anywhere in my code which makes my life much easier when I run into any problems.
This assumes that you are able to use a library like the ones used in Ruby or Python, or Hibernate for Java. Sometimes the SQL you have is so complex, because you don't control the schema, that this isn't possible. The strategy these libraries use is great for simple schemas, not so much when the schemas get super complex. This isn't an excuse to allow poorly designed code, just sayin'...
The thing to note here is that everything in development is about trade-offs. Even Holub's article on JavaWorld touches on this. Encapsulating all the code in the model is great because it allows you to only touch the parts of code that you want to touch. Using libraries that handle this is even better.
What if you decided, though, that relational databases are complete shit and NoSQL is the way to go? You would either have to redo all the libraries that were built for SQL, or you hope that in that instance you wrote your own DAO layer and now you can just swap that out instead of having to touch all the rest of your model code.
Design is hard.
|
On September 19 2013 10:47 berated- wrote:Show nested quote +On September 19 2013 10:16 sluggaslamoo wrote:On September 19 2013 01:58 Morfildur wrote:On September 18 2013 12:57 sluggaslamoo wrote:On September 18 2013 08:45 berated- wrote:On September 18 2013 08:33 sluggaslamoo wrote:On September 17 2013 16:20 WindWolf wrote:On September 17 2013 15:06 sluggaslamoo wrote:On September 17 2013 14:56 WindWolf wrote: One reason I'm wondering why people hate setters and getters:
Lets assume that you've written a RNG that has complex internal code (just as an example) and requires lots of code just to set a seed (once again, just as an example). In that case, I would find it easier to read setSeed() than reading all of the code required to set the seed itself There are plenty of reasons. In this case it could make the code hard to debug. Someone could easily already be using setSeed() deeply nested within some code file. Then when your code doesn't randomize differently, you are sitting their scratching your head why your RNG code didn't work when really it was setSeed causing the issue. The seed should be set in the initialiser or as a config variable or both. You aren't going to need to set the seed on the fly for RNG in a game. When the game starts you initialise the RNG object with the seed using the game's config table, after that it doesn't need to change until the next game. This allows you to isolate the issue and make debugging much easier. IMO it is also simpler and much more intuitive than using setSeed. I didn't meant about RNG's specifically, I was just the principal of using a set method for a long piece of code instead of writing all of that code again. And even is setSeed isn't a good name, what would make for a good name (we can take the debate on whenever you should be able to change the seed of a RGN or not another time) The same principle still applies for lots of things. You want to reduce the amount of side effects to a minimum in order to make maintenance easier. I can't even remember the last time I have used a setter. Can you make some clarifications, for those of us in the peanut gallery. What language do you do most of your programming in? And what are the size of the projects you work on? Also, are you excluding domain objects from outside of your discussions of getters and setters are bad. I use Ruby mostly, but in certain companies I have had to deal with other languages, and I've worked on both small and behemoth monolithic Java projects. Basically what phar said. For every time you allow something to change, it means one extra thing you have to think of when debugging. Setting is best left to initialisers or factories, then the object performs services using those parameters and keeps everything self contained. Instead of using setters, you can create a new object with new parameters. Think of it as people working within an organisation, you don't set attributes to people, you have different people who are capable of doing different things. "Hire" the people you need, then hand off the work to each of these people to get the work done. In Ruby I also tend to do a lot of things functionally, which removes the issue of side effects, but you also don't have these hand offs between objects when really you just want something short and sweet. Its very hard to do right and will take years to figure out how to be able to do it in a way that makes the code look better not worse, let alone figure it out quickly. Just realise that in the long term you are going to be much better off. The reason people write setters is because they don't realise other patterns exist to allow you to perform operations without them. While that sounds nice in theory, not using setters at all often produces needless overhead. Take for an example a simple forum with user management. You have users, users have passwords and passwords are encrypted and occasionally get changed. 
In your example, every time a user changes his password, you would clone the old user object with a different password parameter, which is a lot more overhead than taking an existing object and simply changing a property of it through a setter. A simple "SetPassword(new Password)" method is easier to read, write and understand, with exactly zero useless overhead. You can even hide the password encrypting, salting, spicing and garnishing behind it and not use a getter, because passwords are 1-way encrypted and reading it doesn't make sense. Yes, there are lots of getters/setters that are bad, and I'm usually in favour of only using what is really necessary and not going by the internal data structure of the object, but there are also lots of legitimate reasons to use getters/setters, even in semi-functional languages like Ruby. You just always have to think before blindly creating getters and setters, which admittedly is something not a lot of programmers do. Setters should never be a replacement for a proper constructor nor be required to initialize an object; that should always happen in the constructor so the object always has a valid state. Not trying to sound arrogant or anything, but I think it's funny to say "it sounds nice in theory" when it is actually something I do in my profession all the time. For database work you should have a library which prevents you from having to use or write setters. Good libraries will provide an abstraction which takes you away from the process of having to manage data manually, mitigating the risk of human error. You then declare your database strategy, your validations, and your data processing methods, and your library should handle the rest. Sure, in the end you are still "setting" data, but because it is all automated and declarative you are reducing side effects to a minimum. For example, you will first be encapsulating the process in a transaction, which gives you fail-fast and rollback capabilities. Second of all, instead of using setPassword, you would be using a system which takes form data and processes it automatically. Any fields that are changed are processed within a transaction; if there are any errors it is rolled back, if not the transaction is committed and any processing required, e.g. creating an encrypted password, is executed without procedural programming involved. Feel free to keep shooting me examples. Keep in mind we are talking in the context of using OOP, which is mostly useful for websites and business applications. The only thing I can think of where setters will be used extensively would be an application where full OOP is not ideal, like traditional games. EDIT: I think I should clarify and answer your question more specifically. Sometimes it's impossible to avoid side effects or setting data, and your example is a good one. When you have to save user data, it's physically impossible not to "set" data; it wouldn't make sense not to. But as I said earlier, the only reason it doesn't make sense not to use a "setter" in this example is because you haven't learned a pattern to avoid doing so. There is always more overhead in writing and using setters. To provide a solution to your example specifically: - I would use a database library which handles my form data. - I would write a callback in my database which automatically encrypts and saves the password whenever it changes. 
- When the user submits form data which includes a password, the database library executes the callback, which encrypts the password. This is much more elegant than using setPassword(), and I'm sure you understand the headaches involved when it comes to maintaining a very large program. You would have to make sure setPassword() is called in all the appropriate user save methods, finding those methods would take you hours in a corporate business application, and you would have to make sure that every other programmer knows to use it as well. Having an on_change callback solves all these problems: if a person puts a password field in a new page that I don't know of, it will still take effect. Now I am completely abstracted away from the database management process. I don't need to use set() or save() anywhere in my code, which makes my life much easier when I run into any problems. This assumes that you are able to use a library like the ones used in Ruby, Python, or Hibernate for Java. Sometimes the SQL that you have is so complex, because you don't control the schema, that this isn't possible. The strategy used for these is great for simple schemas, not so much when the schemas get super complex. This isn't an excuse to allow for poorly designed code, just sayin... The thing to note here is that everything in development is about trade-offs. Even Holub's article on JavaWorld touches on this. Encapsulating all the code in the model is great because it allows you to only touch the parts of the code that you want to touch. Using libraries that handle this is even better. What if you decided, though, that relational databases are complete shit, and NoSQL is the way to go? You would either have to redo all the libraries that were written for SQL, or you hope that in that instance you wrote your own DAO layer and now you can just swap that out instead of having to touch all the rest of your model code. Design is hard.
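(For the "write your own DAO layer and swap it out" point above, a bare-bones Java sketch of the idea. The names are made up for illustration; User is the domain class from the example, and each storage choice would get its own implementation of the interface.)

import java.util.Optional;

// The rest of the model code depends only on this interface. A JDBC-backed
// implementation and a document-store-backed implementation can be swapped
// without touching the callers.
public interface UserDao {
    Optional<User> findById(long id);
    void save(User user);
}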
When the schemas get super complex, that's when this system becomes even better. However, there should never really be a time when your schemas get super complex; if that happens you are really over-engineering. I've worked with some very complex schemas using this system. I myself like to keep things very simple, but that's probably due to the fact that things can be much simpler with an ODM than with SQL.
In my case most libraries reserve the same words as active_record, so swapping between a relational database and a document-oriented database is a piece of cake. However, in a large application the company wouldn't bother to give you the time to do this anyway.
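(Since the next paragraph assumes Java: the closest mainstream Java analogue to "same vocabulary regardless of store" is probably a Spring Data-style repository. This is only a rough sketch and assumes Spring Data is on the classpath and that a User class, mapped appropriately for the chosen store, exists elsewhere; it is the library-provided version of the hand-rolled DAO interface above.)

import org.springframework.data.repository.CrudRepository;

// Calling code uses save(), findById(), delete(), etc. whether the backing
// module is Spring Data JPA (relational) or Spring Data MongoDB (document-oriented).
public interface UserRepository extends CrudRepository<User, Long> {
    // The framework derives the query from the method name.
    User findByEmail(String email);
}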
All I'm doing is giving an appropriate example, and in this case I'm assuming Java. Sure, there are edge cases where some obscure programming language won't have an appropriate library for this, but there's nothing to be gained from using those examples in this debate.
It just seems silly to bring up an edge-case example where you have to be dead set on writing setters. You may as well bring up C and ask how to write a website in it without using setters, then when I can't answer, say "hah! see, I told you!".
If an obscure language which nobody uses doesn't provide such a library, I think many people would appreciate you writing the library yourself and sharing it. Otherwise don't use that language; any problems you face, you brought upon yourself.
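(To make the Java case concrete, a rough JPA-style version of the "callback that encrypts on change" approach argued for above. The User fields, the rawPassword form binding, and hashPassword() are all illustrative, and the hash shown is a placeholder, not a real scheme.)

import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.PrePersist;
import jakarta.persistence.PreUpdate;
import jakarta.persistence.Transient;

@Entity
public class User {
    @Id
    private Long id;

    private String email;
    private String passwordHash;

    // Populated by the framework's form binding; never written to the database.
    @Transient
    private String rawPassword;

    // Lifecycle callback: runs on every insert and update, so every code path
    // that saves a User gets the hashing for free -- nobody has to remember
    // to call setPassword() in each save method.
    @PrePersist
    @PreUpdate
    private void encryptPasswordIfChanged() {
        if (rawPassword != null) {
            passwordHash = hashPassword(rawPassword);
            rawPassword = null;
        }
    }

    private static String hashPassword(String raw) {
        // Placeholder only; a real application would use bcrypt/scrypt/argon2 via a proper library.
        return Integer.toHexString(raw.hashCode());
    }
}

The same shape exists in active_record-style libraries as a before_save callback; either way the rule lives next to the data instead of being repeated at every call site.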
|
On September 19 2013 11:05 sluggaslamoo wrote:Show nested quote +
I guess your edge case is my daily reality, and my edge case is your daily reality. It seems silly to me to talk about running a website and call that the normal case. It's kind of hard to keep the process of running a warehouse simple.
I'm by no means saying you're wrong, just observing that making an absolute statement like "getters and setters are bad, and languages that use them are _terrible_ languages" is pretty bold. The only point I'm trying to raise is that everything is about tradeoffs, and while one might prefer to avoid getters and setters, doing so should be weighed and debated on a situational basis rather than treated as an absolute.
Edit: I'm far down on the list though, so I should probably bow out of this conversation. :D
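(For completeness, the "set the seed in the initialiser, build a new object instead of calling setSeed" position from the start of the quoted exchange looks roughly like this in Java. GameRng and its names are made up for illustration.)

import java.util.Random;

// Hypothetical immutable RNG wrapper: the seed is fixed at construction time,
// so there is no setSeed() that distant code can call mid-game.
public final class GameRng {
    private final long seed;
    private final Random random;

    public GameRng(long seed) {
        this.seed = seed;
        this.random = new Random(seed);
    }

    // "Create a new object with new parameters" instead of mutating this one.
    public GameRng withSeed(long newSeed) {
        return new GameRng(newSeed);
    }

    public int nextInt(int bound) {
        return random.nextInt(bound);
    }

    public long seed() {
        return seed;
    }
}

// Usage: read the seed once from the game's config at startup, e.g.
//   GameRng rng = new GameRng(configuredSeed);
// and pass rng to whatever needs randomness.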
|