The Big Programming Thread - Page 356

Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20)
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
rasnj
Profile Joined May 2010
United States1959 Posts
September 18 2013 05:11 GMT
#7101
On September 18 2013 13:22 Manit0u wrote:
Show nested quote +
On September 17 2013 15:08 rasnj wrote:
On September 17 2013 13:34 Amnesty wrote:
Anyone know of a way to put something into an input stream, so there is a default value there?

So instead of


cout << "Input a number :";
int x;
cin >> x;

printing
Input a number :

it would print
Input a number :150_<cursor right here>

For example.


I do not believe there is a standard C++ way of doing this. You can inject data into cin as follows:
std::cin.putback('0');
std::cin.putback('5');
std::cin.putback('1'); // putback is LIFO, so push the digits in reverse to read "150"
But it will not show up on the console and you cannot erase those characters. Generally standard C++ is very limited in what you can do at the console. If you are just using the console because you are still learning and it is more convenient than coding a GUI interface, then I would just settle for something simpler like:
Input a number (default: 150): _

If you actually want to do it properly you will need to do something platform dependent or use a third-party library that abstracts away the platform dependence. On Win32 you could do (this has no error-handling and can only handle digits, but the idea can easily be generalized if you wish):
#include <iostream>
#include <windows.h>

void write_digit_to_stdin(char digit)
{
    // Calculate the virtual-key code.
    // Look up the codes for keys other than the digits on MSDN.
    int virtual_key_code = static_cast<int>(digit) + 0x30 - static_cast<int>('0');

    // Fill out an input record that simulates pressing the desired key followed
    // by releasing the same key.
    INPUT_RECORD input_record[2];
    for(int i = 0; i < 2; ++i)
    {
        input_record[i].EventType = KEY_EVENT;
        input_record[i].Event.KeyEvent.dwControlKeyState = 0;
        input_record[i].Event.KeyEvent.uChar.UnicodeChar = digit;
        input_record[i].Event.KeyEvent.wRepeatCount = 1;
        input_record[i].Event.KeyEvent.wVirtualKeyCode = virtual_key_code;
        input_record[i].Event.KeyEvent.wVirtualScanCode = MapVirtualKey(virtual_key_code, MAPVK_VK_TO_VSC);
    }
    input_record[0].Event.KeyEvent.bKeyDown = TRUE;
    input_record[1].Event.KeyEvent.bKeyDown = FALSE;

    DWORD tmp = 0;
    WriteConsoleInput(GetStdHandle(STD_INPUT_HANDLE), input_record, 2, &tmp);
}

int main()
{
    int x;
    std::cout << "Input a number : ";
    write_digit_to_stdin('1');
    write_digit_to_stdin('5');
    write_digit_to_stdin('0');
    std::cin >> x;
    std::cout << "Wrote: " << x << std::endl;

    return 0;
}


Why make it so complicated?


#include <iostream>
#include <string>
#include <sstream>
using namespace std;

int main()
{
    int x = 150;
    string str;

    cout << "Input a number (default 150): ";
    getline(cin, str);

    if(!str.empty())
    {
        stringstream(str) >> x;
    }

    cout << "Your chosen number is: " << x << endl;

    return 0;
}

Well because the person I was replying to asked for it to have a prompt of the form:
"Input a number: 150"
where the "150" can be edited, so I may type
<backspace>23<enter>
and the console would look like
Input a number: 1523
Wrote: 1523
What you wrote is similar to the simple alternative I suggested.
Manit0u
Profile Blog Joined August 2004
Poland17291 Posts
September 18 2013 05:36 GMT
#7102
On September 18 2013 14:11 rasnj wrote:
Show nested quote +
Well because the person I was replying to asked for it to have a prompt of the form:
"Input a number: 150"
where the "150" can be edited, so I may type
<backspace>23<enter>
and the console would look like
Input a number: 1523
Wrote: 1523
What you wrote is similar to the simple alternative I suggested.


You may want to look here then: http://www.cplusplus.com/reference/vector/vector/push_back/
Time is precious. Waste it wisely.
rasnj
Profile Joined May 2010
United States1959 Posts
Last Edited: 2013-09-18 05:44:54
September 18 2013 05:44 GMT
#7103
On September 18 2013 14:36 Manit0u wrote:
Show nested quote +


You may want to look here then: http://www.cplusplus.com/reference/vector/vector/push_back/

I think you are misunderstanding what is expected. std::cin is not a vector; it is a std::istream, so the relevant function is
std::istream::putback
as I remarked in my post. However, this does not actually show up on the console, only in cin's internal buffer, and the user cannot delete it.

What we want is that when we launch, the prompt looks as follows:
> Input a number: 150
If we press backspace we get
> Input a number: 15
If we press 2 at this point we get:
> Input a number: 152
And we can now press enter, at which point cin sends "152" to str.
Tobberoth
Profile Joined August 2010
Sweden6375 Posts
September 18 2013 06:24 GMT
#7104
On September 18 2013 10:17 waxypants wrote:
Show nested quote +
On September 17 2013 16:20 WindWolf wrote:
On September 17 2013 15:06 sluggaslamoo wrote:
On September 17 2013 14:56 WindWolf wrote:
One thing I'm wondering is why people hate setters and getters:

Let's assume that you've written an RNG that has complex internal code (just as an example) and requires lots of code just to set a seed (once again, just as an example). In that case, I would find it easier to read setSeed() than to read all of the code required to set the seed itself.


There are plenty of reasons. In this case it could make the code hard to debug. Someone could easily already be using setSeed() deeply nested within some code file.

Then when your code doesn't randomize differently, you are sitting there scratching your head wondering why your RNG code didn't work, when really it was setSeed causing the issue.

The seed should be set in the initialiser or as a config variable or both. You aren't going to need to set the seed on the fly for RNG in a game. When the game starts you initialise the RNG object with the seed using the game's config table, after that it doesn't need to change until the next game. This allows you to isolate the issue and make debugging much easier.

IMO it is also simpler and much more intuitive than using setSeed.

I didn't mean RNGs specifically; I was just describing the principle of using a set method for a long piece of code instead of writing all of that code again.

And even if setSeed isn't a good name, what would make for a good one? (We can save the debate on whether you should be able to change the seed of an RNG for another time.)


What you described is not a "setter" just because it has "set" in its name. Of course almost every function, or even every single statement in programming, involves something getting "set" to something else. That doesn't make it a "setter".

edit: Although I think you inadvertently brought up a good question, which is "how much work can a method do before it's not really a 'setter' or 'getter' anymore?" I think you brought up an example where it is clearly doing too much work to be a "setter".

I'd say it's a setter just fine. While it's true that a setter will usually not have all that much code in it, and definitely not a ton of side effects, from an interface standpoint it's still a setter, since you would definitely describe the Random class as something containing a seed property, and this setter sets that value. From an interface standpoint, you don't know whether it's implemented in a complex way.

At least IMO, a setter or getter is defined by their purpose in the interface, not by their actual implementations.
Manit0u
Profile Blog Joined August 2004
Poland17291 Posts
Last Edited: 2013-09-18 08:43:39
September 18 2013 08:41 GMT
#7105
On September 18 2013 14:44 rasnj wrote:
Show nested quote +

I think you are misunderstanding what is expected. std::cin is not a vector; it is a std::istream, so the relevant function is
std::istream::putback
as I remarked in my post. However, this does not actually show up on the console, only in cin's internal buffer, and the user cannot delete it.

What we want is that when we launch, the prompt looks as follows:
> Input a number: 150
If we press backspace we get
> Input a number: 15
If we press 2 at this point we get:
> Input a number: 152
And we can now press enter, at which point cin sends "152" to str.


How about this?


class mystream
{
    stringstream stream;

public:
    template<typename T>
    istream& operator >> (T& x)
    {
        return stream.str().empty() ? cin >> x : stream >> x;
    }

    template<typename T>
    ostream& operator << (const T& x)
    {
        return stream << x;
    }
} myin;


myin << x will print x into the buffer. myin >> x will read from cin.

Damn it's hard... How about just using the GNU readline library?
Time is precious. Waste it wisely.
Amnesty
Profile Joined April 2003
United States2054 Posts
September 18 2013 10:16 GMT
#7106
It's far from pretty...


#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>
#include <windows.h>
#include <conio.h>

template<typename Target, typename Source>
Target lexical_cast(Source arg)
{
    std::stringstream interpreter;
    Target result;
    if(!(interpreter << arg) ||
       !(interpreter >> result) ||
       !(interpreter >> std::ws).eof())
        throw std::runtime_error("lexical_cast - bad cast");

    return result;
}

template<typename Type>
Type default_value(Type val)
{
    std::string defaultval = lexical_cast<std::string>(val);
    auto hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_SCREEN_BUFFER_INFO csbi;
    GetConsoleScreenBufferInfo(hConsole, &csbi);

    auto start_position = csbi.dwCursorPosition;
    std::cout << defaultval;

    while(true)
    {
        auto input = _getch();
        if(input == 13 || input == 230)
        {
            std::cout << std::endl;
            break;
        }
        else if(input == 8) // backspace
        {
            if(defaultval.length() > 0)
            {
                defaultval.pop_back();
                COORD c = start_position;
                c.X += static_cast<SHORT>(defaultval.length());
                SetConsoleCursorPosition(hConsole, c);
                std::cout << ' ';
                SetConsoleCursorPosition(hConsole, c);
            }
        }
        else
        {
            defaultval.push_back(static_cast<char>(input));
            std::cout << static_cast<char>(input);
        }
    }
    return lexical_cast<Type>(defaultval);
}

int main()
{
    std::cout << "Input a filename : ";
    std::string x = default_value<std::string>("autosave.dat");
    std::cout << x << std::endl;

    std::cout << "Input a threshold : ";
    float f = default_value<float>(50.0f);
    std::cout << f << std::endl;

    return 0;
}


But it works OK, I guess. I'm not even using it anymore, however. =\
The sky just is, and goes on and on; and we play all our BW games beneath it.
rasnj
Profile Joined May 2010
United States1959 Posts
September 18 2013 14:04 GMT
#7107
On September 18 2013 17:41 Manit0u wrote:
Show nested quote +


How about this?


class mystream
{
    stringstream stream;

public:
    template<typename T>
    istream& operator >> (T& x)
    {
        return stream.str().empty() ? cin >> x : stream >> x;
    }

    template<typename T>
    ostream& operator << (const T& x)
    {
        return stream << x;
    }
} myin;


myin << x will print x into the buffer. myin >> x will read from cin.

Damn it's hard... How about just using the GNU readline library?

I don't see how you could use that to reproduce what I typed. You still don't have a way to type "150" in code to the console and be able to erase it with backspace once the program is running. In any case, I am pretty sure this is actually impossible in standard C++: there is no utility for writing to the buffer the console holds before enter is pressed. In fact, there is no notion of a console at all, just "standard input" and "standard output". This is why I'm fairly certain it is technically impossible (not just hard) to accomplish this in standard C++.

However, if you are willing to use platform-specific techniques, it can usually be done, either by writing characters into STDIN (making it as though you typed them on the keyboard), which I provided Win32 code for, or by changing the cursor position and deleting characters appropriately, as Amnesty himself just posted a sample of.

From what I've heard, the GNU readline library could probably accomplish this, but I've never used it, and since he is on a specific platform anyway I thought I would recommend the native method rather than a library.
Shield
Profile Blog Joined August 2009
Bulgaria4824 Posts
September 18 2013 14:26 GMT
#7108
I've been wondering: why are C-style pointers missing in languages such as Java? Are object references the better replacement?
rasnj
Profile Joined May 2010
United States1959 Posts
Last Edited: 2013-09-18 14:40:15
September 18 2013 14:36 GMT
#7109
On September 18 2013 23:26 darkness wrote:
I've been wondering: why are C-style pointers missing in languages such as Java? Are object references the better replacement?

It comes down to:
1) It's hard to manage memory, and even though most programmers believe they are superstars, they often make mistakes, so the language doesn't let them manage it.
2) It is much easier to make an efficient garbage collector if you control all memory allocation. It's hard to say when it is safe to destroy an object when at any time a programmer might pull off some fancy pointer arithmetic that points to an object you thought no one was referencing.
3) Why would you want to?

If you can come up with an example where you think they might help you, then we can comment more concretely on that.

EDIT: And no, object reference is not "better". For the sake of just passing objects by reference, object reference is likely better, but pointers give you more flexibility. They are different concepts. It is like asking whether monitors or printers are better. It depends on your application.
Deleted User 101379
Profile Blog Joined August 2010
4849 Posts
September 18 2013 16:58 GMT
#7110
On September 18 2013 12:57 sluggaslamoo wrote:
Show nested quote +
On September 18 2013 08:45 berated- wrote:
On September 18 2013 08:33 sluggaslamoo wrote:

The same principle still applies to lots of things. You want to reduce side effects to a minimum in order to make maintenance easier. I can't even remember the last time I used a setter.


Can you make some clarifications, for those of us in the peanut gallery?

What language do you do most of your programming in? And what is the size of the projects you work on? Also, are you excluding domain objects from your claim that getters and setters are bad?


I use Ruby mostly, but in certain companies I have had to deal with other languages, and I've worked on both small and behemoth monolithic Java projects.

Basically what phar said. Every time you allow something to change, that is one extra thing you have to think about when debugging.

Setting is best left to initialisers or factories; the object then performs services using those parameters and keeps everything self-contained. Instead of using setters, you can create a new object with new parameters.

Think of it as people working within an organisation: you don't set attributes on people, you have different people who are capable of doing different things. "Hire" the people you need, then hand off the work to each of these people to get the work done.

In Ruby I also tend to do a lot of things functionally, which removes the issue of side effects, and means you don't have these hand-offs between objects when really you just want something short and sweet.

It's very hard to do right, and it will take years to figure out how to do it in a way that makes the code look better, not worse, let alone figure it out quickly. Just realise that in the long term you are going to be much better off.

The reason people write setters is that they don't realise other patterns exist that let you perform operations without them.
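The "new object instead of a setter" idea above can be sketched in Ruby. The class name and the linear-congruential step are purely illustrative, not anything from the thread:

```ruby
# Hypothetical example: an immutable RNG wrapper. Instead of a setSeed
# mutator, a "change" derives a brand-new object, so existing references
# can never be altered behind your back.
class Rng
  attr_reader :seed

  def initialize(seed)
    @seed = seed
    freeze # any accidental mutation now raises FrozenError
  end

  # Returns a *new* Rng rather than mutating this one.
  def with_seed(new_seed)
    Rng.new(new_seed)
  end

  # One linear congruential step, purely for illustration.
  def next_value
    (@seed * 1_103_515_245 + 12_345) % 2_147_483_648
  end
end

a = Rng.new(42)
b = a.with_seed(7)
puts a.seed  # still 42 -- nothing could have changed it
puts b.seed  # 7
```

Because the object is frozen at construction, a stray write anywhere in a large codebase fails loudly instead of silently changing the seed.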


While that sounds nice in theory, not using setters at all often produces needless overhead.

Take for example a simple forum with user management. You have users, users have passwords, and passwords are encrypted and occasionally get changed. In your approach, every time a user changes his password, you would clone the old user object with a different password parameter, which is a lot more overhead than taking an existing object and simply changing a property of it through a setter.

A simple "SetPassword(newPassword)" method is easier to read, write and understand, with exactly zero useless overhead. You can even hide the password encrypting, salting, spicing and garnishing behind it, and not provide a getter, because passwords are one-way encrypted and reading the stored value doesn't make sense.

Yes, there are lots of getters/setters that are bad, and I'm usually in favour of only using what is really necessary and not going by the internal data structure of the object, but there are also lots of legitimate reasons to use getters/setters, even in semi-functional languages like Ruby. You just always have to think before blindly creating getters and setters, which admittedly is something not a lot of programmers do. Setters should never be a replacement for a proper constructor nor be required to initialize an object; that should always happen in the constructor so the object always has a valid state.
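Morfildur's SetPassword point, a setter that hides hashing and salting and deliberately has no matching getter, looks roughly like this in Ruby. SHA-256 from the standard library is used only so the sketch is self-contained; a real system would use a slow password hash such as bcrypt:

```ruby
require 'digest'
require 'securerandom'

# Sketch: the setter encapsulates salting and hashing, and there is no
# getter, since the stored value is one-way. Class and method names are
# hypothetical; SHA-256 stands in for a proper password hash like bcrypt.
class User
  def initialize(name, password)
    @name = name
    set_password(password)
  end

  def set_password(plaintext)
    @salt = SecureRandom.hex(16)
    @password_hash = Digest::SHA256.hexdigest(@salt + plaintext)
  end

  def authenticate?(attempt)
    @password_hash == Digest::SHA256.hexdigest(@salt + attempt)
  end
end

u = User.new('slugga', 'hunter2')
puts u.authenticate?('hunter2')  # true
puts u.authenticate?('wrong')    # false
```

Callers never see the salt or the hash; the only read access is the yes/no answer from authenticate?.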
Shikada
Profile Joined May 2012
Serbia976 Posts
September 18 2013 21:00 GMT
#7111
There is nothing inherently wrong with getters and setters, but like many things in programming, they are overused in the industry.

There are some easy rules to follow for the most common languages. In Java, for example, never use public fields. You want to use getters and setters because you don't want to refactor all your code later if you decide to change something from a public field to a method. If you want to expose a field, use a getter every time. Think really carefully before making a setter for anything. Do not just use Eclipse to generate them for every field you have; that is the worst. Every setter makes your class that much more mutable and more complicated. Always use final for fields that won't change. Managing what is mutable and what isn't is a key point of proper class design.
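The same rule translates to Ruby terms: default to read-only exposure and add mutation only as a deliberate, named operation. A small sketch (the Account class is hypothetical):

```ruby
# Default to attr_reader (a getter); generate a writer only when
# mutation is genuinely part of the design.
class Account
  attr_reader :id, :balance   # readable from outside, not writable

  def initialize(id, balance)
    @id = id
    @balance = balance
  end

  # A named operation instead of a raw balance= setter: the invariant
  # (no overdraft) lives in exactly one place.
  def withdraw(amount)
    raise ArgumentError, 'insufficient funds' if amount > @balance
    @balance -= amount
  end
end

acct = Account.new(1, 100)
acct.withdraw(30)
puts acct.balance        # 70
# acct.balance = 0       # NoMethodError: no writer was generated
```

Reaching for attr_accessor everywhere is the Ruby equivalent of letting Eclipse generate a setter for every field.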

It's easier in a language like C#, because there you have Properties. They are a uniform and standard way to abstract access to your class fields. You want to expose a private field? Just pass it along through a Property of the same name. You decide later that there should be some computation as well during getting and setting? Just add it to the Property. Nice and easy, although the name is confusing and overloaded, but you get used to that.

This is called the Uniform access principle:
http://en.wikipedia.org/wiki/Uniform_access_principle

It's even easier in Ruby because, like the Eiffel programming language that originated the principle, it has absolutely uniform access to its fields and computations. So in Ruby your method for getting field x of a class will simply be called x. When you call it, it looks like you are using a public field, like SomeObject.x. That is not the case, however, and that is the point of the Uniform access principle. It might be just a method passing along a private field named x, or it could be any kind of computation; you can't tell from the call site. This lack of expressiveness bothers some people, but it's more abstract for sure, and is slowly becoming the norm.
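Uniform access in Ruby, concretely: the two classes below (both hypothetical) expose x with identical call syntax, even though one reads a stored field and the other computes the value, so a field can later become a computation without touching any caller:

```ruby
# Uniform Access Principle in Ruby: callers write point.x either way.
class StoredPoint
  def initialize(x)
    @x = x
  end
  attr_reader :x          # x reads a stored instance variable
end

class ComputedPoint
  def initialize(doubled)
    @doubled = doubled
  end

  def x                   # x is a computation -- identical call syntax
    @doubled / 2
  end
end

puts StoredPoint.new(3).x    # 3
puts ComputedPoint.new(6).x  # 3
```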

You can't have this in a language like Java. For example, if in a class you have a field:
private int number;

You can then have getter and setter methods like this:
public int number() { return this.number; }
public void number(int input) { this.number = input; }

And call them like this:
System.out.print(SomeObject.number()); // get
SomeObject.number(5); // set

(That overloaded pair is legal Java, incidentally: overloads with different parameter lists may have different return types; only overloading by return type alone is forbidden.) But the parentheses are always required at the call site, so you can never swap a public field for a method, or vice versa, without changing every caller. Just one of many limitations of Java.

I hope that clears it all up.
Kambing
Profile Joined May 2010
United States1176 Posts
September 18 2013 21:25 GMT
#7112
On September 18 2013 23:36 rasnj wrote:
Show nested quote +
On September 18 2013 23:26 darkness wrote:
I've been wondering, what's the reason C-style pointers are missing in languages such as Java? Are object references the better replacement?

It comes down to:
1) It's hard to manage memory, and even though most programmers believe they are superstars, they often make mistakes, so we don't let them.
2) It is much easier to make an efficient garbage collector if you control all memory allocation. It's hard to say when to destroy an object when at any time a programmer might pull off some fancy pointer arithmetic that points to an object you thought no one was referencing.
3) Why would you want to?

If you can come up with an example where you think they might help you, then we can comment more concretely on that.

EDIT: And no, object reference is not "better". For the sake of just passing objects by reference, object reference is likely better, but pointers give you more flexibility. They are different concepts. It is like asking whether monitors or printers are better. It depends on your application.


Make sure to distinguish between pointers vs. references on the one hand and manual memory management on the other. C is a language that contains explicit pointers and manual memory management, but the two concepts need not be intertwined, i.e., you can have a language with manual memory management but no notion of a C-like pointer. In particular, references are merely a restricted version of pointers. That is, a reference is a pointer that cannot be re-seated, i.e., made to point to another object/chunk of memory, once it is set initially.

Manual memory management aside, the behavior of being able to re-assign a pointer is seldom necessary in most domains. When it is necessary, e.g., iterating over an array, specialized constructs such as array indexing (distinct from pointer manipulation) cover those situations in a safer, more direct manner than pointers can.
Kambing
Profile Joined May 2010
United States1176 Posts
September 18 2013 22:45 GMT
#7113
And also, the conversation about getters and setters is confusing two distinct concepts:

(1) Getters/setters and properties. Setters and getters are methods that wrap properties of an object, typically abstracting away the fact that they are "backed" by an underlying instance variable/field. This is useful when you want to enforce invariants about those properties (e.g., a setter that ensures that a person's age should always be positive) or control the behavior of an object with respect to those properties (e.g., making a getter private, effectively making the corresponding property "write only").
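The invariant-enforcing setter mentioned above (a person's age must be positive) is a two-liner in Ruby. The Person class is a hypothetical illustration of Kambing's example:

```ruby
# A writer method that enforces an invariant: this is the legitimate,
# non-trivial kind of setter.
class Person
  attr_reader :age

  def initialize(age)
    self.age = age            # route construction through the same check
  end

  def age=(value)
    raise ArgumentError, 'age must be positive' unless value.positive?
    @age = value
  end
end

person = Person.new(30)
person.age = 31
# Person.new(-5)  # raises ArgumentError: age must be positive
```

Contrast this with a trivial `attr_accessor :age`, which adds no value beyond a public field.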

People deride getters/setters (in the "post-Java" world) because they are emblematic of the over-engineering mentality that they believe characterizes many object-oriented programs. Trivial getters/setters, i.e., those that simply get and set some underlying instance variable/field, do not add immediate value and most likely will never be changed. Or, if they are changed, they will require invasive enough changes elsewhere that the benefit of the abstraction is negligible.

(2) Mutation. Simply put, the ability to change the value of variables in a program can ruin your ability to locally reason about the correctness of your programs. Furthermore, mutation of data in many cases can be considered a low-level concern relative to the high-level concern of specifying, in a declarative manner, how a computation ought to be carried out. In virtually all programming languages and situations, you want to limit your use of mutation as much as possible in order to preserve local reasoning and be able to state clearly and concisely "what" your code does rather than "how" it should operate.
Azerbaijan
Profile Blog Joined January 2010
United States660 Posts
September 18 2013 23:28 GMT
#7114
So from all this discussion about getters and setters I have gathered that I should generally avoid them except in very specific situations where their use will not cause problems later?
zzdd
Profile Joined December 2010
United States484 Posts
September 18 2013 23:36 GMT
#7115
On September 19 2013 08:28 Azerbaijan wrote:
So from all this discussion about getters and setters I have gathered that I should generally avoid them except in very specific situations where their use will not cause problems later?


You should use them in most situations, but it depends on the language and what you're using them for. Programmers are very nitpicky when it comes to certain things, and if something doesn't work in certain situations, it becomes their mission in life to make sure that no one ever uses it, even where it would be useful.
Kambing
Profile Joined May 2010
United States1176 Posts
September 19 2013 00:49 GMT
#7116
On September 19 2013 08:36 zzdd wrote:
Show nested quote +
On September 19 2013 08:28 Azerbaijan wrote:
So from all this discussion about getters and setters I have gathered that I should generally avoid them except in very specific situations where their use will not cause problems later?


You should use them in most situations, but it depends on the language and what you're using them for. Programmers are very nitpicky when it comes to certain things, and if something doesn't work in certain situations, it becomes their mission in life to make sure that no one ever uses it, even where it would be useful.


The higher-order bit here is minimizing state in your program. Try to avoid extraneous state (e.g., prefer iterators/algorithms in C++ or comprehensions in Python over raw for-loops), and when you do need state, limit the scope of who can observe and modify it (e.g., a getter with a private setter). Getters and setters, in this light, become a language-dependent implementation detail whose use depends on the context you are programming in.
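The "extraneous state" point in Ruby terms: the loop version below carries a mutable accumulator and an index counter, two extra things to reason about while debugging, while the declarative version states only what is computed. The tax example is hypothetical:

```ruby
prices = [10, 25, 40]

# Stateful version: a mutable result array plus a loop counter.
with_tax_loop = []
i = 0
while i < prices.length
  with_tax_loop << (prices[i] * 1.1).round(2)
  i += 1
end

# Declarative version: no counter, no accumulator to track.
with_tax = prices.map { |p| (p * 1.1).round(2) }

puts with_tax.inspect  # [11.0, 27.5, 44.0]
```

Both produce the same array; the second leaves nothing mutable in scope afterwards except the result itself.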
sluggaslamoo
Profile Blog Joined November 2009
Australia4494 Posts
Last Edited: 2013-09-19 01:37:28
September 19 2013 01:16 GMT
#7117
On September 19 2013 01:58 Morfildur wrote:
Show nested quote +


Not trying to sound arrogant or anything, but I think it's funny to say "it sounds nice in theory" when it is actually something I do in my profession all the time.

For database work you should have a library which prevents you from having to use or write setters. Good libraries provide an abstraction that takes you away from the process of managing data manually, mitigating the risk of human error. You declare your database strategy, your validations, and your data processing methods, and the library handles the rest.

Sure in the end, you are still "setting" data but because it is all automated and declarative you are reducing side-effects to a minimum.

For example, you will first encapsulate the process in a transaction, which gives you fail-fast and rollback capabilities. Second, instead of using setPassword, you would use a system that takes form data and processes it automatically.

Any fields that are changed are processed within a transaction; if there are any errors it is rolled back, and if not the transaction is committed and any processing required, e.g. creating an encrypted password, is executed without procedural programming involved.

Feel free to keep shooting me examples. Keep in mind we are talking in the context of using OOP, which is mostly useful for websites and business applications. The only place I can think of where setters will be used extensively is an application where full OOP is not ideal, like traditional games.

EDIT:
I think I should clarify and answer your question more specifically.

Sometimes it's impossible to avoid side-effects or setting data, and your example is a good one. When you have to save user data, it's physically impossible not to "set" data; it wouldn't make sense otherwise.

But as I said earlier, the only reason it doesn't seem to make sense to avoid a "setter" in this example is that you haven't learned a pattern for doing so. There is always more overhead in writing and using setters.

To provide a solution to your example specifically:
- I would use a database library which handles my form data.
- I would write a callback in my database layer which automatically encrypts and saves the password whenever it changes.
- When the user submits form data which includes a password, the database library executes the callback, which encrypts the password.

This is much more elegant than using setPassword(); I'm sure you understand the headaches involved when it comes to maintaining a very large program. You would have to make sure setPassword() is called in all the appropriate user-save methods, finding those methods could take you hours in a corporate business application, and you would have to make sure that every other programmer knows to use it as well.

Having an on_change callback solves all these problems: if someone puts a password field in a new page that I don't know of, it will still take effect. Now I am completely abstracted away from the database management process. I don't need to use set() or save() anywhere in my code, which makes my life much easier when I run into any problems.
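The shape of that callback approach can be sketched without a real ORM. The toy UserRecord below stands in for an ActiveRecord-style library (whose real API differs), with a before_save hook hashing the password on every save path; SHA-256 is used only to keep the sketch self-contained:

```ruby
require 'digest'

# Toy stand-in for the library behaviour described above: a before_save
# callback hashes the password whenever it is present, so no caller ever
# invokes setPassword explicitly. Names and API are hypothetical.
class UserRecord
  attr_reader :attributes

  def initialize
    @attributes = {}
  end

  # Form data arrives as a hash; the "library" assigns it wholesale.
  def assign(form_data)
    @attributes.merge!(form_data)
  end

  def save
    before_save
    # ... a real library would now write @attributes to the database
    true
  end

  private

  # The on-change hook: it runs on every save path, including pages
  # added later that the original author never saw.
  def before_save
    if (plain = @attributes.delete(:password))
      @attributes[:password_digest] = Digest::SHA256.hexdigest(plain)
    end
  end
end

user = UserRecord.new
user.assign(name: 'slugga', password: 'hunter2')
user.save
puts user.attributes.key?(:password)  # false -- plaintext never persists
```

Whatever page submitted the form, the plaintext is gone and the digest is stored, with no per-call-site setter discipline required.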
Come play Android Netrunner - http://www.teamliquid.net/forum/viewmessage.php?topic_id=409008
berated-
Profile Blog Joined February 2007
United States1134 Posts
September 19 2013 01:47 GMT
#7118
On September 19 2013 10:16 sluggaslamoo wrote:
Show nested quote +


This assumes that you are able to use a library like the ones common in Ruby, Python, or Hibernate for Java. Sometimes the SQL you have is so complex, because you don't control the schema, that this isn't possible. That strategy is great for simple schemas, not so much when the schemas get super complex. This isn't an excuse to allow poorly designed code, just sayin'...

The thing to note here is that everything in development is about trade-offs. Even Holub's article on JavaWorld touches on this. Encapsulating all the code in the model is great because it lets you touch only the parts of the code that you want to touch. Using libraries that handle this is even better.

What if you decided, though, that relational databases are complete shit and NoSQL is the way to go? You would either have to redo all the libraries that were written for SQL, or you'd hope you wrote your own DAO layer, so you can swap that out instead of having to touch all the rest of your model code.
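That DAO idea in miniature: model code depends only on a small duck-typed store interface, so swapping the SQL adapter for a document store means replacing one class. Both stores below are in-memory toys standing in for real back ends; all names are hypothetical:

```ruby
class SqlUserStore
  def initialize
    @rows = {}
  end

  def put(id, data)   # stands in for an INSERT/UPDATE
    @rows[id] = data
  end

  def get(id)         # stands in for a SELECT
    @rows[id]
  end
end

class DocumentUserStore  # same interface, different "back end"
  def initialize
    @docs = {}
  end

  def put(id, data)
    @docs[id] = data
  end

  def get(id)
    @docs[id]
  end
end

class UserService
  def initialize(store)
    @store = store     # the only coupling point to storage
  end

  def rename(id, name)
    user = @store.get(id) || {}
    @store.put(id, user.merge(name: name))
  end

  def name_of(id)
    (@store.get(id) || {})[:name]
  end
end

svc = UserService.new(SqlUserStore.new)  # or DocumentUserStore.new
svc.rename(1, 'rasnj')
puts svc.name_of(1)  # rasnj
```

Only the constructor argument changes when the back end does; UserService itself never knows which store it is talking to.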

Design is hard.
sluggaslamoo
Profile Blog Joined November 2009
Australia4494 Posts
Last Edited: 2013-09-19 02:07:25
September 19 2013 02:05 GMT
#7119
On September 19 2013 10:47 berated- wrote:
Show nested quote +
On September 19 2013 10:16 sluggaslamoo wrote:
On September 19 2013 01:58 Morfildur wrote:
On September 18 2013 12:57 sluggaslamoo wrote:
On September 18 2013 08:45 berated- wrote:
On September 18 2013 08:33 sluggaslamoo wrote:
On September 17 2013 16:20 WindWolf wrote:
On September 17 2013 15:06 sluggaslamoo wrote:
On September 17 2013 14:56 WindWolf wrote:
One reason I'm wondering why people hate setters and getters:

Lets assume that you've written a RNG that has complex internal code (just as an example) and requires lots of code just to set a seed (once again, just as an example). In that case, I would find it easier to read setSeed() than reading all of the code required to set the seed itself


There are plenty of reasons. In this case it could make the code hard to debug. Someone could easily already be using setSeed() deeply nested within some code file.

Then when your code doesn't randomize differently, you are sitting their scratching your head why your RNG code didn't work when really it was setSeed causing the issue.

The seed should be set in the initialiser or as a config variable or both. You aren't going to need to set the seed on the fly for RNG in a game. When the game starts you initialise the RNG object with the seed using the game's config table, after that it doesn't need to change until the next game. This allows you to isolate the issue and make debugging much easier.

IMO it is also simpler and much more intuitive than using setSeed.

I didn't meant about RNG's specifically, I was just the principal of using a set method for a long piece of code instead of writing all of that code again.

And even is setSeed isn't a good name, what would make for a good name (we can take the debate on whenever you should be able to change the seed of a RGN or not another time)


The same principle still applies for lots of things. You want to reduce the amount of side effects to a minimum in order to make maintenance easier. I can't even remember the last time I have used a setter.


Can you make some clarifications, for those of us in the peanut gallery.

What language do you do most of your programming in? And what are the size of the projects you work on? Also, are you excluding domain objects from outside of your discussions of getters and setters are bad.


I use Ruby mostly, but in certain companies I have had to deal with other languages, and I've worked on both small and behemoth monolithic Java projects.

Basically what phar said. For every time you allow something to change, it means one extra thing you have to think of when debugging.

Setting is best left to initialisers or factories, then the object performs services using those parameters and keeps everything self contained. Instead of using setters, you can create a new object with new parameters.

Think of it as people working within an organisation: you don't set attributes on people; you have different people who are capable of doing different things. "Hire" the people you need, then hand off the work to each of these people to get the work done.

In Ruby I also tend to do a lot of things functionally, which removes the issue of side effects, but you also don't have these hand offs between objects when really you just want something short and sweet.

It's very hard to do right, and it will take years to figure out how to do it in a way that makes the code look better, not worse, let alone figure it out quickly. Just realise that in the long term you are going to be much better off.

The reason people write setters is because they don't realise other patterns exist to allow you to perform operations without them.


While that sounds nice in theory, not using setters at all often produces needless overhead.

Take for example a simple forum with user management. You have users, users have passwords, and passwords are encrypted and occasionally get changed. In your example, every time a user changes his password, you would clone the old user object with a different password parameter, which is a lot more overhead than taking an existing object and simply changing a property on it through a setter.

A simple "SetPassword(new Password)" method is easier to read, write and understand, with exactly zero useless overhead. You can even hide the password encrypting, salting, spicing and garnishing behind it and not use a getter, because passwords are one-way encrypted and reading them doesn't make sense.
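For illustration, here is what such a setter might look like in Ruby, using the stdlib Digest for the one-way step. The class is a toy: a real application would use bcrypt and a per-user random salt rather than the fixed demo salt here.

```ruby
require 'digest'

class User
  def initialize(password)
    set_password(password)
  end

  # The setter hides salting and one-way hashing; there is deliberately
  # no getter, since the plaintext is never stored.
  def set_password(plaintext)
    @salt = "static-salt-for-demo" # real code: a random per-user salt
    @password_hash = Digest::SHA256.hexdigest(@salt + plaintext)
  end

  def authenticate?(attempt)
    Digest::SHA256.hexdigest(@salt + attempt) == @password_hash
  end
end

u = User.new("hunter2")
u.authenticate?("hunter2") # => true
u.set_password("new-pass")
u.authenticate?("hunter2") # => false
```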

Yes, there are lots of getters/setters that are bad, and I'm usually in favour of only using what is really necessary and not going by the internal data structure of the object, but there are also lots of legitimate reasons to use getters/setters, even in semi-functional languages like Ruby. You just always have to think before blindly creating getters and setters, which admittedly is something not a lot of programmers do. Setters should never be a replacement for a proper constructor, nor be required to initialize an object; that should always happen in the constructor so the object always has a valid state.


Not trying to sound arrogant or anything, but I think it's funny to say "it sounds nice in theory" when it is actually something I do in my profession all the time.

For database work you should have a library which saves you from having to use or write setters. Good libraries provide an abstraction that takes you away from managing data manually, mitigating the risk of human error. You then declare your database strategy, your validations, and your data processing methods, and the library handles the rest.

Sure, in the end you are still "setting" data, but because it is all automated and declarative you are reducing side-effects to a minimum.

For example, you will first be encapsulating the process in a transaction, which gives you fail-fast and rollback capabilities. Second, instead of using setPassword, you would use a system which takes form data and processes it automatically.

Any fields that are changed are processed within a transaction: if there are any errors it is rolled back; if not, the transaction is committed and any processing required, e.g. creating an encrypted password, is executed without procedural programming involved.
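The transactional flow can be sketched without a real database; the Store class below is a toy stand-in, not any library's API. Changes are staged on a copy, then either committed atomically or discarded when the block raises.

```ruby
# A toy in-memory store with fail-fast transactions: the block works on a
# staged copy, and the commit only happens if the block raised nothing.
class Store
  def initialize
    @data = {}
  end

  def [](key)
    @data[key]
  end

  def transaction
    staged = @data.dup
    yield staged
    @data = staged # commit; skipped (and @data left untouched) on error
  end
end

store = Store.new
store.transaction { |t| t[:name] = "alice" }
store[:name] # => "alice"

begin
  store.transaction do |t|
    t[:name] = "bob"
    raise "validation failed" # simulates a failed validation
  end
rescue RuntimeError
  # the failed transaction rolled back; store[:name] is still "alice"
end
```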

Feel free to keep shooting me examples. Keep in mind we are talking in the context of OOP, which is mostly useful for websites and business applications. The only thing I can think of where setters will be used extensively is an application where full OOP is not ideal, like traditional games.

EDIT:
I think I should clarify and answer your question more specifically.

Sometimes it's impossible to avoid side-effects or setting data, and your example is a good one. When you have to save user data, it's physically impossible not to "set" data; it wouldn't make sense not to.

But as I said earlier, the only reason a "setter" seems unavoidable in this example is that you haven't learned a pattern to avoid it. There is always more overhead in writing and using setters.

To provide a solution to your example specifically:
- I would use a database library which handles my form data.
- I write a callback in my database layer which automatically encrypts and saves the password whenever it changes.
- When the user submits form data which includes a password, the database library executes the callback, which encrypts the password.
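The callback approach can be sketched with a hand-rolled before_save hook, imitating the declarative style of ORMs like ActiveRecord. The Model base class here is a toy, not a real library API:

```ruby
require 'digest'

# A minimal model base class with a before_save hook, imitating the
# declarative callback style of ORMs such as ActiveRecord.
class Model
  def self.before_save(&block)
    (@callbacks ||= []) << block
  end

  def self.callbacks
    @callbacks || []
  end

  def initialize
    @attrs = {}
  end

  def [](key)
    @attrs[key]
  end

  def []=(key, value)
    @attrs[key] = value
  end

  def save
    self.class.callbacks.each { |cb| instance_exec(&cb) }
    true # a real ORM would write to the database here
  end
end

class User < Model
  # Whenever a user is saved with a plaintext password, it is hashed
  # automatically -- no caller ever has to remember to call setPassword.
  before_save do
    if self[:password]
      self[:password_hash] = Digest::SHA256.hexdigest(self[:password])
      self[:password] = nil
    end
  end
end

u = User.new
u[:password] = "secret"
u.save
u[:password]      # => nil, the plaintext is gone
u[:password_hash] # a SHA-256 hex digest, set automatically
```

Whoever adds a new page with a password field never needs to know the hashing step exists; saving triggers it.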

This is much more elegant than using setPassword(), and I'm sure you understand the headaches involved in maintaining a very large program. You would have to make sure setPassword() is called in all the appropriate user save paths, finding those would take you hours in a corporate business application, and you would have to make sure every other programmer knows to use it as well.

Having an on_change callback solves all these problems: if a person adds a password field on a new page that I don't know of, it will still take effect. Now I am completely abstracted away from the database management process. I don't need to use set() or save() anywhere in my code, which makes my life much easier when I run into problems.


This assumes that you are able to use a library like the ones used in Ruby or Python, or Hibernate for Java. Sometimes the SQL you have is so complex, because you don't control the schema, that this isn't possible. The strategy is great for simple schemas, not so much when the schemas get super complex. This isn't an excuse to allow poorly designed code, just sayin'...

The thing to note here is that everything in development is about trade-offs. Even Holub's article on JavaWorld touches on this. Encapsulating all the code in the model is great because it allows you to only touch the parts of the code that you want to touch. Using libraries that handle this is even better.

What if you decided, though, that relational databases are complete shit and NoSQL is the way to go? You would either have to redo all the libraries that were written for SQL, or you hope that you wrote your own DAO layer, and now you can just swap that out instead of having to touch all the rest of your model code.
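A miniature version of that DAO idea in Ruby (class names invented for illustration): model code depends only on the DAO's save/find interface, so swapping the storage backend means writing one new class, not touching the models.

```ruby
# Model code depends only on the duck-typed DAO interface (save/find),
# so a SQL-backed DAO could be swapped for a NoSQL-backed one without
# changing UserService at all.
class InMemoryUserDao
  def initialize
    @rows = {}
  end

  def save(id, attrs)
    @rows[id] = attrs
  end

  def find(id)
    @rows[id]
  end
end

class UserService
  def initialize(dao)
    @dao = dao # any object responding to save/find will do
  end

  def register(id, name)
    @dao.save(id, { name: name })
  end

  def name_of(id)
    @dao.find(id)[:name]
  end
end

service = UserService.new(InMemoryUserDao.new)
service.register(1, "alice")
service.name_of(1) # => "alice"
```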

Design is hard.


When the schemas get super complex, that's when this system becomes even better; however, there should never really be a time when your schemas get super complex. If that happens you are over-engineering. I've worked with some very complex schemas using this system. I myself like to keep things very simple, but that's probably because things can be much simpler with an ODM than with SQL.

In my case most libraries reserve the same words as active_record, so swapping between a relational database and a document-oriented database is a piece of cake. However, in a large application the company wouldn't bother to give you the time to do this anyway.

All I'm doing is giving an appropriate example, in this case assuming Java. Sure, there are some edge cases where some obscure programming language won't have an appropriate library, but there's nothing to be gained from using those examples in this debate.

It just seems silly to bring up an edge case example where you have to be dead set on writing setters. You may as well bring up C and ask how to write a website in it without using setters, then when I can't answer you just say "hah! see I told you!".

If an obscure language which nobody uses doesn't provide a library, I think many people would appreciate you writing the library yourself and sharing it. Otherwise don't use that language; any problems you face you brought upon yourself.
Come play Android Netrunner - http://www.teamliquid.net/forum/viewmessage.php?topic_id=409008
berated-
Profile Blog Joined February 2007
United States1134 Posts
Last Edited: 2013-09-19 02:41:01
September 19 2013 02:35 GMT
#7120
On September 19 2013 11:05 sluggaslamoo wrote:
Show nested quote +

I guess your edge case is my daily reality, and my edge case is yours. It seems silly to me to talk about running a website and call that the normal case. It's hard to keep running a warehouse a simple process.

I'm by no means saying you're wrong, just observing that making an absolute statement like "getters and setters are bad, and languages that use them are _terrible_ languages" is pretty bold. The only point I'm trying to raise is that everything is about trade-offs, and while one might prefer to avoid getters and setters, doing so should be weighed and debated on a situational basis instead of as an absolute.

Edit: I'm far down on the list though, so, I should probably bow out of this conversation. :D