|
Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace Facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
Yes, if you want Class3 to use the field of Class1 then you should use inheritance: Class3 extends Class1. It isn't going to see "thing" as such, though; it will simply be able to use the myString field you created in Class1.
Alternatively, you could add a 'getter' method to Class1 so that Class3 can read its field.
Your example code doesn't make much sense as written. What exactly are you trying to accomplish? If you really want to learn Java you need to make something concrete; writing random snippets of code isn't going to teach you anything.
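To make the getter suggestion concrete, here is a minimal sketch (hypothetical class and field names, loosely following the naming used in this thread):

class Class1 {
    private String myString = "blah";

    // getter: other classes read the field through this method instead of touching it directly
    public String getMyString() {
        return myString;
    }
}

class Class3 {
    void printIt(Class1 source) {
        System.out.println(source.getMyString());
    }
}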
|
On February 09 2015 13:22 travis wrote: java question
If I pass a reference to another instance as an argument, can I make it so that all of the methods in the new instance will recognize the fields of that reference? For example, say there's a string in the original instance: I can print it from within the constructor of the new instance, but if I try to print it from a method that is called in that constructor, it won't work unless I pass the reference on as a parameter to that method as well. That seems really redundant if I am going to be using a bunch of methods, having to put the reference as a parameter over and over. Does anyone understand what I am talking about?
Why are you calling so many methods from the constructor?
If they are needed in your class then it shouldn't be too redundant if you add your string as a parameter to each one.
The other option is to save the reference as a member of the class.
|
On February 09 2015 03:27 Djagulingu wrote:
On February 09 2015 01:37 Manit0u wrote:
On February 09 2015 00:40 Djagulingu wrote: I have this particular question in my mind: I'm building a web application which, on the server side, uses nodejs to get a constant stream of data from a 3rd party API and store it in my NoSQL database. I have finished developing the server side and now I'll move on to the client side. I'm wondering what I should use for the front end. I'm stuck between:
1- Front end with EmberJS and a JSON API for database connectivity (or whatever Ember offers for that)
2- Same thing as #1, just replace EmberJS with AngularJS
3- Use Express and eliminate the need for a JSON API for DB connectivity.
What are the pros and cons of each one?
You could always try some full stack framework, like MEAN. It uses NoSQL (Mongo), Express, Angular and Node, so you could have it all in one neat package.
I actually want to use couchbase as the NoSQL database because it is lots and lots of times faster when it comes to write operations per second, and that is what my server is going to do (think of it like some guy who picks up a single paper from an endless pile of papers, stamps it and puts it on another pile, and thus I don't want my pile to randomly crash).
A year ago we tried to introduce couchbase at work to replace a MySQL user-relationship table. It worked perfectly in simulation but instantly broke down under live load once it got 1000 write requests per second. We switched back to MySQL fast. Since then, couchbase is a forbidden word.
|
On February 09 2015 17:43 Morfildur wrote: A year ago we tried to introduce couchbase at work to replace a MySQL user-relationship table. It worked perfectly in simulation but instantly broke down under live load once it got 1000 write requests per second. We switched back to MySQL fast. Since then, couchbase is a forbidden word.
Is it that bad? I mean, all those benchmarks and shit say that couchbase is much, much better than mongo and cassandra when the number of client threads goes up to 4 digits. I didn't stress-test it myself though.
|
On February 09 2015 13:50 travis wrote:
class class1 {
    String mystring = "blah";
}

class class2 {
    class1 x = new class1();
    class3 y = new class3(x);
}

class class3 {
    class3(class1 thing) {
        System.out.print(thing.mystring);
        method();
    }

    void method() {
        System.out.print(thing.mystring); // 'thing' is not visible here
    }
}
Hopefully the code is good here; I made it as simple as possible. In this example I believe that mystring would print out as "blah" the first time, but the second time, method will not see thing anymore, so it won't print mystring; it would just be an error. I am wondering if there is an easy way to make method, and any other methods in class3, see thing.
class class3 {
    class1 thing;

    class3(class1 thing) {
        System.out.print(thing.mystring);
        this.thing = thing; // save the reference as a field so the other methods can see it
        method();
    }

    void method() {
        System.out.print(thing.mystring);
    }
}
There are reasons against doing it this way, but it's the "easy way".
|
On February 09 2015 20:48 Djagulingu wrote: Is it that bad? I mean, all those benchmarks and shit say that couchbase is much, much better than mongo and cassandra when the number of client threads goes up to 4 digits. I didn't stress-test it myself though.
http://labs.octivi.com/handling-1-billion-requests-a-week-with-symfony2/
That's how you do it.
|
On February 10 2015 02:07 Manit0u wrote: http://labs.octivi.com/handling-1-billion-requests-a-week-with-symfony2/
That's how you do it.
That's just 1650 requests a second, and most likely most of them are read-only, though the actual use case wasn't explained. It's easy to make numbers sound big and impressive when most of what you're doing is sending the same page out again and again from a cache.
|
Does anyone here have experience with SSL certificates?
I have a database with 430k entries of PEM-encoded certificates that I need to decode and store back into another DB with a more logical structure (issuer, encryption algorithm, start date, end date and so on). I can decode a .crt file into text and then parse that text to find the info I need. However, having to create a .crt file for every entry just to read it, parse the result, and store it in my database seems terribly inefficient.
I'm inexperienced with both SSL certificates and data sets of this size, so I don't really know where I'm going, to be honest. If anyone has a vague idea, I'm not looking for a step-by-step, just some random thoughts if you have any.
|
travis, you may benefit from a read through Effective Java by Bloch.
|
This might be a long shot but I guess it's worth a try:
At work we are officially still using VS 2008 (C++), which is obviously quite dated. Last Friday I was kinda out of work for the week and started investigating how to make our code build with VS 2013. Most of our 3rd-party libraries are more or less up to date and built immediately. Others (some of our older solutions still use Qt 4.4.3 and Coin3D/SoQt) were a bit more tricky, but I managed to make those build too. So after spending some hours over the weekend and the better part of today, I actually managed to make everything build. I haven't run any tests yet though, so I'm not sure what actually broke.
I found some lists on the internet giving hints about what to look for - some includes that changed, the changed meaning of the auto keyword, stuff like that. I have compiled a list of things to check and fix if required. But has anyone here actually done this before and run into some secret traps that I might also fall into?
|
On February 10 2015 04:46 SpiZe wrote: Does anyone here have experience with SSL certificates?
I have a database with 430k entries of PEM-encoded certificates that I need to decode and store back into another DB with a more logical structure (issuer, encryption algorithm, start date, end date and so on). I can decode a .crt file into text and then parse that text to find the info I need. However, having to create a .crt file for every entry just to read it, parse the result, and store it in my database seems terribly inefficient.
I'm inexperienced with both SSL certificates and data sets of this size, so I don't really know where I'm going, to be honest. If anyone has a vague idea, I'm not looking for a step-by-step, just some random thoughts if you have any.
Can you just pipe the data through with a script, without creating all the extra files? The bottleneck would probably be writing everything to disk, and you can avoid that, since an SSL cert is more than small enough to be held in RAM.
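For what it's worth, most languages can parse a PEM blob entirely in memory, so you never need temporary .crt files. A minimal Java sketch (assuming the entries are standard X.509 certificates; the class name and the read-from-stdin part are just stand-ins for illustration):

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class PemFields {
    // Parse one PEM-encoded certificate straight from memory, no temp files on disk.
    static X509Certificate parse(String pem) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        return (X509Certificate) cf.generateCertificate(
                new ByteArrayInputStream(pem.getBytes(StandardCharsets.US_ASCII)));
    }

    public static void main(String[] args) throws Exception {
        // In the real job the PEM text would come from the source DB rows; stdin is just a stand-in.
        String pem = new String(System.in.readAllBytes(), StandardCharsets.US_ASCII);
        X509Certificate cert = parse(pem);
        // The kind of fields you'd store in the new, more structured table:
        System.out.println("issuer:     " + cert.getIssuerX500Principal());
        System.out.println("sig alg:    " + cert.getSigAlgName());
        System.out.println("not before: " + cert.getNotBefore());
        System.out.println("not after:  " + cert.getNotAfter());
    }
}

430k entries is small enough that a single loop over the rows should be fine; the parsing itself should be far cheaper than any file round-trip.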
On February 08 2015 18:08 nunez wrote: help please.
i need a word with the following meaning: a statement that is not an expression. all i got is: NonExpression... NExpression... Nexpression...
for now i'm going with Nexpression.
I don't know the context of what you're doing, but
premise, statement, state, set, condition?
|
Hey all - I barely code Java so this is probably old news, but I was looking at some Java collections extensions - so nice to use~ It's like my Python (no comprehensions but I'll survive) plus autocompletion thanks to static types. Plus there are some fun exercises they provide to showcase it. I was looking specifically at goldman-sachs collections, but I guess there's also Guava (with a whole lot more than just collections) and fastutil, among others.
The gs collections api (from my 2 minutes of using it) just seems so nice to use...
Also, frustrations from work: everyone using HashMap in a multithreaded environment = infinite loop = sadness
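The fix, for the record, is usually just to share a ConcurrentHashMap instead of a plain HashMap. A minimal sketch, with made-up names:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SharedCounts {
    // Safe to update from many threads. A plain HashMap can corrupt its internal
    // buckets during a concurrent resize, which is where that infinite loop comes from.
    private final Map<String, Integer> counts = new ConcurrentHashMap<>();

    void record(String key) {
        counts.merge(key, 1, Integer::sum); // atomic per-key update
    }

    int get(String key) {
        return counts.getOrDefault(key, 0);
    }
}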
|
On February 10 2015 02:50 Morfildur wrote: That's just 1650 requests a second, and most likely most of them are read-only, though the actual use case wasn't explained. It's easy to make numbers sound big and impressive when most of what you're doing is sending the same page out again and again from a cache.
It was mentioned that it was some e-commerce stuff. The focus of the article was actually the design part, not actual implementation.
|
On February 10 2015 15:05 teamamerica wrote: Hey all - I barely code Java so this is probably old news, but I was looking at some Java collections extensions - so nice to use~ It's like my Python (no comprehensions but I'll survive) plus autocompletion thanks to static types. Plus there are some fun exercises they provide to showcase it. I was looking specifically at goldman-sachs collections, but I guess there's also Guava (with a whole lot more than just collections) and fastutil, among others. The gs collections api (from my 2 minutes of using it) just seems so nice to use... Also, frustrations from work: everyone using HashMap in a multithreaded environment = infinite loop = sadness
It seems very nice yes... Too bad they didn't integrate with Guava - then it would've been perfect :D
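For anyone who hasn't tried it, here's a tiny sketch of the kind of boilerplate Guava's collections remove (the names below are made up):

import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;

public class GuavaDemo {
    public static void main(String[] args) {
        // One key, many values: no manual "get the list, create it if absent" dance.
        Multimap<String, String> postersByLanguage = ArrayListMultimap.create();
        postersByLanguage.put("Java", "travis");
        postersByLanguage.put("Java", "teamamerica");
        postersByLanguage.put("Vala", "Manit0u");
        System.out.println(postersByLanguage.get("Java")); // [travis, teamamerica]
    }
}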
|
"Anti-pattern" new hit singleton by DI SOLID GRASP feat. MVC DRY KISS
(image: http://i.imgur.com/VXOAWOk.gif)
On a side note, I've found something cool:
class Demo.HelloWorld : GLib.Object {
    public static int main(string[] args) {
        stdout.printf("Hello, World\n");
        return 0;
    }
}
It's like if C had a baby with Java. Just discovered it but I'm liking what I'm seeing so far.
/* atomic types */
unichar c = 'u';
float percentile = 0.75f;
const double MU_BOHR = 927.400915E-26;
bool the_box_has_crashed = false;

/* defining a struct */
struct Vector { public double x; public double y; public double z; }

/* defining an enum */
enum WindowType { TOPLEVEL, POPUP }
It's a language called Vala.
|
I have an interesting problem that I'm trying to think through:
At work we need a realtime scheduling system where events can be scheduled to happen e.g. 5 seconds from now or theoretically 5 months, 4 days, 3 hours, 2 minutes and 1 second from now, though most events are scheduled for at most a week in the future. The number of scheduled events potentially numbers in the 1000s for every single second, i.e. a span of 10 seconds can have a total of 50000+ scheduled events. Past events have to be processed as long as they aren't past their TTL, i.e. when the schedule misses 10s due to a restart or hiccup, all those events still have to get processed.
Until recently we used a database with an "update X set processed=workerid where processed is null" statement that the ~100 worker processes executed whenever they ran out of jobs. It was far too slow, even though a garbage collector kept the table as small as possible by removing all finished events. We then switched to a Redis-based solution, which can handle it but occasionally runs into performance issues as well.
I've now decided that this would be a nice problem to use to get back into C++, so I've started developing a C++ service that takes care of the scheduling. I'm just thinking about how to store the data internally so it is efficient for those large numbers of events. The service has just 2 functions: ScheduleEvent and GetNextEvent. ScheduleEvent adds an event at any point in time; GetNextEvent returns the next scheduled event based on time, variable time-to-live and event priority.
I'm currently using a std::map<int, std::vector<Item>*> with the map key being the timestamp and the vector containing the items, but I'm having trouble with the time it takes to seek through the vectors, especially after the workers stopped processing for a few seconds and many events went past their TTL.
|
On February 12 2015 21:58 Morfildur wrote: I have an interesting problem that I'm trying to think through:
At work we need a realtime scheduling system where events can be scheduled to happen e.g. 5 seconds from now or theoretically 5 months, 4 days, 3 hours, 2 minutes and 1 second from now, though most events are scheduled for at most a week in the future. The number of scheduled events potentially numbers in the 1000s for every single second, i.e. a span of 10 seconds can have a total of 50000+ scheduled events. Past events have to be processed as long as they aren't past their TTL, i.e. when the schedule misses 10s due to a restart or hiccup, all those events still have to get processed.
Until recently we used a database with an "update X set processed=workerid where processed is null" statement that the ~100 worker processes executed whenever they ran out of jobs. It was far too slow, even though a garbage collector kept the table as small as possible by removing all finished events. We then switched to a Redis-based solution, which can handle it but occasionally runs into performance issues as well.
I've now decided that this would be a nice problem to use to get back into C++, so I've started developing a C++ service that takes care of the scheduling. I'm just thinking about how to store the data internally so it is efficient for those large numbers of events. The service has just 2 functions: ScheduleEvent and GetNextEvent. ScheduleEvent adds an event at any point in time; GetNextEvent returns the next scheduled event based on time, variable time-to-live and event priority.
I'm currently using a std::map<int, std::vector<Item>*> with the map key being the timestamp and the vector containing the items, but I'm having trouble with the time it takes to seek through the vectors, especially after the workers stopped processing for a few seconds and many events went past their TTL.
Why don't you have a structure for jobs to be processed, and a structure for jobs to be scheduled in the future?
You have one worker thread which wakes up every second to move jobs from the scheduled queue to the processing queue.
Then you only have to worry about optimizing adding elements to the queue. If it's a sorted queue, removing items is easily done.
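The structure itself is language-agnostic; here's a minimal sketch of that two-queue idea, in Java just to keep it short (the record layout and all names are made up, and the real service would be the C++ code):

import java.util.Comparator;
import java.util.PriorityQueue;

// Hypothetical event: fire time and expiry (TTL) as epoch millis, plus a priority.
record Event(long fireAt, long expiresAt, int priority, String payload) {}

class TwoQueueScheduler {
    // Future events, ordered by fire time, so the earliest one is cheap to peek at.
    private final PriorityQueue<Event> scheduled =
            new PriorityQueue<>(Comparator.comparingLong(Event::fireAt));
    // Events that are due now, ordered by priority for the workers to pull from.
    private final PriorityQueue<Event> ready =
            new PriorityQueue<>(Comparator.comparingInt(Event::priority).reversed());

    public synchronized void scheduleEvent(Event e) {
        scheduled.add(e);
    }

    // Called by the mover thread (e.g. once per second): promote due events, drop expired ones.
    public synchronized void promoteDueEvents(long now) {
        while (!scheduled.isEmpty() && scheduled.peek().fireAt() <= now) {
            Event e = scheduled.poll();
            if (e.expiresAt() >= now) {
                ready.add(e);       // still within its TTL
            }                       // otherwise it is silently dropped
        }
    }

    public synchronized Event getNextEvent() {
        return ready.poll();        // null when nothing is due yet
    }
}

The nice property is that the scheduled structure is only ever touched at the head, so even a backlog after a restart just drains out of it in order instead of forcing a scan through everything.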
|
I wonder if it would be possible to use fcron for that...
|
am having some fun with a gcc plugin and access to the ast; can make explicit my earlier point about the implicit argument of class methods.
[class definition]
class Class{
private:
    int member;
public:
    void operator()(int _member){ member=_member; }
    int operator()()const{ return member; }
};
[gcc plugin output]
[jeh@gimli tl]$ g++ -I../ -std=c++14 -fplugin=../ast/ast.so -fdiagnostics-color=always -c tl.cpp
starting ast processing: tl.cpp

class ::Class at tl.cpp:1
    method name: operator()
        return type: void
        arg: Class* this
        arg: int _member
    method name: operator()
        return type: int
        arg: Class* this
glorious gcc! messy code snippet, this is covering new ground for me...
[plugin print method code]
...
// type is a class node
for( tree method=TYPE_METHODS(type);      // iterate through all methods of the class
     method!=0;
     method=TREE_CHAIN(method) ){
    if(!DECL_ARTIFICIAL(method)){         // skip implicitly declared functions (ctors etc...)
        cerr <<"\tmethod name: " <<IDENTIFIER_POINTER(DECL_NAME(method)) <<endl;

        tree return_type=TREE_TYPE(TREE_TYPE(method));
        cerr <<"\t\treturn type: " <<IDENTIFIER_POINTER(DECL_NAME(TYPE_NAME(return_type))) <<endl;

        for( // iterate through the argument declarations of the method
             tree arg_decl=DECL_ARGUMENTS(method);
             arg_decl!=NULL;
             arg_decl=TREE_CHAIN(arg_decl) ){

            tree arg_type=TREE_TYPE(arg_decl);
            bool is_ptr=false;
            if(TREE_CODE(arg_type)==POINTER_TYPE){
                arg_type=TREE_TYPE(arg_type);
                is_ptr=true;
            }
            cerr <<"\t\targ: " <<IDENTIFIER_POINTER(DECL_NAME(TYPE_NAME(arg_type))) <<(is_ptr?"* ":" ");
            cerr <<IDENTIFIER_POINTER(DECL_NAME(arg_decl)) <<endl;
        }
        cerr<<endl;
    }
}
...
|
On February 13 2015 00:12 Manit0u wrote: I wonder if it would be possible to use fcron for that...
Maybe, but then you would have to keep all your bookkeeping on disk, which means throwing performance away.
Is it a requirement that your scheduling be persistent across system reboots? What happens if the system goes down while running a job?
|