|
Thread Rules

1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace Facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
|
@icystorage: Ok, let's solve this together.
On July 11 2011 16:23 waxypants wrote: Your main problem is that the "(N rem 2)" should be the digit added to the left of "Acc". You are adding it to the right. Also, once you figure out how to do that (it's a bit trickier adding the new digit to the left than adding it to the right), I believe your base cases will also need to be fixed up.

Hmm, he seems to have a good point. If you look at the algorithm you described and your implementation, you will find that the current binary digit is indeed the most significant one relative to the digits already in acc (you should write the remainder of n/2 at the front of what you have so far). But how can we add it "to the left" of acc? 10 * (n rem 2) + acc won't work, because acc keeps growing, and we'd constantly be adding to the second position.
Therefore, you have to multiply (n rem 2) by one in the first step, 10 in the second, 100 in the third, and so on.
Now let's look at the Haskell solution.
[code]
d2b :: Int -> Int
d2b = d2bh 0 1

d2bh acc _ 0 = acc
d2bh acc k n = d2bh ((n `mod` 2) * k + acc) (k * 10) (n `div` 2)
[/code]
Hmm, it looks like this guy added a third parameter, k, which gets 10 times bigger with each call. Now we only need to do (n rem 2) * k + acc in d2bh, and eventually we'll arrive at the correct answer.
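If the recursion is hard to follow, here is the same accumulator trick written out as a plain loop - a C++ sketch purely for illustration (the function name is made up), not the Erlang you actually need to hand in:
[code]
#include <iostream>

// Illustration only: the same accumulator idea as the Haskell d2b, written
// iteratively. acc collects the binary digits as a decimal number, and k is
// the place value (1, 10, 100, ...) for the next digit.
int d2b(int n) {
    int acc = 0;
    int k = 1;
    while (n > 0) {
        acc += (n % 2) * k;  // current binary digit goes into position k
        k *= 10;             // the next digit sits one position further left
        n /= 2;
    }
    return acc;
}

int main() {
    std::cout << d2b(6) << "\n";  // prints 110, since 6 is 110 in binary
}
[/code]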
Sorry, I have only played with Erlang for a few minutes before, and I don't have a compiler (interpreter? I don't even know, lol) at hand, but I think you should be able to solve it now.
|
On July 11 2011 20:45 delHospital wrote: @icystorage: Ok, let's solve this together. ...
Thank you for helping. I have been thinking about your advice, but I think it is not possible to have "k" in my parameters (I don't know why, but the compiler doesn't seem to accept it o.o), or I didn't get what you mean lol
[code]
-module(d2b).
-export([d2b/1]).

d2b(N) -> d2bh(N, 1, 1).

d2bh(0, _Acc, _k) -> 0;
d2bh(1, Acc, _k) -> Acc;
d2bh(N, Acc, k) -> d2bh(N div 2, ((N rem 2) * k) + Acc, k * 10).
[/code]
As you can see, the 2nd parameter now has both k and Acc in it. I think the problem lies there, or did I do something wrong? Should I multiply by k there?
|
Uh, try with K (capital letter) - in Erlang, variable names have to start with an uppercase letter (or an underscore); a lowercase k is just an atom. And I think you should initialize Acc with 0.
|
On July 11 2011 12:33 EvanED wrote:
On July 11 2011 12:27 tofucake wrote: I'm opposed to Python on the simple fact that I do not want tabbing to be part of the lexicon.
I've come to dislike that particular choice (at least in its specifics), but IMO disqualifying a language because of one thing you view as a poor design choice seems really silly. I could list 10 objections to any language I've ever used that I would view as much worse offenses. Concentrating on that particular issue ignores everything they did right, which, with the tool and library support that's out there, makes Python among very few peers if you want a language for lightweight-to-moderate programming. It's not alone in that area, but there are not many others I would put on par.

I know enough languages to do what I need to do, and I've never had a good enough reason to pick up Python. The fact that tabbing is actually part of the code is one of the reasons I haven't learned it "just because".
|
I read that. It's not dynamic. I still need help and that was a very rude answer.
|
On July 11 2011 21:22 tofucake wrote: ... I know enough languages to do what I need to do, and I've never had a good enough reason to pick up Python. The fact that tabbing is actually part of the code is one of the reasons I haven't learned it "just because".

Once you get used to it, you'll be like "ughh, I can't look at these disgusting brackets". Btw, "tabbing" is also present in Haskell, which you were interested in judging by the looks of the OP (and it's fucked up in GHC < 7).
Edit: I think I misunderstood, nvm
@obesechicken13: -___-
|
omg, noob mistake, i got it now and thank you very much for being patient with me. it is much appreciated! i learned so much.
|
On July 11 2011 21:22 obesechicken13 wrote: I read that. It's not dynamic. I still need help and that was a very rude answer.

Well, have you tried just accessing the array?
|
On July 11 2011 19:34 uzyszkodnik wrote: Not really, the syntax of C++ and C# is similar, and saying that C# does the magic and C++ doesn't is false - it all depends on what libraries you use in C++ (yep, you can even have a garbage collector in C++). The main difference is speed of execution, nothing else.

Only barely. The thing that makes GC good in "real" GC'd languages like C# and Java is the ability to move objects around; you can't really do that in C++. (It may technically be possible, but no one has done it.)
Also, I would most certainly call memory safety a huge difference.
On July 11 2011 20:35 heishe wrote: Also, in real-world applications, even just decent C/C++ code will produce much, much faster applications.

That just isn't true. (I'm one of those C#/Java lobbyists. :-)) We're talking a factor of 2, maybe 3 total. For most real-world applications, that just won't matter.
On July 11 2011 20:35 heishe wrote: Plus, C/C++ compilers (and linkers, for link-time optimization) are just miles ahead of C#/Java compilers/interpreters in terms of code optimization (for example, the compilation step in javac doesn't optimize the code at all), so it's much easier to make performance-costly mistakes in C#/Java than in C++.

Somewhat true, but on the flip side, when you're JITing, you have lots of opportunities to do things that a C++ compiler flat out cannot. Have a tight loop with a virtual method call? If you have a JIT, you can inline it, because you can look at the dynamic type and pick the right function. A static compiler can't.
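To make that concrete, here's roughly the kind of loop I mean - a C++ sketch with made-up types, just to show where the indirect call sits:
[code]
#include <iostream>
#include <memory>
#include <vector>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265 * r * r; }
};

// The element's dynamic type isn't known at compile time, so a static
// compiler generally has to emit an indirect (virtual) call inside the
// loop. A JIT that only ever sees Circles at runtime can speculate,
// inline Circle::area, and fall back if the assumption breaks.
double total_area(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double sum = 0.0;
    for (const auto& s : shapes) {
        sum += s->area();  // virtual call in a tight loop
    }
    return sum;
}

int main() {
    std::vector<std::unique_ptr<Shape>> v;
    v.push_back(std::make_unique<Circle>(1.0));
    v.push_back(std::make_unique<Circle>(2.0));
    std::cout << total_area(v) << "\n";
}
[/code]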
On July 11 2011 20:35 heishe wrote: So in the gaming market, where AAA titles have to be optimized to the last bit in order to show awesome graphics on relatively weak hardware (360/PS3, etc.), people are of course going to use C++ (indie devs still use C# with XNA on the 360, though, since the C++ SDK for the 360 is only available to AAA developers for lots of money).

Now this I actually agree with. Games have the right balance of needing speed and being relatively unimportant, so C++ makes sense there.
On July 11 2011 21:33 delHospital wrote: Once you get used to it, you'll be like "ughh, I can't look at these disgusting brackets".

Don't be so sure. Like I said, I've come to rather dislike the way Python does things -- and in particular, it's the missing brackets that cause me trouble.
(If you want syntactic whitespace I'm not necessarily opposed to the idea, but I think it should merely take what's in, say, C and enforce that it's actually indented in agreement with the braces.)
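For example, this is the sort of thing such a rule would catch - a contrived sketch where the indentation claims something the code doesn't do:
[code]
#include <cstdio>

int main() {
    int x = 0;
    if (x > 0)
        std::printf("x is positive\n");
        std::printf("this always runs\n");  // indented as if guarded, but it isn't
    return 0;
}
[/code]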
|
On July 11 2011 21:59 uzyszkodnik wrote: Well, have you tried just accessing the array?

Can you explain how exactly? I've thought about using this method; however, there is a problem.
When you pass information with POST using HTML forms to PHP, you do it with "name", right? Well, the different information here is being stored in "id" rather than "name", so I'm rather confused how to get the "id" array. I'm very new to PHP.
|
On July 11 2011 22:11 EvanED wrote: ... That just isn't true. (I'm one of those C#/Java lobbyists. :-)) We're talking a factor of 2, maybe 3 total. For most real-world applications, that just won't matter.

I always like this part of the language discussion where everyone agrees on the facts but weighs them differently :D.
For most "real world applications" you have to weight development cost vs. speed. So for a lot of Applications it is cheaper to just throw twice the hardware at the problem and live with "slow" Java/C# code instead of more than doubling the development cost/time by using C++. This approach obviously doesn't work for games since halving the cost of the game wouldn't be enough to compensate the cost the individual gamer has by requiring the more expensive hardware (also the programming isn't what makes a game expensive... art/assets and marketing are what's costly). Also the lower maintenance cost of Java/C# really pays off for projects that are used over years since development and maintenance always costs the same while performance problems tend to sort themselves out over time thanks to increasing computing power and falling hardware prices.
On July 11 2011 22:11 EvanED wrote: Somewhat true, but on the flip side, when you're JITing, you have lots of opportunities to do things that a C++ compiler flat out cannot. Have a tight loop with a virtual method call? If you have a JIT, you can inline it, because you can look at the dynamic type and pick the right function. A static compiler can't.
And then, a lot of the polymorphism that is required in managed languages can be replaced by "static polymorphism" via templates in C++, which allows even more aggressive inlining (rough sketch below). This kind of listing of individual advantages/disadvantages, especially when it comes to optimization, doesn't really take you anywhere, though. It kind of reminds me of the time when some Pascal fanboy wanted to convince me that C sucks because there is no "closed iteration" (he just didn't like the for-loop syntax in C...).
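(In case the term is unfamiliar: a minimal sketch of what I mean by "static polymorphism" - the concrete type is a template parameter, so every call site is known at compile time. Names are made up for illustration.)
[code]
#include <iostream>

struct Circle {
    double r;
    double area() const { return 3.14159265 * r * r; }
};

struct Square {
    double s;
    double area() const { return s * s; }
};

// "Static polymorphism": the callee's type is a template parameter, so the
// compiler knows the concrete type at every call site and can inline it
// without any vtable lookup.
template <typename Shape>
double total_area(const Shape* shapes, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        sum += shapes[i].area();  // resolved at compile time
    }
    return sum;
}

int main() {
    Circle cs[2] = {{1.0}, {2.0}};
    Square sq[2] = {{1.0}, {2.0}};
    std::cout << total_area(cs, 2) << " " << total_area(sq, 2) << "\n";
}
[/code]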
Discussing programming languages without the context of an actual application just doesn't make sense. It's like discussing whether airplanes or cars are the better means of transportation without specifying where you want to go. You obviously can't drive overseas, while flying to the supermarket would be equally stupid.
|
On July 11 2011 22:11 obesechicken13 wrote: ... When you pass information with POST using HTML forms to PHP, you do it with "name", right? Well, the different information here is being stored in "id" rather than "name", so I'm rather confused how to get the "id" array. I'm very new to PHP.

It sounds like you need Javascript (<3 jQuery) to add in new fields as needed. The files can be accessed easily with a foreach over $_FILES['userfile']. You can then use file_get_contents to get the data directly, or you can copy the file to another location if you just want to save it.
|
On July 11 2011 22:59 japro wrote: ... I always like this part of the language discussion where everyone agrees on the facts but weighs them differently :D.

Heh. See, I'd save "much, much" faster for comparisons against, say, Python. If 2x is "much, much faster", what's 30x? "Much much much much much much much much much much faster"? :D (BTW, that's assuming each "much" multiplies the time by 1.4; 1.4^10 = 28.9.)
On July 11 2011 22:59 japro wrote: And then, a lot of the polymorphism that is required in managed languages can be replaced by "static polymorphism" via templates in C++, which allows even more aggressive inlining.

At the cost of blowing your i-cache. :-) (One of the projects in my research group has an 86 MB executable -- and it's largely templates. (That's unoptimized but stripped.))
On July 11 2011 22:59 japro wrote: This kind of listing of individual advantages/disadvantages, especially when it comes to optimization, doesn't really take you anywhere.

My goal is not so much to argue that C#/Java afford more optimization opportunities as to show that the balance between what can be done in such a language and what can be done in C++ is a lot richer than is sometimes put forth.
(It's like GC. Unlike what some people would have you believe, GC is not a complete loss. Allocation in a GC'd language is so fast as to be nearly a no-op (probably <= 5 instructions, including only 2 memory accesses, if your threads have separate heaps), and the copying actually works to improve cache locality and decrease fragmentation over time. These benefits can sometimes be achieved in C if you use memory pools and whatnot, but not always, and it takes a lot more effort. Is GC still a net loss? Probably, but the picture is not nearly as clear as some people make it.)
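(If it's not obvious why allocation under a compacting GC can be that cheap, here's a rough bump-pointer sketch in C++ - purely illustrative, with alignment and thread safety ignored:)
[code]
#include <cstddef>
#include <cstdint>
#include <new>
#include <vector>

// Rough sketch of why allocation under a compacting GC can be so cheap:
// with a bump-pointer arena, "allocate" is a capacity check plus a pointer
// increment. A moving collector effectively gets an arena like this back
// after every compaction; in C/C++ you build the pool yourself and free
// everything at once.
class BumpArena {
    std::vector<std::uint8_t> buf_;
    std::size_t offset_ = 0;
public:
    explicit BumpArena(std::size_t size) : buf_(size) {}

    void* allocate(std::size_t n) {
        if (offset_ + n > buf_.size()) return nullptr;  // out of space
        void* p = buf_.data() + offset_;
        offset_ += n;  // the entire "allocation": bump an offset
        return p;
    }

    void reset() { offset_ = 0; }  // "frees" everything at once
};

int main() {
    BumpArena arena(1024);
    void* slot = arena.allocate(sizeof(int));
    if (slot) {
        int* x = new (slot) int(42);  // placement-new into the arena
        (void)x;
    }
    arena.reset();
}
[/code]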
Edit: There are some other interesting tradeoffs too. For instance, take Singularity. It's an OS written in C#. (Well, C# with some modifications. But we'll just say C#.)
If you were using Singularity as your OS, you could run other C# programs in the same memory space and protection domain (as far as the CPU is concerned) as your OS! What does this mean? It means context switches become way, way cheaper -- no TLB flush. It means system calls become cheaper still -- no protection domain change. The protection is entirely software-enforced -- and because C# is memory-safe (presumably `unsafe' is one of the things they don't allow), this gets you everything that the CPU's page-based protection gets you now, since code can't bypass the reference monitor checks.
Assuming the software works. But you could even imagine that if Singularity became a commercial OS, in Control Panel somewhere there could be a little slider that would move between "safe" and "fast but susceptible to bugs in the VM runtime". Under "safe" it would run everything under different hardware domains -- use separate memory spaces even for drivers, do TLB flushes when changing processes, run everything but the kernel proper in ring 3, etc. Move it a bit towards "fast" and drivers move into the kernel's address space. Move it all the way over and everything runs in the kernel's address space.
In short, I'm a big proponent of managed languages for nearly everything -- including things like the OS that would get lots of people in a tizzy. Games and embedded programming are the main exceptions.
|
On July 11 2011 23:27 EvanED wrote: ... Heh. See, I'd save "much, much" faster for comparisons against, say, Python.

I'm not the one saying C++ is "much much" faster. I think comparing the "speed of programming languages" is questionable anyway. Speed just isn't an intrinsic property of a programming language. There is no reason why it shouldn't be possible to build a Java/C# compiler that optimizes as aggressively as a C++ compiler, and if you want to, you can run C++ in an interpreter that is slower than QBasic...

I personally use Java and C++ depending on what I'm doing. I don't even bother doing GUI stuff in C++, apart from games, for which I use OpenGL/SFML. On the other hand, using Java/C# for the HPC stuff I'm doing at "work" is completely out of the question, since the framework I work on needs tight memory management, and doubling the performance is quite relevant when you are targeting simulations that run on thousands of cores...
|
Discussing programming languages without the context of an actual application just doesn't make sense. It's like discussing whether airplanes or cars are the better means of transportation without specifying where you want to go. You obviously can't drive overseas, while flying to the supermarket would be equally stupid.
Yes, I have to add though that I was talking in the context of games, since I believed delHospital was the guy with a learning disability who wanted to get into the gaming industry - which I now see is wrong (I was thinking of RedJustice). Sorry for that.
For general applications, like your average business-suit GUI application, I think the large majority of developers actually use C# or Java since they have superior frameworks to C++ (in terms of usability) in that regard.
|
Anyone here ever try using UDK for mobile development? I have a few questions about it since I'd like to start using it.
|
On July 11 2011 23:58 heishe wrote:
Discussing programming languages without the context of an actual application just doesn't make sense. ...
Yes, I have to add though that I was talking in the context of games, since I believed delHospital was the guy with a learning disability who wanted to get into the gaming industry - which I now see is wrong (I was thinking of RedJustice). Sorry for that. For general applications, like your average business-suit GUI application, I think the large majority of developers actually use C# or Java, since they have superior frameworks to C++ (in terms of usability) in that regard.
For games, you are pretty much right, IF it is a 3D game which will be running very demanding content.
But the guy you quoted - and especially what you quoted - made one of the best statements in the whole X vs. Y debate. Many people just completely disregard the circumstances in which a language is to be used, which is very bad. Assembler is faster than both C and Java, but managing it is a horrible task. Maybe in a few years we'll be arguing that Java was way faster than <new language>... :D
|
On July 11 2011 23:48 japro wrote: There is no reason why it shouldn't be possible to build a Java/C# compiler that optimizes as aggressively as a C++ compiler...

Well, that's not quite true, unfortunately.
C++ compilers have the luxury of (1) being able to take as much time for optimization as they want (to a loose approximation), (2) being less constrained in how their output is allowed to behave, and (3) being less constrained by the requirements of the language semantics.
For instance, take the following code:
[code]
void foo() {
    int x = 5;
    print(x);        // print() stands in for any use of the variable
    double y = 4.0;
    print(y);
}
[/code]
(and ignore for the moment the fact that the compiler can do constant propagation).
A C++ compiler can very easily reuse the same memory space for x and y. I'm not 100% positive, but I think this optimization is unavailable to the Java or C# AOT compilers (the ones that go to bytecode), because the variables have different types, and if the compiler did try to reuse memory the resulting class file wouldn't pass the bytecode verifier. Thus, as far as the AOT compiler has any say about it, the Java version takes more memory for that function. That's where (2) comes in -- a C program that never invokes undefined behavior won't be able to tell whether the compiler has shared memory between x and y, but a Java program will (because the answer will always be "no"). (If you drop the bytecode verifier, IMO you lose most of the point of using a managed language in the first place.)
Well, what about the JIT? The JIT certainly could make that change. But that's where (1) comes into play. The JIT can't exactly perform super heavyweight dataflow analyses -- because whatever time it spends is coming out of the runtime of the program. In this particular case it probably would be able to share those slots, but there are plenty of optimizations that are too slow to run at runtime.
Finally, what about (3)? It turns out that all the "undefined behavior" in C++ can lead to optimization opportunities not available to Java/C#. The most obvious one of these is array bounds checks. Dataflow analysis can certainly establish that some bounds checks are not needed. (Though you have to keep in mind constraints (1) and (2) -- the AOT compiler can't omit them (in fact they aren't even explicit in the bytecode, so there's not even a way to omit them even if javac wanted to), and the JIT has to do its analysis at runtime.) But there are still lots of times when it can't, and those checks are going to make your Java/C# program slower.
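(The closest C++ analogue of that trade-off, just as an illustration of what the check costs you get to control yourself: operator[] is unchecked, at() is checked on every access. This is a sketch, not a claim about how any particular JIT handles its checks.)
[code]
#include <iostream>
#include <vector>

// operator[] does no bounds check (out-of-range access is undefined
// behavior), while at() checks every access and throws std::out_of_range.
// Java-style arrays behave like at(): the check is always there unless the
// compiler/JIT can prove it redundant.
int sum_unchecked(const std::vector<int>& v) {
    int s = 0;
    for (std::size_t i = 0; i < v.size(); ++i) s += v[i];     // no check
    return s;
}

int sum_checked(const std::vector<int>& v) {
    int s = 0;
    for (std::size_t i = 0; i < v.size(); ++i) s += v.at(i);  // checked
    return s;
}

int main() {
    std::vector<int> v = {1, 2, 3};
    std::cout << sum_unchecked(v) << " " << sum_checked(v) << "\n";
}
[/code]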
There are more subtle ones as well. For instance, Java's semantics guarantee that integers are represented as 32-bit two's complement, and that, for example, INT_MAX + 1 will wrap around to INT_MIN. So, for instance, the check if (x+1 > x) is functionally equivalent to if (x != INT_MAX). C++ makes no such guarantee: signed overflow is undefined, so compilers can and do optimize that condition to true (and then, of course, optimize away the if statement entirely).
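Spelled out as code - a hedged sketch, since what actually happens depends on the compiler and optimization flags, and the function name is made up:
[code]
#include <climits>
#include <iostream>

// In C++ signed overflow is undefined behavior, so the optimizer may assume
// x + 1 > x always holds and fold this function to "return true". Under Java
// semantics, x + 1 wraps to INT_MIN, so the same expression is a genuine test
// for x != INT_MAX and cannot be folded away.
bool incrementable(int x) {   // hypothetical name, for illustration
    return x + 1 > x;
}

int main() {
    // With optimizations on, many compilers print 1 here even though
    // INT_MAX + 1 "overflows"; the addition itself is UB in C++.
    std::cout << incrementable(INT_MAX) << "\n";
}
[/code]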
Chris Lattner of the LLVM project has posted a wonderful series of blog posts about this topic and the benefits and drawbacks of these types of optimizations. This post by John Regehr is also quite good.
|
On July 12 2011 00:31 EvanED wrote: Well, that's not quite true, unfortunately. ...

What I meant was actually compiling Java/C# directly to an executable, not to bytecode.
|
On July 12 2011 00:31 EvanED wrote: Well, that's not quite true, unfortunately. ...
You really can't compare anything with this code example ;P These are local variables, which all compilers should keep on the stack and not put on the heap, so there is no real point in memory reuse (actually, Java being a stack machine would probably even do better in this example, because it doesn't need to duplicate the 5 when calling print, so when print returns, the 5 is gone from the stack - C(++) would probably retain the stack space until the method finishes).

And just to clear this up too: JITs do escape analysis at runtime, while inlining methods. You can dig into a lot of FUN if you like - just have a look at what the JikesRVM does. It's actually pretty ridiculous.

And regarding your overflow example: this may lead to slightly better performance, but I bet there is a TON of code which explicitly relies on overflow and thus fails when run through the wrong compiler. Design is always more important than speed (unless you're writing a scene demo), which, to a certain degree, conflicts with such undefined things.
|