|
Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace Facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
|
I'm not converting it to DateTime because it's for a program that has to convert between two file formats, and I have to get the strings and then reformat them a certain way. All the values there are strings for simplicity (even values that would normally be ints). The program has to work in both directions, and the people using it have no standards (on one end at least): they write their dates using '.' or '-' or '/' as a separator, or without any separator at all, which is most annoying. But if it's a string I can easily fix that.
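For example, something as simple as stripping the separators first is what I have in mind (just a rough sketch, the helper name and the sample dates are made up):

[code]
using System;
using System.Text.RegularExpressions;

class DateCleanup
{
    // Rough sketch: strip whatever separator the user happened to use
    // ('.', '-', '/', spaces or nothing at all), leaving just the digits.
    static string StripSeparators(string raw)
    {
        return Regex.Replace(raw, @"[.\-/ ]", "");
    }

    static void Main()
    {
        Console.WriteLine(StripSeparators("15.04.2014")); // 15042014
        Console.WriteLine(StripSeparators("15-04-2014")); // 15042014
        Console.WriteLine(StripSeparators("15/04/2014")); // 15042014
        Console.WriteLine(StripSeparators("15042014"));   // 15042014
    }
}
[/code]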
Also, a stupid question perhaps, but related to the subject. Does C# have a method that would convert '000018920300' into '189203.00'?
The rather dirty way for me to do it was to take it as a string, strip the leading zeros, split it into two substrings and join them with '.' to get the final string.
|
|
|
On April 15 2014 03:19 Nesserev wrote: On April 15 2014 03:13 Manit0u wrote: Also, a stupid question perhaps, but related to the subject. Does C# have a method that would convert '000018920300' into '189203.00'?
The rather dirty way for me to do it was take it as a string, strip leading zeros, split into two substrings and join them with '.' to get the final string. Use regular expressions?
The dirty way seems fine to me. It's just as understandable as doing it through the C# Double class: http://msdn.microsoft.com/en-us/library/system.double.aspx
|
On April 14 2014 06:43 Manit0u wrote: On April 14 2014 05:23 spinesheath wrote: On April 14 2014 04:58 whiteLotus wrote: just a dumb question, is it theoretically possible to actually learn programming through google, various online courses and tutorials, like lets say java, php, jq, maybe some python or C, and actually get a job as a web programmer or smth at some shitty ass company, or would you still ideally need some university courses like advanced math and shit? i dont mean anything like software developer at Blizzard or anything like that lol. You certainly can become good enough, the question is whether you can get an invitation to an interview. The majority of my knowledge is from the internet and books, university was a painful waste of time. Proof techniques, combinatorics and matrices are pretty much the only things I learnt at university and consider relevant. For comp sci stuff, it was far more efficient to just read it up myself. But I don't know how good universities are where you live, and having a piece of paper saying "I provably spent some time on learning moderately relevant stuff" is rather effective for getting a job. It all depends. I have a friend who quit uni because he couldn't be bothered with this kind of stuff and now is a top notch programmer working for big companies (Symantec, Oracle and such). What's most important in this business is the experience, nothing beats that. You can learn all you want from books/courses/studies but at the end of the day it all comes down to 'Are you experienced enough to do it without really thinking about it?'. That's the feeling I get when I'm asking my friends who are established programmers for some help. They're able to solve problems that take me several hours in 5 minutes and do it way more efficiently and elegantly than I could ever hope to do now. Because they all have 10+ years of doing this stuff under their belts, they know the realities, all the design patterns and the best ways to do things inside out. That's the thing, you don't really get a lot of practical experience at uni; at least not if it's anything like my uni. If you want to learn it by yourself, you'd better do it while actually developing a real application, then throw it into the trash bin and code it again from scratch. Just better. And if you go for a degree, you should still do the same.
That's pretty much what I did while completing my bachelor's. I quickly got a job in the C# area. Aside from being a formal requirement in the job offer, the degree didn't matter in the least. I never even had to actually show it to anyone.
On April 15 2014 03:13 Manit0u wrote: Also, a stupid question perhaps, but related to the subject. Does C# have a method that would convert '000018920300' into '189203.00'?
The rather dirty way for me to do it was take it as a string, strip leading zeros, split into two substrings and join them with '.' to get the final string. "Dirty", but separate it into a clean method with a nice name and documentation. The next time you need to adjust it or find you need something similar, see how you could generalize it. There's no point in writing a regex - which is hard to grasp unless you use them a lot - for something that amounts to a short, easy-to-understand method. And even if you someday discover a built-in method in C# that does exactly that, there's still an argument to be made for having a method wrapped around it: your intent will be clearer and you will have fewer direct API calls throughout your code.
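For example, something along these lines (just a sketch; the class and method names and the exact handling are made up here, not a built-in C# API):

[code]
using System;

static class AmountFormat
{
    // Sketch only: interprets a zero-padded digit string with two implied
    // decimal places, e.g. "000018920300" -> "189203.00".
    public static string InsertDecimalPoint(string raw)
    {
        string digits = raw.TrimStart('0');

        // Keep at least one integer digit and two fraction digits,
        // so e.g. "000000000005" still becomes "0.05".
        if (digits.Length < 3)
        {
            digits = digits.PadLeft(3, '0');
        }

        string integerPart = digits.Substring(0, digits.Length - 2);
        string fractionPart = digits.Substring(digits.Length - 2);
        return integerPart + "." + fractionPart;
    }

    static void Main()
    {
        Console.WriteLine(InsertDecimalPoint("000018920300")); // 189203.00
    }
}
[/code]

You could just as well put decimal.Parse and a divide by 100 inside it instead; the point is that callers only ever see the one well-named method.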
|
On April 14 2014 22:04 misirlou wrote: I made a sieve with some optimizations, source at https://github.com/Misirlou/CparParallelSieve/blob/master/single.c There are 3 different functions, named 1, 2 and 3. The first function is terribly slow for anything greater than n=2^27, that is, computing all primes lower than 2^27. I can get the other ones to run up to 2^33 in under a minute. The 3rd function takes an extra argument that defines a block size. This is used to split the sieve into little pieces so the array gets stored in cache instead of RAM, since cache is much faster memory, as the results I collected show (on a Google spreadsheet). I also stored the primes not in an int array but in a char array. Every bit of a char represents a number, so one char can hold 8 "numbers". If a bit is still unmarked when the computation is finished, that number is a prime. I also don't have a position for even numbers; I skip them entirely because the only even prime is 2. All of this was done in an effort to use the least amount of memory. The code gets a little more tricky and it takes more operations to access an element, but these operations are meaningless compared to what I save in memory accesses. In your code, shouldn't marked and sieve be the same array?
Them cache/memory blocks. Such char array. Much advanced. Wow. :|
Hah, my code is pretty terrible in efficiency (like not combining the marked and sieve arrays). Probably the only thing I had going for it was skipping even numbers when initializing the sieve array.
It was something I wrote up a year ago or so as a supernoob and only recently revisited. Actually, one of the quick fixes I did was to stop storing stuff in a HashMap (to answer queries like 'given prime number p, how many prime numbers are smaller') and move it to a primitive array for binary search. It was like 'gg, I just knocked off 90% of the run time'.
|
On April 14 2014 22:04 misirlou wrote: [...] I also stored the primes not as an int array but in a char array. Every bit of a char represents a number, so one char can hold 8 "numbers". [...]
In C++ you can use things like bitsets to do this more easily
|
|
|
Doing math in functional languages is simpler.
|
On April 15 2014 09:54 tofucake wrote: Doing math in functional languages is simpler.
Might be a reason why most operating systems run on C and not higher-level stuff.
|
On April 15 2014 11:47 Manit0u wrote: On April 15 2014 09:54 tofucake wrote: Doing math in functional languages is simpler. Might be a reason why most operating systems run on C and not higher-level stuff. I thought higher-level stuff had performance issues and that was the reason. Doing math in C is horrible compared to doing it in R or Matlab.
|
On April 15 2014 13:33 obesechicken13 wrote: On April 15 2014 11:47 Manit0u wrote: On April 15 2014 09:54 tofucake wrote: Doing math in functional languages is simpler. Might be a reason why most operating systems run on C and not higher-level stuff. I thought higher level stuff had performance issues and that was the reason. Doing math in C is horrible compared to in R or matlab.
Well, that's why Java still keeps primitive data types despite Integer/Boolean/etc being redundant.
|
On April 16 2014 02:47 darkness wrote: On April 15 2014 13:33 obesechicken13 wrote: On April 15 2014 11:47 Manit0u wrote: On April 15 2014 09:54 tofucake wrote: Doing math in functional languages is simpler. Might be a reason why most operating systems run on C and not higher-level stuff. I thought higher level stuff had performance issues and that was the reason. Doing math in C is horrible compared to in R or matlab. Well, that's why Java still keeps primitive data types despite Integer/Boolean/etc being redundant.
Java keeps primitive types because it's too much of a hassle to remove them from the language, even though the big Java guys want to remove them.
|
On April 15 2014 13:33 obesechicken13 wrote: On April 15 2014 11:47 Manit0u wrote: On April 15 2014 09:54 tofucake wrote: Doing math in functional languages is simpler. Might be a reason why most operating systems run on C and not higher-level stuff. I thought higher level stuff had performance issues and that was the reason. Doing math in C is horrible compared to in R or matlab.
Never understood why people would go for matlab/octave. If you're looking for speed and serious math libraries (eigenvectors, differential equations, ...) then Fortran isn't really that much uglier and at least it has some consistency to it. If you're looking for ease of use and "fun" libraries (plotting and all things data presentation) then use some really dev-friendly, modern language like Python with numpy or whatever.
|
On April 16 2014 03:16 windzor wrote: On April 16 2014 02:47 darkness wrote: On April 15 2014 13:33 obesechicken13 wrote: On April 15 2014 11:47 Manit0u wrote: On April 15 2014 09:54 tofucake wrote: Doing math in functional languages is simpler. Might be a reason why most operating systems run on C and not higher-level stuff. I thought higher level stuff had performance issues and that was the reason. Doing math in C is horrible compared to in R or matlab. Well, that's why Java still keeps primitive data types despite Integer/Boolean/etc being redundant. Java keeps primitive types because its too much of a hassle to remove it from the language, even though the big Java guys wants to remove them.
I think this is a better way to put it: http://stackoverflow.com/a/5199425/1091781
|
On April 15 2014 13:33 obesechicken13 wrote: On April 15 2014 11:47 Manit0u wrote: On April 15 2014 09:54 tofucake wrote: Doing math in functional languages is simpler. Might be a reason why most operating systems run on C and not higher-level stuff. I thought higher level stuff had performance issues and that was the reason. A few years ago that certainly was the case. Haskell, for instance, used to be dreadfully slow. However, it is now approaching being nearly as fast as C, and a good Haskell programmer, just by virtue of the language, can often get something written significantly faster than a person doing the same thing in C. Back in the day, computers used to be rather expensive, so making code run as fast as possible was ideal. However, we now have fairly cheap computers that are quite powerful, so the larger cost to a company will be the programmer, not the equipment used by that person; if they can make that person more productive, it gives the company a better return for its money. The downside is that functional languages are really tricky to learn, so you typically have to get really smart people, who might cost more money. But they will be able to code things much faster.
Of course when it comes to OSes and things like that, speed is still king, so the fastest languages will still be dominant. Being nearly as fast as C isn't the same as being as fast as C.
Which leads to this:
On April 15 2014 09:50 Nesserev wrote: Can anyone tell me when it's better/more preferable to use a functional programming language like Haskell over an OO/Imperative programming language?
Some things are really easy/fast to code in functional programming languages compared to OO/Imperative programming languages, but I can't put a finger on what it is, or when this is the case. I had to program Battleships in Haskell, which was a total b!tch to be honest... Code bloat and programmer productivity are what was most often mentioned in my class that focused on functional languages. As you have probably experienced, code written in functional languages is significantly shorter than code in many imperative languages (for me, probably 2/3 to 3/4 of each Haskell assignment was tests, with maybe 50-100 lines of actual program code). This makes it significantly easier to maintain a codebase over time, since all of the different pieces of code are upwards of 50-90% shorter than their C/Java counterparts. Also, due to the nature of functional languages, it is much less likely for weird bugs to be introduced, simply because the code probably wouldn't compile, thanks to how functional languages are structured. Of course, I'm talking more about things like merge mess-ups or copy/paste errors. Stuff like the goto fail error we saw in the open source SSL implementation Apple was using would not likely have happened in Haskell because GHC would have complained. Probably the same for the 8-year-old GnuTLS issues (from what I have read, those were pretty rough: C library functions that should not have been in such critical code, and encryption code that didn't do what it was supposed to when tested). All of this, combined with the code being much simpler to read (after you know the language well, of course), makes you much more productive with functional languages. Of course, the downside is that you have to be insanely smart to use functional languages effectively. But once you do, you can do some pretty crazy things with them. We built another programming language in Haskell in about 40 lines in total.
Though, with all of the huge security issues we've been seeing with stuff like Heartbleed and the above goto fail and GnuTLS, I wouldn't be shocked if certified programming becomes much more prevalent fairly soon. I've done a bit of it and it is pretty cool. The idea behind certified programming is basically that you can prove something does exactly what you say it does. This is done mathematically, via proofs. One of the more common certified programming languages, Coq (yup, that's the name), is very Haskell-like. It is a functional language in which types are handled in a very similar fashion. A lot of the code I have seen is basically Haskell code with a slightly more verbose syntax.
|
On April 16 2014 18:09 Ben... wrote: [...] Though, with all of the huge security issues we've been seeing with stuff like Heartbleed and the above goto fail and GnuTLS, I wouldn't be shocked if certified programming becomes much more prevalent fairly soon. I've done a bit of it and it is pretty cool. The idea behind certified programming is basically that you can prove something does exactly what you say it does. This is done so mathematically via proofs. One of the more common certified programming languages, Coq (yup, that's the name), is very Haskell-like. It is a functional language in which types are handled in a very similar fashion. A lot of the code I have seen is basically Haskell code with a slightly more verbose syntax.
Well, Formal Methods comes to mind when you say 'certified programming'. In particular, Promela.
|
Coq has also been used to create a formal proof of the Feit-Thompson theorem (the written proof is the equivalent of two books), and the formalization found errors in the written proof. It has also been used to build a certified C compiler (CompCert).
|
I have an OOP question. For reference, I'm using C#.net
I'm trying to discipline myself by sticking to the SOLID principles as much as I can. For example, I know that when one particular object has a dependency on another, it's better to extract that dependency out and provide an interface to an implementation, so the implementation can change without having to change the object.
But what happens when I need to combine multiple objects to perform a higher-level task? Say that I have Object A which does Task A, and Object B which does Task B, but I need to do Task C, which depends on the implementations of Task A and Task B. Should Task C be done by a completely new object, like Object C? If so, should Object C have its own interfaces that Object A and Object B implement? Or is there some other way? Hope that makes sense. It's hard to describe my exact problem without a lot of details.
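Roughly the shape I'm picturing, with completely made-up names just to illustrate (this isn't my real code):

[code]
// All names are made up purely to illustrate the question.
public interface ITaskA
{
    string DoTaskA();
}

public interface ITaskB
{
    int DoTaskB();
}

public class ObjectA : ITaskA
{
    public string DoTaskA()
    {
        return "result of A";
    }
}

public class ObjectB : ITaskB
{
    public int DoTaskB()
    {
        return 42;
    }
}

// Candidate "Object C": it depends only on the interfaces and gets the
// implementations handed in through its constructor, so it never has to
// know about ObjectA or ObjectB directly.
public class ObjectC
{
    private readonly ITaskA _taskA;
    private readonly ITaskB _taskB;

    public ObjectC(ITaskA taskA, ITaskB taskB)
    {
        _taskA = taskA;
        _taskB = taskB;
    }

    public string DoTaskC()
    {
        return string.Format("{0} / {1}", _taskA.DoTaskA(), _taskB.DoTaskB());
    }
}
[/code]

So Task C would be done by wiring it up as new ObjectC(new ObjectA(), new ObjectB()) somewhere higher up. Is that the right direction, or is there some better pattern for this?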
|
On April 16 2014 18:09 Ben... wrote: [...]
Good points regarding Haskell and perhaps other functional languages starting to take a larger role in bigger projects.
C is used for OS development not just because of speed, but also because it's easy to cross-compile for a different target architecture, it doesn't require much in the way of run-time initialization, and it executes in a predictable, easy-to-debug fashion. It also has tons of existing support in terms of drivers and libraries for Linux, Solaris, etc.
I look forward to the next generation of C-like-but-strongly-typed languages such as Rust and Go for new operating systems such as Firefox OS. Note that, because the compiler technology for these languages is ~20 years behind C's, it will be some time before they reach performance parity, even if they are better languages for writing fast code.
As far as certified programming languages go, there are different levels depending on the security or safety requirements of the system. C is actually one of the better examples of a language that can be certified. For example, if you're going to write code that goes to space, it has probably had some Formal Methods applied to it. For automotive code, there is a coding standard called MISRA. Locomotive, aerospace, medical, and security-critical domains all have their own ways of certifying code.
You can use a C checker such as Lint to apply the rules from a certification to your code base and catch things like "goto fail" being in there twice. In fact, it's kind of baffling that Apple didn't have warnings turned on, which would almost certainly have flagged the code smell that half of the function had no path through which it could ever be executed.
Note that certification has much more to do with how you test code than with the code itself, with the notable exception of MISRA.
I guess my point is that it's not the language's fault that Heartbleed, goto fail;, etc. happened... One of C's biggest strengths is also its weakness: it gives the programmer extreme liberties, so they can get themselves into trouble easily.
|
I need some help with my hobby project. I'm trying to make a pentomino puzzle AI based on genetic programming in python.
![[image loading]](http://www.cimt.plymouth.ac.uk/resources/puzzles/pentoes/pent55.gif)
I'll get to my question after a somewhat brief introduction. The workflow is:
0. Initialize an empty game grid, for example a 5x5 array with all elements equal to -1.
1. The AI gets a random piece.
2. The AI then needs to test every possible way to place the piece, and calculate a score for each placement (the score will be calculated from, for example, the number of holes and the number of edges touching the piece).
3. Place the piece, then return to 1 until there are no empty squares left.
Rules: no collisions, so it's not possible to place pieces on top of each other!
I've stored the pieces in 5x5 arrays using numpy. It was easy because then I only needed to enter each piece once. The pentomino 'P' looks something like this:
[[0,0,0,0,0],
 [0,0,1,1,0],
 [0,0,1,1,0],
 [0,0,1,0,0],
 [0,0,0,0,0]]
Problem: Solving the possible ways to place a piece: I'm a bit stumped over getting this done as quickly as possible. My current idea is to expand the 5x5 game grid to 7x7 by padding it with zeros. Then iterate over all elements and if they are equal to -1 I'll smack my piece over it and calculate a score. But if the game grid gets larger, like 10x10, that means going over 100 elements 8 times for piece 'P'. Anyone have an idea of an alternative way of solving this?
|