|
Thread Rules

1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
|
On February 15 2017 00:43 Silvanel wrote:
On February 15 2017 00:33 Acrofales wrote:
On February 15 2017 00:08 Silvanel wrote:
That is really important. I had a great prof teaching us logic and he used to define every symbol he would be using at the beginning of each chapter. Really good practice. Also eye opening. He sometimes used different symbols than the commonly used ones to force students to think about the underlying meaning of the symbol rather than the symbol itself. Great guy.

Not sure how useful that is. Yes, you have to understand the underlying meaning, but we use symbols as shorthand for that meaning, and the reason symbols are useful at all is because of that agreed upon meaning. I could start my class redefining + to mean - and vice versa.

Well, I will repeat "logic" class. BTW: you do know that "+" doesn't mean the same thing in every theory, right? PS. Also, I never said my prof would fail someone for that. Our exams were both written and oral. You could write everything perfectly and fail if you couldn't show that you actually understood what you had written. Also, you could pass with some written mistakes if your oral part went well and you could demonstrate understanding of the subject.

I'm not following you. Logic class or otherwise. I'm going to assume that your professor didn't use non-standard notation. He just used different notation than the book. For instance \supset instead of \rightarrow for material implication. Both are quite normal.
Regarding +, I cannot think of a single context in which its meaning is not clear other than javascript (insert wat video here). In math, the algebraic operator of addition, and in some programming languages, string concatenation (and addition).
|
On February 14 2017 22:38 mantequilla wrote:
On February 14 2017 09:17 travis wrote: wtf
I have no idea what this question is asking for
"take the truth table for a 2 bit adder and make a circuit with 2 inputs and 3 outputs. then... [answer some questions about it]"
how the hell do you make a 2 bit adder with 2 inputs. it requires 4 inputs. it adds 2 bit numbers together.

Your prof may have meant not two 1-bit-wide inputs, but 2 inputs that are each 2 bits wide. Then again, it could have been a two-bit adder with 3 inputs, which adds three 2-bit numbers. Dunno what the 3 outputs would be. Maybe he has additional outputs besides the result of the summation.
On February 14 2017 22:38 Hanh wrote: Well, it could be 2 inputs (of 2 bits) and 3 outputs (of 1 bit)...
I don't understand what you are saying. What is an "input of 2 bits"? A bit is either true or false. An input is also either true or false. This would mean that an input 10 is false, as is 01. Which would mean that adding an input of 01 and 10 would result in an output of 0. The only way you would ever get a non-zero output would be by adding 11 and 11. The whole thing makes no sense.
|
On February 15 2017 03:00 Manit0u wrote:
On February 15 2017 02:02 spinesheath wrote:
On February 15 2017 01:25 Manit0u wrote:
On February 15 2017 00:43 Blisse wrote:
On February 14 2017 18:37 Manit0u wrote:
http://www.rubyraptor.org/how-we-made-raptor-up-to-4x-faster-than-unicorn-and-up-to-2x-faster-than-puma-torquebox/

A very nice read for anyone interested in web servers and building low level high performance applications. There's also plenty of good things to read in the links in this article. Sample:

[...] One of the powers granted by the low-levelness of pointers, is a technique called “pointer tagging”. It turns out that – on many popular platforms (including x86 and x86_64) – not all data stored in a pointer is actually used. That means that a pointer contains a bit of free space that we can use for our own purposes. A pointer in which we’ve inserted our own custom data is called a “tagged pointer”.

Why does a pointer contain free space? This has to do with data alignment. CPUs don’t like accessing data from arbitrary memory addresses. They would rather prefer to access data at memory offsets equal to some power-of-two multiple. For example, on x86 and x86-64, 32-bit integers should be aligned on a multiple of 32 bits (4 bytes). So an integer should be stored at memory address 0, or 4, or 8, or 12, …etc. On x86 and x86-64, accessing data at unaligned addresses results in a performance penalty. On other CPU architectures it’s even worse: accessing data at unaligned addresses would crash the program.

This is why the memory allocator always aligns the requested allocation. The size of the alignment depends on many different factors, and is subject to C structure packing rules. Suffice to say that, in the context of Phusion Passenger, the pointers that we tag refer to memory addresses that are aligned on at least a multiple of 4 bytes. Being aligned on a multiple of 4 bytes has an interesting implication: it means that the lower 2 bits of a pointer are always zero. That’s our free space in the pointer: we can use the lower 2 bits for our own purposes.

[...] 2 bits isn’t a lot, but it’s enough to store certain state information. For example, each client in Phusion Passenger can be in one of 3 possible connection states. Because there are only 3 possibilities, this information can be represented in 2 bits – exactly the amount of free space available in a pointer.

[code]
void setConnState(ConnState state) {
    // This code is not strictly conforming C++.
    // It has been simplified to make it more
    // readable.

    // Clear lower 2 bits of the server pointer.
    // 0x3 == 00000011 in binary.
    server = server & ~0x3;

    // Store state information in lower 2 bits
    // with bitwise OR.
    server = server | state;
}

ConnState getConnState() const {
    // This code is not strictly conforming C++.
    // It has been simplified to make it more
    // readable.

    // Extracts the lower 2 bits from the server pointer.
    // 0x3 == 00000011 in binary.
    return server & 0x3;
}
[/code]
[...]

Mind == blown.

Honestly it's a neat trick. But it's the stupidest thing I've ever heard.

So how much memory do we save by using this technique? It depends, and the answer is complicated. Normally, saving the state information in its own field requires 1 byte of memory. However, in order to satisfy alignment requirements, C data structures are padded – i.e. certain memory is deliberately reserved but not used. The Client structure contains many pointers and integers, so on x86-64, Client objects have to be aligned on a multiple of at least 8 bytes (that’s how big pointers are). Depending on where exactly the field is inserted inside a structure, the field could increase the size of the structure by 8 bytes in the worst case (again, assuming x86-64; the specifics depend on the CPU platform).
In case of Phusion Passenger, the Client structure happened to be in such a way that we were able to save 8 bytes per Client object by storing the state information in a tagged pointer. Besides the Client structure, there are also other places where we apply this technique.
Granted, 8 bytes don’t seem much, but Phusion Passenger 5 is fast because we’ve accumulated a ton of these kinds of micro-optimizations. This technique by itself doesn’t help much, but it’s about all the optimizations combined. And because CPU caches are already so tiny, every bit of memory reduction is welcome.
Honestly sounds like an engineer trying to justify a cool trick. Like they know that they could re-arrange the order of the other member variables in the class declaration to achieve the same effect? I'm sure that there was some gotcha where they needed to fit the whole 1-byte value somewhere, but c'mon this is silly. CPU caches are on the order of KB and MB. It's only justifiable in that they say everything they do is like this. Rest of the article is dope. lol linked strings. Fav quote: "This Least-Recently-Used policy of cache eviction is very simple, but has proven to be very effective in practice." Basically my OS class - LRU is somehow fastest in practice, second fastest is random.

Well, they specifically said they were only interested in L1 cache. i7 has 64KB of L1 cache per core. This isn't much, especially if your application has to instantiate 60k+ objects simultaneously - if you can shave 8 bytes off of each of them this is pretty huge for heavy CPU cache utilization.

I can see this sort of thing making sense for a framework like theirs. But you have to be really careful. If you end up exceeding the cache size anyway, all potential gains may be lost and you might even take a hit because you now need extra instructions to untangle your data. Those extra instructions might also cause you to load stuff into the instruction cache more frequently, which can be a huge performance penalty as well. It's also very dependent on external parameters like the kind of CPU you're currently running on and whether your application is the only thing running on that CPU or not. Certainly not something you should consider in a run-of-the-mill enterprise application.

I'm no expert on the subject and I believe they did do extensive testing and have people way smarter than me working on it. Thought it was an interesting article with some cool concepts to keep in mind.
Yeah I like a lot of the other things.
I'd be more convinced of the usefulness of this if they actually posted data. When the article says "this technique by itself doesn’t help much, but it’s about all the optimizations combined" instead of something like "fit 500 more clients into cache and improved maximum connections per second from 1000 to 20000", the whole thing sounds like an engineer who learned something cool about tagged pointers and is trying to justify using it, rather than a real trade-off explanation: the extra instructions from decoding the tags, where the cache hits and misses are occurring, and where the speed-ups are coming from.
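For anyone who wants to see the padding point from the quote in action, here's a rough sketch (hypothetical struct and member names, typical x86-64 alignment assumed, not the actual Passenger Client layout). Tacking a 1-byte state field onto a pointer-heavy struct can cost a full 8 bytes, while placing it next to an existing small member can cost nothing:

[code]
#include <cstdint>
#include <cstdio>

struct ClientA {            // baseline: no state field
    void*    server;
    uint64_t bytes_sent;
};                          // typically 16 bytes

struct ClientB {            // state byte appended at the end
    void*    server;
    uint64_t bytes_sent;
    uint8_t  state;         // 1 byte used, 7 bytes of padding added
};                          // typically 24 bytes

struct ClientC {            // state byte tucked into existing padding
    void*    server;
    uint32_t timeout_ms;
    uint8_t  state;         // fits in the padding after timeout_ms
    uint64_t bytes_sent;
};                          // typically 24 bytes: the extra byte is free here

int main() {
    // On a typical x86-64 compiler this prints: 16 24 24
    printf("%zu %zu %zu\n", sizeof(ClientA), sizeof(ClientB), sizeof(ClientC));
}
[/code]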
|
@travis
4 wires are going into the circuit. For a single wire, 1 means there's current, 0 means there isn't.
Separate them into 2 groups of two wires each. Each group is an 'input'. Therefore one input can be two bits wide.
Say the two inputs are 11 = 3 and 10 = 2 (in binary). Summing them you get 5, which is 101 in binary.
not sure about the output. gotta read full question for that
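If it helps, here's a quick sketch (just an illustration, not the actual assignment) that enumerates every pair of 2-bit inputs and prints the sum. Since the largest possible sum is 3 + 3 = 6 = 110, three output bits are enough, which would explain the 3 outputs:

[code]
#include <cstdio>

int main() {
    // a and b are the two 2-bit inputs (values 0..3).
    for (int a = 0; a < 4; ++a) {
        for (int b = 0; b < 4; ++b) {
            int sum = a + b;  // fits in 3 bits
            printf("a=%d%d  b=%d%d  ->  sum=%d%d%d\n",
                   (a >> 1) & 1, a & 1,
                   (b >> 1) & 1, b & 1,
                   (sum >> 2) & 1, (sum >> 1) & 1, sum & 1);
        }
    }
}
[/code]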
|
On February 15 2017 00:59 Buckyman wrote: The pointer tagging technique seems like an elegant way to implement garbage collection. One of the free bits (alternating per collection) flags a pointer as live, the other is zeroed at the same time to avoid the need to clean up separately.
Did some reading and apparently this was used in iOS at one point.
A significant example of the use of tagged pointers is the Objective-C runtime on iOS 7 on ARM64, notably used on the iPhone 5S. In iOS 7, virtual addresses are 33 bits (byte-aligned), so word-aligned addresses only use 30 bits (3 least significant bits are 0), leaving 34 bits for tags. Objective-C class pointers are word-aligned, and the tag fields are used for many purposes, such as storing a reference count and whether the object has a destructor.
Early versions of MacOS used tagged addresses called Handles to store references to data objects. The high bits of the address indicated whether the data object was locked, purgeable, and/or originated from a resource file, respectively. This caused compatibility problems when MacOS addressing advanced from 24 bits to 32 bits in System 7.
https://en.wikipedia.org/wiki/Tagged_pointer
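Buckyman's mark-bit idea above could look roughly like this. A minimal sketch, assuming the pointed-to objects are aligned to at least 4 bytes so the two low bits are free (names are made up, and the alternating two-bit scheme he describes would use both bits; this only shows the basic tag manipulation):

[code]
#include <cstdint>

// Wraps a pointer whose two low bits are known to be zero and uses
// bit 0 as a "live" mark for the current collection pass.
template <typename T>
class MarkablePtr {
public:
    explicit MarkablePtr(T* p) : bits_(reinterpret_cast<uintptr_t>(p)) {}

    T* get() const { return reinterpret_cast<T*>(bits_ & ~kMask); } // strip the tag bits
    void mark()       { bits_ |= 1u; }            // flag as live during a collection pass
    void clear_mark() { bits_ &= ~uintptr_t(1); } // zero the mark again
    bool is_marked() const { return bits_ & 1u; }

private:
    static constexpr uintptr_t kMask = 0x3;       // the two low bits reserved for tags
    uintptr_t bits_;
};
[/code]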
|
On February 15 2017 03:52 Acrofales wrote: Regarding +, I cannot think of a single context in which its meaning is not clear other than javascript (insert wat video here). In math, the algebraic operator of addition, and in some programming languages, string concatenation (and addition).
Or a positive sign indicator. Or as one of the general operators of a field.
|
On February 15 2017 06:04 Buckyman wrote:
On February 15 2017 03:52 Acrofales wrote: Regarding +, I cannot think of a single context in which its meaning is not clear other than javascript (insert wat video here). In math, the algebraic operator of addition, and in some programming languages, string concatenation (and addition).
Or a positive sign indicator. Or as one of the general operators of a field.
in regex it totally means another thing iirc? like (abc)+ means the abc sequence may occur 1 or more times, but not zero times
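For example (a quick sketch using C++'s std::regex, though the + quantifier behaves the same way in most regex flavors):

[code]
#include <iostream>
#include <regex>

int main() {
    std::regex re("(abc)+");   // '+' here means: one or more repetitions of the group

    std::cout << std::boolalpha
              << std::regex_match("abc", re)    << "\n"   // true
              << std::regex_match("abcabc", re) << "\n"   // true
              << std::regex_match("", re)       << "\n";  // false: zero repetitions not allowed
}
[/code]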
|
On February 15 2017 06:13 mantequilla wrote:
On February 15 2017 06:04 Buckyman wrote:
On February 15 2017 03:52 Acrofales wrote: Regarding +, I cannot think of a single context in which its meaning is not clear other than javascript (insert wat video here). In math, the algebraic operator of addition, and in some programming languages, string concatenation (and addition).
Or a positive sign indicator. Or as one of the general operators of a field.
in regex it totally means another thing iirc? like (abc)+ means the abc sequence may occur 1 or more times, but not zero times

Okay okay. But in any case, those are all standardized uses of the + operator. It's not some professor inventing shit in his class to confuse those who don't attend.
|
ok let's move on past my last stupid question
now we have a question posted:
"Assume x,y,z are all even. The largest number that we KNOW divides (x*y*z) is what?"
My guess is that the answer is 8? If x, y, and z are all even, then if any of them are 0, x*y*z is 0. Any integer divides 0. If they are not zero, then we know they are at minimum 2. 2*2*2 = 8. If they are bigger than 2, then the solution is still a multiple of 8. Example, 2*4*2 = 16.
Look right?
|
Since x, y and z can very well all be 2, and no number greater than 8 divides 8, we know the number can't be greater than 8. And since all even numbers are multiples of 2, we know that every such x*y*z is divisible by 8. So we have an upper bound, and we have verified that this upper bound solves the problem.
|
On February 15 2017 07:50 travis wrote: ok let's move on past my last stupid question
now we have a question posted:
"Assume x,y,z are all even. The largest number that we KNOW divides (x*y*z) is what?"
My guess is that the answer is 8? If x, y, and z are all even, then if any of them are 0, x*y*z is 0. Any integer divides 0. If they are not zero, then we know they are at minimum 2. 2*2*2 = 8. If they are bigger than 2, then the solution is still a multiple of 8. Example, 2*4*2 = 16.
Look right?
Yes.
By definition, x = 2n for some integer n. The same applies to y = 2m and z = 2k.
It follows that xyz = (2n)(2m)(2k) = 8nmk, which is as you say a multiple of 8.
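Not that the algebra needs it, but here's a quick brute-force sanity check over small even values (just a throwaway sketch):

[code]
#include <cassert>

int main() {
    // Every product of three even numbers in this range should be divisible by 8.
    for (int x = -10; x <= 10; x += 2)
        for (int y = -10; y <= 10; y += 2)
            for (int z = -10; z <= 10; z += 2)
                assert((x * y * z) % 8 == 0);
    return 0;
}
[/code]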
|
On February 15 2017 07:50 travis wrote: ok let's move on past my last stupid question
now we have a question posted:
"Assume x,y,z are all even. The largest number that we KNOW divides (x*y*z) is what?"
My guess is that the answer is 8? If x, y, and z are all even, then if any of them are 0, x*y*z is 0. Any integer divides 0. If they are not zero, then we know they are at minimum 2. 2*2*2 = 8. If they are bigger than 2, then the solution is still a multiple of 8. Example, 2*4*2 = 16.
Look right?
Any non-zero even number can be rewritten as 2 times some other integer. So we have: (2x')*(2y')*(2z') = 8x'y'z'
So... given we know nothing about x', y' or z', and they may very well all be 1, you figured it out: answer is 8.
E: ninja'd
|
I think it is interesting to sometimes step back and question what a symbol really means. For example, one of my biggest OMG moments in math was more than 20 years ago when I was shown the construction of the standard sets. Starting with N, which requires only a single set of axioms (Peano's), you proceed with building Z, Q, R, and C. Z and Q are fairly intuitive: negative numbers and fractions. But what makes R? Why is \pi in R and not 'banana'? Why is R better than Q? Why continue to C and not beyond? It was eye opening enough that these are among the few results that I can still remember after all this time.
|
On February 15 2017 09:18 Hanh wrote: I think it is interesting to sometimes step back and question what a symbol really means. For example, one of my biggest OMG moments in math was more than 20 years ago when I was shown the construction of the standard sets. Starting with N, which requires only a single set of axioms (Peano's), you proceed with building Z, Q, R, and C. Z and Q are fairly intuitive: negative numbers and fractions. But what makes R? Why is \pi in R and not 'banana'? Why is R better than Q? Why continue to C and not beyond? It was eye opening enough that these are among the few results that I can still remember after all this time.
Still not sure what this has to do with the symbols. Would your eyes have been less opened if he had never mentioned N, but always referred to it as "the set of natural numbers"? Don't get me wrong: learning the concepts behind the things we take for granted is both very interesting and absolutely necessary for a deeper understanding of those concepts, but it is the concepts you need to understand, and the symbol is simply shorthand for them. That said, the symbol is important. Referring to the set of natural numbers as \phi would be completely valid if you define it such, but it would make your text needlessly hard to read. N symbolizes the set of natural numbers through convention, and it's a part of science education to teach you the science, as well as the jargon and conventions used in the field.
I have rejected papers for needlessly inventing their own notation (granted, most of the time when people do that, there is a lot more wrong with the paper than just weird notation: the weird notation is simply an obvious sign that the author didn't know what he was talking about).
|
I wasn't replying to your comment...
I'm not saying there is any value in randomly calling something by a different name. But there is a purpose in not using the classic symbol if you aim to define it from scratch.
|
|
What level is your cousin at when it comes to math? I'd recommend a background of at least pre-calculus but preferably some calculus as well. If he goes to a school like mine, not having enough depth to his math background will slow him down.
|
[code]
require 'benchmark'

arr = Array(1..10_000_000)

def test_fun_guard(ary)
  return false if ary.empty?

  ary.dup.reverse
end

def test_fun_standard(ary)
  if ary.empty?
    false
  else
    ary.dup.reverse
  end
end

Benchmark.bmbm(10) do |bm|
  bm.report('guard') do
    test_fun_guard arr
  end

  bm.report('standard') do
    test_fun_standard arr
  end
end
[/code]
It's astounding how seemingly identical code (same logic basically) can impact performance:
[code]
Rehearsal ----------------------------------------------
guard        0.010000   0.020000   0.030000 (  0.020905)
standard     0.030000   0.010000   0.040000 (  0.042935)
------------------------------------- total: 0.070000sec

                 user     system      total        real
guard        0.000000   0.000000   0.000000 (  0.018477)
standard     0.010000   0.010000   0.020000 (  0.018885)
[/code]
I should definitely test if other interpreted languages behave in this way. If so, guard clauses all the way!
|
Has your cousin done any programming? Things have probably changed a lot, and you learn a bit of programming at school (although I guess that also depends on what school and where), but most of the people who did well and enjoyed CS at an undergrad level at my uni, had some prior experience with programming (caveat: this was 16 years ago). If he just likes playing computer games, it's not immediately clear that that will translate into enjoying studying CS, let alone working in the field. Things he might have done: create the website for his school sports team, create a script to automate some task that he thought was tedious, create a mod for a game he enjoys, or just programmed something else that he thought would be fun to do.
If he hasn't done any programming at all, maybe have him look at Scratch (code.org has a pretty good introduction... aimed at an ~12y.o. audience and their educators), codecombat.com or one of the other millions of somewhat gamified introductions to programming (there's an app called Lightbot which is pretty good for teaching the logic of programming, but is imho a bit too abstracted from actual coding for what your cousin would want to know). Most of the people on this forum probably learned with a book, but imho there are better resources out there right now to get "into it".
|
On February 15 2017 22:16 Manit0u wrote:
[code]
require 'benchmark'

arr = Array(1..10_000_000)

def test_fun_guard(ary)
  return false if ary.empty?

  ary.dup.reverse
end

def test_fun_standard(ary)
  if ary.empty?
    false
  else
    ary.dup.reverse
  end
end

Benchmark.bmbm(10) do |bm|
  bm.report('guard') do
    test_fun_guard arr
  end

  bm.report('standard') do
    test_fun_standard arr
  end
end
[/code]

It's astounding how seemingly identical code (same logic basically) can impact performance:

[code]
Rehearsal ----------------------------------------------
guard        0.010000   0.020000   0.030000 (  0.020905)
standard     0.030000   0.010000   0.040000 (  0.042935)
------------------------------------- total: 0.070000sec

                 user     system      total        real
guard        0.000000   0.000000   0.000000 (  0.018477)
standard     0.010000   0.010000   0.020000 (  0.018885)
[/code]
I should definitely test if other interpreted languages behave in this way. If so, guard clauses all the way!
Would be interesting to not only test this with other interpreted languages, but also with other implementations of ruby.
Btw, I think this kind of nonsense performance buff is a good case for compilers. Programmers should not have to think about performance at that level of abstraction when coding. They should think about correctness, readability, extensibility etc. (Yes, dear C programmers. I mean in a language like Ruby that is clearly not meant for performance optimizations like that.) Unfortunately, this kind of thinking leaks through all of Ruby: why are there two syntaxes for passing a block to a method? (I mean yield vs &block.) Should I not be bothered with the detail that it's faster if the block never gets a true reference but is rather just thrown onto the stack? In my humble opinion it would be way better to have only the &block syntax and leave the optimization to the compiler.
Thanks for sharing anyways. I think this kind of stuff is quite interesting and a nice playground. 
|