|
Thread Rules 1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution. 2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20) 3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible. 4. Use [code] tags to format code blocks. |
|
wtf
I have no idea what this question is asking for
"take the truth table for a 2 bit adder and make a circuit with 2 inputs and 3 outputs. then... [answer some questions about it]"
how the hell do you make a 2 bit adder with 2 inputs. it requires 4 inputs. it adds 2 bit numbers together.
|
On February 14 2017 07:26 travis wrote: find an infinite domain where the statement is true OR show that there is NO such domain. explain. then do the same but attempting to show that the statement is false.
(for all x)(there exists y)[(x < y < 1) and (not (x < z < y))]
You're looking for a domain where elements can get arbitrarily close to 1 but with gaps between them.
X = {1-1/n, for n in N*} will work
given x = 1-1/n, pick y = 1-1/(n+1)
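Spelling out the check for that construction (my own write-up of it): with \(x = 1 - \tfrac{1}{n}\) and \(y = 1 - \tfrac{1}{n+1}\) we clearly have \(x < y < 1\), and any other element of \(X\) is \(z = 1 - \tfrac{1}{m}\) for some \(m\) in N*; if \(m \le n\) then \(z \le x\), and if \(m \ge n+1\) then \(z \ge y\), so no element of \(X\) lies strictly between \(x\) and \(y\).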
|
wow man that's seriously clever ty
did the prof really expect us to get that? geeze
|
http://www.rubyraptor.org/how-we-made-raptor-up-to-4x-faster-than-unicorn-and-up-to-2x-faster-than-puma-torquebox/
A very nice read for anyone interested in web servers and building low level high performance applications.
There's also plenty of good things to read in the links in this article.
Sample:
[...] One of the powers granted by the low-levelness of pointers, is a technique called “pointer tagging”. It turns out that – on many popular platforms (including x86 and x86_64) – not all data stored in a pointer is actually used. That means that a pointer contains a bit of free space that we can use for our own purposes. A pointer in which we’ve inserted our own custom data is called a “tagged pointer”.
Why does a pointer contain free space? This has to do with data alignment. CPUs don’t like accessing data from arbitrary memory addresses. They would rather prefer to access data at memory offsets equal to some power-of-two multiple. For example, on x86 and x86-64, 32-bit integers should be aligned on a multiple of 32 bits (4 bytes). So an integer should be stored at memory address 0, or 4, or 8, or 12, …etc. On x86 and x86-64, accessing data at unaligned addresses results in a performance penalty. On other CPU architectures it’s even worse: accessing data at unaligned addresses would crash the program. This is why the memory allocator always aligns the requested allocation. The size of the alignment depends on many different factors, and is subject to C structure packing rules. Suffice to say that, in the context of Phusion Passenger, the pointers that we tag refer to memory addresses that are aligned on at least a multiple of 4 bytes. Being aligned on a multiple of 4 bytes has an interesting implication: it means that the lower 2 bits of a pointer are always zero. That’s our free space in the pointer: we can use the lower 2 bits for our own purposes.
[...] 2 bits isn’t a lot, but it’s enough to store certain state information. For example, each client in Phusion Passenger can be in one of 3 possible connection states. Because there are only 3 possibilities, this information can be represented in 2 bits – exactly the amount of free space available in a pointer.
[code]
void setConnState(ConnState state) {
    // This code is not strictly conforming C++.
    // It has been simplified to make it more readable.

    // Clear lower 2 bits of the server pointer.
    // 0x3 == 00000011 in binary.
    server = server & ~0x3;

    // Store state information in lower 2 bits with bitwise OR.
    server = server | state;
}

ConnState getConnState() const {
    // This code is not strictly conforming C++.
    // It has been simplified to make it more readable.

    // Extracts the lower 2 bits from the server pointer.
    // 0x3 == 00000011 in binary.
    return server & 0x3;
}
[/code]
[...]
Mind == blown.
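If you want to poke at the trick outside of the article's codebase, here's a minimal self-contained sketch of the same idea (my own names, not Passenger's actual code; it assumes the pointed-to object is aligned to at least 4 bytes, which the assert checks):
[code]
#include <cassert>
#include <cstdint>
#include <cstdio>

enum ConnState : uintptr_t { DISCONNECTED = 0, CONNECTING = 1, ACTIVE = 2 };

struct Client {
    // Low 2 bits of 'server' hold a ConnState; the remaining bits are the pointer.
    uintptr_t server = 0;

    void setServer(void* p) {
        // The pointer must be at least 4-byte aligned so the low 2 bits are free.
        assert((reinterpret_cast<uintptr_t>(p) & 0x3) == 0);
        server = reinterpret_cast<uintptr_t>(p) | (server & 0x3);  // keep existing state bits
    }
    void* getServer() const {
        return reinterpret_cast<void*>(server & ~uintptr_t{0x3});  // mask the tag off
    }

    void setConnState(ConnState s) { server = (server & ~uintptr_t{0x3}) | s; }
    ConnState getConnState() const { return static_cast<ConnState>(server & 0x3); }
};

int main() {
    static int backend = 0;   // ints are at least 4-byte aligned on common platforms
    Client c;
    c.setServer(&backend);
    c.setConnState(ACTIVE);
    std::printf("server=%p state=%d\n", c.getServer(), static_cast<int>(c.getConnState()));
}
[/code]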
|
|
On February 14 2017 07:26 travis wrote: Let's go over some discrete math questions I am unsure about!
give an infinite subset X in Q that contains no elements in set N my answer: Q less than 0 (this one seemed easy but it's weird so im checking)
That works. Alternatively Q - N 
give an infinite subset X in R that contains no elements in set Q my answer: the set of irrational numbers? though I know of no actual proof that this set is infinite in size, I am just guessing it is
R - Q is infinite, and the proof is basically just a corollary of the proof that there are irrational numbers in the first place.
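One concrete way to see it (a standard argument): if \(\sqrt{2} + n\) were rational for some natural \(n\), then \(\sqrt{2} = (\sqrt{2} + n) - n\) would be rational too, which it isn't; so \(\{\sqrt{2} + n : n \in \mathbb{N}\}\) already gives you an infinite subset of \(\mathbb{R} \setminus \mathbb{Q}\).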
find an infinite domain where the statement is true OR show that there is NO such domain. explain. then do the same but attempting to show that the statement is false.
(for all x)(there exists y)[(x < y < 1) and (not (x < z < y))]
for False, I say the natural numbers. If y < 1, then y must be 0. So x cannot be less than y.
That works.
for True, we know our domain must not be a dense set (I've got this right, correct?) since there is no z between x and y. So we are dealing with integers or some such. But there doesn't seem to be any domain that actually works here, because we can't find a y > x and less than 1 in a set that isn't dense.
Hanh already answered you.
On February 14 2017 09:17 travis wrote: wtf
I have no idea what this question is asking for
"take the truth table for a 2 bit adder and make a circuit with 2 inputs and 3 outputs. then... [answer some questions about it]"
how the hell do you make a 2 bit adder with 2 inputs. it requires 4 inputs. it adds 2 bit numbers together.
Agreed. Ask your professor. Unless he literally means a circuit to add 2 bits together, but then you don't need 3 outputs...
|
On February 14 2017 09:17 travis wrote: wtf
I have no idea what this question is asking for
"take the truth table for a 2 bit adder and make a circuit with 2 inputs and 3 outputs. then... [answer some questions about it]"
how the hell do you make a 2 bit adder with 2 inputs. it requires 4 inputs. it adds 2 bit numbers together.
By "2 inputs" your prof may have meant not a 2-bit-wide input, but 2 inputs, each 2 bits wide. For all you know it could also have been a 2-bit adder with 3 inputs, which adds three 2-bit numbers.
Dunno what the 3 outputs could be. Maybe he has additional outputs other than the result of the summation.
|
Well, it could be 2 inputs (of 2 bits) and 3 outputs (of 1 bit)...
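If that reading is right, the three 1-bit outputs would just be the 3-bit sum (3 + 3 = 6 fits in 3 bits), and the truth table is easy to enumerate. A rough sketch under that assumption:
[code]
#include <cstdio>

int main() {
    // Two 2-bit inputs (a1 a0, b1 b0) and three 1-bit outputs (s2 s1 s0).
    // Maximum sum is 3 + 3 = 6, which fits in 3 bits.
    std::printf("a1 a0  b1 b0 | s2 s1 s0\n");
    for (int a = 0; a < 4; ++a) {
        for (int b = 0; b < 4; ++b) {
            int s = a + b;
            std::printf(" %d  %d   %d  %d |  %d  %d  %d\n",
                        (a >> 1) & 1, a & 1,
                        (b >> 1) & 1, b & 1,
                        (s >> 2) & 1, (s >> 1) & 1, s & 1);
        }
    }
}
[/code]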
|
On February 14 2017 22:38 Hanh wrote: Well, it could be 2 inputs (of 2 bits) and 3 outputs (of 1 bit)... That would have been my assumption too. But having to assume such things is not a sign for a well-phrased question.
|
On February 14 2017 22:53 Khalum wrote: On February 14 2017 22:38 Hanh wrote: Well, it could be 2 inputs (of 2 bits) and 3 outputs (of 1 bit)... That would have been my assumption too. But having to assume such things is not a sign of a well-phrased question.
I had a professor who used a symbol for a variable during lessons and expected us to know it during the exam. And it wasn't a universally recognized symbol either, like 'a' for acceleration.
It's like if I used the letter 'u' to denote density in class, and then on the exam, without any explanation that 'u' means density, expected students to understand it, since they must have attended the class. Jeez.
ps: happy cakes
|
That is really important. I had a great prof teaching us logic and he used to define every symbol he would be using at the beginning of each chapter. Really good practice. Also eye-opening. He sometimes used different symbols than the commonly used ones to force students to think about the underlying meaning of the symbol rather than the symbol itself. Great guy.
|
On February 15 2017 00:08 Silvanel wrote: That is realy important. I had a great prof teaching us logic and he used to define every symbol he will be using at begining of each chapter. Really good practice. Also eye opening. He sometimes used different symbols than commonly used to force students to think about underlying meaning of the symbol rather than symbol itself. Great guy. Not sure how useful that is. Yes, you have to understand the underlying meaning, but we use symbols as shorthand for that meaning, and the reason symbols are useful at all is because of that agreed upon meaning. I could start my class redefining + to mean - and vice versa. That would definitely screw everybody at the math exam who didn't attend class. But what is the point? Why should the professor care whether or not people attend class? If they show up for the exam and pass, then that's good on them. If they show up for the exam and fail, that is (possibly) a problem for the professor, because he may be evaluated on passing % and such, but isn't it even worse if people would have passed the exam if it used standard symbols, but failed because the professor isn't actually asking whether you know the subject matter, but whether you attended class?
So there are plenty of situations where symbolism is not unique... for instance, I know of 3 separate symbols that are all used in a standard manner to mean material implication. And the teacher using one, and the book using another is fine. But thinking up non-standard symbols just to screw with anybody who misses a class at a university level course seems petty, and stupid. You want people to attend class? Provide added value over the written text.
|
On February 14 2017 18:37 Manit0u wrote: [pointer-tagging article quote snipped - see the post above] Mind == blown.
Honestly it's a neat trick.
But it's the stupidest thing I've ever heard.
So how much memory do we save by using this technique? It depends, and the answer is complicated. Normally, saving the state information in its own field requires 1 byte of memory. However, in order to satisfy alignment requirements, C data structures are padded – i.e. certain memory is deliberately reserved but not used. The Client structure contains many pointers and integers, so on x86-64, Client objects have to be aligned on a multiple of at least 8 bytes (that’s how big pointers are). Depending on where exactly the field is inserted inside a structure, the field could increase the size of the structure by 8 bytes in the worst case (again, assuming x86-64; the specifics depend on the CPU platform).
In case of Phusion Passenger, the Client structure happened to be in such a way that we were able to save 8 bytes per Client object by storing the state information in a tagged pointer. Besides the Client structure, there are also other places where we apply this technique.
Granted, 8 bytes don’t seem much, but Phusion Passenger 5 is fast because we’ve accumulated a ton of these kinds of micro-optimizations. This technique by itself doesn’t help much, but it’s about all the optimizations combined. And because CPU caches are already so tiny, every bit of memory reduction is welcome.
Honestly, it sounds like an engineer trying to justify a cool trick. Surely they know they could re-arrange the order of the other member variables in the class declaration to achieve the same effect? I'm sure there was some gotcha where they needed to fit the whole 1-byte value somewhere, but c'mon, this is silly. CPU caches are on the order of KB and MB. It's only justifiable in that they say everything they do is like this.
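To make the re-arranging point concrete, here's a toy illustration of struct padding (field names made up, x86-64 with 8-byte pointers assumed; exact sizes depend on the ABI):
[code]
#include <cstdio>

// Hypothetical layouts, not Passenger's actual Client struct.
struct Scattered {      // each char drags in 7 bytes of padding
    char  stateA;       // 1 + 7 padding
    void* server;       // 8
    char  stateB;       // 1 + 7 padding
    void* next;         // 8   -> typically sizeof == 32
};

struct Grouped {        // small fields grouped so they share the padding
    void* server;       // 8
    void* next;         // 8
    char  stateA;       // 1
    char  stateB;       // 1 + 6 padding -> typically sizeof == 24
};

int main() {
    std::printf("Scattered: %zu bytes, Grouped: %zu bytes\n",
                sizeof(Scattered), sizeof(Grouped));
}
[/code]
Presumably the gotcha is the case where there is no spare padding left to tuck the byte into, in which case stashing the 2-bit state inside an existing pointer avoids adding a field at all.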
Rest of the article is dope. lol linked strings.
Fav quote
This Least-Recently-Used policy of cache eviction is very simple, but has proven to be very effective in practice.
Basically my OS class - LRU is somehow fastest in practice, second fastest is random.
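Since LRU came up: a minimal sketch of the policy itself (hash map + doubly linked list), just to make "least recently used" concrete - this is not the article's code:
[code]
#include <cstddef>
#include <list>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>

// Most recently used entries sit at the front of the list; eviction pops the back.
class LruCache {
public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    void put(const std::string& key, const std::string& value) {
        auto it = index_.find(key);
        if (it != index_.end()) {
            it->second->second = value;
            entries_.splice(entries_.begin(), entries_, it->second);  // move to front
            return;
        }
        if (entries_.size() == capacity_) {
            index_.erase(entries_.back().first);  // evict the least recently used entry
            entries_.pop_back();
        }
        entries_.emplace_front(key, value);
        index_[key] = entries_.begin();
    }

    std::optional<std::string> get(const std::string& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return std::nullopt;
        entries_.splice(entries_.begin(), entries_, it->second);      // touch on access
        return it->second->second;
    }

private:
    using Entry = std::pair<std::string, std::string>;
    std::size_t capacity_;
    std::list<Entry> entries_;
    std::unordered_map<std::string, std::list<Entry>::iterator> index_;
};
[/code]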
|
On February 15 2017 00:33 Acrofales wrote: [quote snipped - see above] ... I could start my class redefining + to mean - and vice versa.
Well, I will repeat: it was a "logic" class. BTW: you do know that "+" doesn't mean the same thing in every theory, right?
PS. Also, I never said my prof would fail someone for that. Our exams were both written and oral. You could write everything perfectly and fail if you couldn't show that you actually understood what you had written. Also, you could pass with some written mistakes if your oral part went well and you could demonstrate understanding of the subject.
|
The pointer tagging technique seems like an elegant way to implement garbage collection. One of the free bits (alternating per collection) flags a pointer as live, the other is zeroed at the same time to avoid the need to clean up separately.
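A rough sketch of what that could look like (purely hypothetical - not Passenger's code or any real collector): one of the two free bits means "live" this cycle, the other is cleared by the same write, and the two swap roles every collection so the bits never need a separate clearing pass.
[code]
#include <algorithm>
#include <cstdint>
#include <vector>

struct TinyHeap {
    std::vector<uintptr_t> objects;  // tagged pointers to 4-byte-aligned objects
    uintptr_t markBit = 0x1;         // which of the two low bits means "live" this cycle

    // Marking sets this cycle's bit and zeroes the other one in the same write.
    void mark(uintptr_t& p) { p = (p & ~uintptr_t{0x3}) | markBit; }

    void sweep() {
        // Anything without this cycle's bit set was never reached.
        objects.erase(std::remove_if(objects.begin(), objects.end(),
                                     [&](uintptr_t p) { return (p & markBit) == 0; }),
                      objects.end());
        markBit ^= 0x3;              // swap the roles of the two bits for the next cycle
    }
};
[/code]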
|
On February 15 2017 00:43 Blisse wrote: [quote snipped - see the post above]
Well, they specifically said they were only interested in L1 cache. An i7 has 64KB of L1 cache per core (32KB data + 32KB instruction). This isn't much, especially if your application has to instantiate 60k+ objects simultaneously - if you can shave 8 bytes off each of them, that's pretty significant for heavy CPU cache utilization.
|
My cousin is in grade 12 and he's thinking of majoring in CS when he goes to uni. He knows I've taken a few classes and have been doing self learning over the past year so he asked me some questions, one of which was "what's a good online course I can take to get a head start". I am thinking of recommending the intro to programming on Udacity as well as Harvard's CS50x since he wants more of an actual structured class. Any other options? Has anyone taken either of those, and would you recommend them?
|
On February 15 2017 01:25 Manit0u wrote: [quote snipped - see the post above]
I can see this sort of thing making sense for a framework like theirs. But you have to be really careful. If you end up exceeding the cache size anyway, all potential gains may be lost and you might even take a hit because you now need extra instructions to untangle your data. Those extra instructions might also cause you to load stuff into the instruction cache more frequently, which can be a huge performance penalty as well. It's also very dependent on external parameters, like the kind of CPU you're currently running on and whether your application is the only thing running on that CPU or not.
Certainly not something you should consider in a run of the mill enterprise application.
|
On February 15 2017 02:02 spinesheath wrote: [quote snipped - see the post above]
I'm no expert on the subject, and I believe they did do extensive testing and have people way smarter than me working on it. I thought it was an interesting article with some cool concepts to keep in mind.
|