The Big Programming Thread - Page 930
Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
Silvanel
Poland4692 Posts
Among the many advantages of working with hardware, there are two (that I know of) significant disadvantages. One is that it is hard to work remotely, since you need the hardware itself. The other (which I just experienced today) is that it is painful and tiring to move. My team just had to move to a different floor in the same building and it took us most of the working day. And even now not everything is working. We have more space now, though.
Deleted User 3420
24492 Posts
What kind of hardware do you work with? I remember in high school I had a class where we worked with hardware in suboptimal conditions, and we'd break (often via static electricity) our hardware all the time. But we were kids, so we weren't too careful.
enigmaticcam
United States280 Posts
For all you algorithm enthusiasts, I got one that I haven't been able to figure out. Maybe you can help me.

You have a given set of items. All items have four metrics: A, B, C, and D. Combining items produces a net sum of all four metrics. For example, if you combine items 1 and 2 below, you get a net sum of 0 across all four metrics:

Item 1 (A = 1, B = 2, C = 3, D = 4)
Item 2 (A = -1, B = -2, C = -3, D = -4)
Sum (A = 0, B = 0, C = 0, D = 0)

Now let's say you have a target net sum. You want some combination of items that yields your target net sum, but using the fewest items possible. Example:

Item 1 (A = 0, B = 10, C = 0, D = 0)
Item 2 (A = 0, B = -5, C = 0, D = 0)
Item 3 (A = 0, B = 5, C = 0, D = 0)
Target: (A = 0, B = 5, C = 0, D = 0)

You can reach the target by combining items 1 and 2, but it would be more efficient to use just item 3 by itself.

I've tried a dynamic programming approach, something similar to the knapsack problem, but I can't figure out how to define the boundaries. A partial combination of a solution might seem way off, going negative or positive on some or all metrics, yet all together it matches perfectly. With the knapsack problem, by contrast, you can start at 0 weight and increment up to the max weight. Any ideas? Assume that the given set of items can in fact produce the target.
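For reference, a straightforward baseline (my own sketch, not from the thread) is to try subsets in order of increasing size and return the first one that hits the target. It's exponential in the worst case, but it is guaranteed to find a minimum-size combination:

```python
from itertools import combinations

def smallest_combination(items, target):
    """Return indices of a minimum-size subset of items whose per-metric
    sums equal target, or None if no subset works.
    items: list of (A, B, C, D) tuples; target: an (A, B, C, D) tuple."""
    for size in range(1, len(items) + 1):
        for combo in combinations(range(len(items)), size):
            sums = tuple(sum(items[i][m] for i in combo) for m in range(4))
            if sums == target:
                return list(combo)  # first hit at this size is minimal
    return None

items = [(0, 10, 0, 0), (0, -5, 0, 0), (0, 5, 0, 0)]
print(smallest_combination(items, (0, 5, 0, 0)))  # → [2]: item 3 alone beats items 1+2
```

Because sizes are tried in ascending order, the first match is always a smallest one, matching the example above where item 3 alone wins over the pair 1+2.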
TMG26
Portugal2017 Posts
Check out Z3 from Microsoft Research. I don't know if it supports MaxSMT, but even with plain SMT you can quickly work around it. It has a Python API that I used at university.
raNazUra
United States10 Posts
On December 14 2017 02:21 enigmaticcam wrote: For all you algorithm enthusiasts, I got one that I haven't been able to figure out. […]

Just so you're aware, this is going to take exponential running time. The single-metric case is equivalent to the subset sum problem, and the additional metrics (and negative values) mean the pseudopolynomial dynamic programming solution described there doesn't carry over directly.

The only way I can think of doing it that isn't brute force is to define OPT(a, b, c, d, n) = True if the sum (a, b, c, d) is reachable using some subset of the first n items. Your table would then be a 5-dimensional table where the first four dimensions are bounded by the minimum and maximum possible sums of each metric, and the last dimension is the number of items. Initialize the table as all False, mark the sums reachable with the first item, then fill in the table one slice at a time by looping over the remaining items.

And now that I think about it, this only checks for existence. If you want an actual set (or the minimal set), you can store pointers in the table as you go for how to create each possible set, and if there are two possible ways at any point, keep the smaller set. Then check OPT[target metrics][n] for your final answer.

Note that since this is pseudopolynomial, it's not necessarily better than brute force. Brute force is exponential in the number of items in the input, whereas the dynamic programming approach depends on the size of the values in the metrics. But that's the case for any pseudopolynomial DP approach to these kinds of problems, like knapsack or subset sum.

Edit: Of course, if this is for speed/production rather than just understanding the problem, you should probably do what TMG suggests, which is often the right answer for any NP-complete problem that you need to solve quickly =P
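The table-filling idea above can also be sketched with a dictionary keyed by reachable 4-metric sums instead of a dense 5-dimensional array, which sidesteps the min/max bound bookkeeping for negative values. This is my own sketch, not the poster's code; it tracks the fewest items needed per reachable sum:

```python
def min_items_dp(items, target):
    """best maps each reachable (A, B, C, D) sum to the fewest items
    producing it; each item is considered once, so items aren't reused."""
    best = {(0, 0, 0, 0): 0}
    for item in items:
        snapshot = dict(best)  # sums reachable without this item
        for state, count in snapshot.items():
            new_state = tuple(state[m] + item[m] for m in range(4))
            if count + 1 < best.get(new_state, float("inf")):
                best[new_state] = count + 1
        # iterating over the snapshot keeps this a 0/1 choice per item
    return best.get(tuple(target))  # None if the target is unreachable

items = [(0, 10, 0, 0), (0, -5, 0, 0), (0, 5, 0, 0)]
print(min_items_dp(items, (0, 5, 0, 0)))  # → 1 (item 3 alone)
```

As with the table version, this is pseudopolynomial: the dictionary can grow with the number of distinct reachable sums, which depends on the magnitudes of the metric values rather than just the item count.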
TMG26
Portugal2017 Posts
For any NP-complete problem you should convert to SAT or SMT if you need performance. There is plenty of research into those solvers, and there is active competition to keep improving them, so you can't compete with your own algorithm unless the use case involves so little data that the overhead of converting is bigger than the difference. That, or you might as well prove that NP-complete is actually part of P.
Deleted User 3420
24492 Posts
Would this be a linear programming problem? I feel like there are probably some heuristics, like looking for combinations that lead to the correct sum of all elements, or maybe trying to leave out linearly independent vectors first, since 2 or more of those equal one dependent vector (though what do I know, that may just end up making things worse).
Manit0u
Poland17203 Posts
On December 14 2017 02:21 enigmaticcam wrote: For all you algorithm enthusiasts, I got one that I haven't been able to figure out. […]

In Ruby you can do something like that:

I think that something like that might work... I mean, the script works fine, I'm just not sure if it's very optimal. You can check it out here: https://repl.it/ (just choose Ruby)

Edit: I should clarify that I'm sure the script isn't very optimal. There's plenty of duplicated chosen being selected. I think you could maybe optimize it with some memoization or something.
Manit0u
Poland17203 Posts
Note that there are plenty of superfluous steps, but it's easier to debug this way and easier to see what could be optimized further.
Silvanel
Poland4692 Posts
On December 14 2017 02:13 travis wrote: What kind of hardware do you work with. I remember in high school I had a class where we worked with hardware in suboptimal conditions, and we'd break (often via static electricity) our hardware all the time. But we were kids so we weren't too careful.

We make hardware and software for automotive (infotainment, navigation, updates, bluetooth, audio, etc.). My location works mainly for Mercedes. We use something called a testbench to connect to the car computer (which is in fact several separate computers) and simulate the environment. And if you need several of them (like I do), it takes a lot of space and time to set up.
Manit0u
Poland17203 Posts
Input: range of numbers from 0 to 1,000,000 (ids)

Operations:
- all even numbers get double the sum of their siblings added
- all odd numbers get (sum of siblings / 3) deducted from them (saved as the absolute value of this operation)
- all odd numbers are discarded after the above operations
- all numbers within delta 1 of the arithmetic average of the remaining numbers are discarded
- numbers are sorted in descending order

Output:
- doesn't specify; I assume the count of numbers left is sufficient

My solution: + Show Spoiler +
It's as stupid as it is useless, I guess... It's also not very specific, so I had to assume some things (like operations on odd and even numbers happening in isolation to prevent endless loops, and the delta value - the original assignment only mentioned 'close to arithmetic average').
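Under those same assumptions (siblings meaning all the other numbers, and both parity steps reading the original values), the pipeline could look something like this in Python. This is a sketch of the assignment as described, not the poster's actual solution:

```python
def process(numbers, delta=1.0):
    total = sum(numbers)
    # Each even number gains double the sum of its siblings (every other number);
    # each odd number loses a third of its sibling sum, kept as an absolute value.
    # Both steps read the original values ("in isolation", per the assumption above).
    evens = [n + 2 * (total - n) for n in numbers if n % 2 == 0]
    odds = [abs(n - (total - n) / 3) for n in numbers if n % 2 == 1]
    del odds  # odd numbers are discarded once their step has run
    # Drop anything within delta of the arithmetic average of what's left.
    avg = sum(evens) / len(evens)
    remaining = [n for n in evens if abs(n - avg) > delta]
    return sorted(remaining, reverse=True)

print(process(list(range(10))))  # → [90, 88, 84, 82], so 4 numbers survive
```

On 0..9 the sibling sum for an even n is 45 - n, so each even becomes 90 - n; the average of those is 86, which gets dropped by the delta filter, leaving four numbers.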
Acrofales
Spain17852 Posts
+ Show Spoiler +
I hope I wasn't dumb there. Edit: added code tags
phar
United States1080 Posts
On December 13 2017 19:56 travis wrote: But how is that different from devising a formula or recipe for a chemical, medicine, material, etc.? That's also just math. It's literally an algorithm. So why is it commonly accepted that patents for this make sense, but patents for digital recipes do not? I think the distinction is arbitrary and unfair. (That said, you actually can patent algorithms and ideas, if properly presented to the patent office. But you aren't supposed to be able to.)

Bear in mind that a lot of drug patents these days tend to be on the synthesis or the delivery system, not just the underlying chemical itself. This shit is very complicated, and there's an insane amount of money in it.
Deleted User 3420
24492 Posts
And this is without having to hash every element as a key pointing to the sets. Without having to hash any of the elements, actually. They are kind of specialized in use, though.
phar
United States1080 Posts
Yeah, here's the MSR paper on it from a while back: https://www.microsoft.com/en-us/research/wp-content/uploads/2011/01/p255-DingKoenig.pdf I am skeptical about constant-time equality.
Deleted User 3420
24492 Posts
Perhaps it is good for certain applications, though. But something I didn't see (admittedly I only skimmed the article) is what the overhead is for a single add or delete on one of their sets. I have a suspicion it isn't pretty. Regarding constant-time equality, it's already out there! I think, umm, Daniel Yellin was the author of the paper I saw. I also saw one from Watson Labs.

And moving to a different topic, I have a question for you guys that probably isn't asked much. Let's say I simply have too much data to store in memory, so I need to write it to the hard drive. Say I have a 10 GB array of data. I write it to the HD, and then do a series of operations on it (let's say 10), where I pull out a chunk of it (maybe 5 GB), do the operations, and write the results to the HD. Eventually, I will have a second array on the HD too (where all my results went). So actual operations start once I have my data prepared in memory, but between sets of operations I am reading or writing a huge chunk of data to the HD.

What kind of performance hit will I be taking, assuming the user has an SSD? Is there a chance data will be corrupted (enough to worry about)? If so, how could I mitigate that? Thanks!
phar
United States1080 Posts
On December 15 2017 23:48 travis wrote: What kind of performance hit will I be taking, assuming the user has a SSD.

If you can do the disk writes asynchronously while doing other stuff, there's not much to worry about. If your computation generates less data than the sequential write speed of your SSD (what, 500 MB/s or higher now?) then it won't bottleneck there. If you need to do blocking writes bit by bit, then the latency of an SSD write is something on the order of 10,000x slower than memory, 100,000x slower than L3 cache, and 1,000,000x (or more) slower than L1 cache / registers. But that's just for the initial write, so it depends on how much you need to block on it, and what your throughput is.

On December 15 2017 23:48 travis wrote: Is there a chance data will be corrupted(enough to worry about)? If so, how could I mitigate that?

Yes. Mitigation: checksums, different filesystems (ZFS is a thing? idk), writing to multiple disks, etc. It really depends on which failure cases you care about. Do you just want to avoid local file corruption, but don't care if e.g. the building burns down and you lose all data? Then it's pretty easy. If you're like Amazon/Google/etc. and want to fail gracefully when a whole datacenter gets nuked, then you need to be a bit more careful and write to a proper distributed file storage system.

Re: the Yellin paper, I'm unfamiliar, but a quick search shows a couple of Yellin algorithms for set equality that are O(log(n)) and O((log(n))^2). Not sure about constant, that still feels off.
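The checksum idea is easy to bolt on: compute a digest while streaming each chunk to disk, then verify it on the way back in. A minimal sketch (the function names and chunking scheme are mine, not from any particular library):

```python
import hashlib
import os

CHUNK = 1024 * 1024  # stream in 1 MiB pieces so the data is never copied whole

def write_with_checksum(path, data):
    """Write bytes to path, fsync, and return a sha256 hex digest
    computed over the same bytes that went to disk."""
    digest = hashlib.sha256()
    with open(path, "wb") as f:
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest.update(chunk)
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the OS actually pushed it to the device
    return digest.hexdigest()

def read_and_verify(path, expected):
    """Read the file back, raising if the stored bytes no longer match."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise IOError(f"checksum mismatch: {path} is corrupted")
```

This only detects corruption; recovering from it still needs a second copy (another disk, or re-deriving the chunk from the source array).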
Isualin
Germany1903 Posts
HTTPS access is restricted to admins on TeamLiquid. Does anyone know why this is the case? I mean, why wouldn't they use HTTPS for everyone?
Acrofales
Spain17852 Posts
On December 16 2017 15:50 Isualin wrote: https access is restricted to admins on teamliquid, does anyone know why this is the case? i mean why wouldn't they use https for everyone http://www.teamliquid.net/forum/website-feedback/527276-https-access-only-available-to-staff | ||