|
Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
|
On November 29 2017 08:09 Blitzkrieg0 wrote:
On November 29 2017 08:05 Excludos wrote:
On November 29 2017 07:26 Blitzkrieg0 wrote:
On November 29 2017 07:15 Excludos wrote:
You can't expect someone to know of every possible scenario they might come across. And if you can't test for that, all you can really test for is a super easy fizzbuzz function, which tells you exactly nothing about a person beyond "he does indeed know what a for loop is".

You keep harping on this, but that is exactly what we're testing for. What percentage of candidates do you think we interview that cannot write fizzbuzz? We do this for our interns and our entry/new-grad level positions.

Exactly what kind of education do you guys have? In Norway computer science is an engineering degree on the same line as electrical engineering, biochemistry or mechanical engineering. It requires semi-decent grades and quite a bit of math, physics, a bit of chemistry, and a few other general subjects. A for loop is literally what you learn in the first week, and you finish the third year by spending the last 6 months programming a fully functioning program and writing a full thesis to go along with it. I would have suspected that most of the world operates like this? If so, there is simply no way you don't know how to do extremely basic things like a loop after graduating.

I thought the same thing until I interviewed people who couldn't write fizzbuzz with CS degrees.
Yeah, anyone telling you fizz-buzz-style questions aren't necessary has not done enough interviews, or is at a tiny company that can be very picky about who it interviews (or is unaware of a pipeline in front of them that filters out people who can't do fizz buzz).
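For anyone who hasn't seen it: fizzbuzz just prints 1 through 100, replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz". A minimal sketch in Matlab (any language works, this is just the one used later in the thread):
[code]
for i = 1:100
    if mod(i, 15) == 0        % divisible by both 3 and 5
        disp('FizzBuzz')
    elseif mod(i, 3) == 0
        disp('Fizz')
    elseif mod(i, 5) == 0
        disp('Buzz')
    else
        disp(i)
    end
end
[/code]
That is the entire question; the point of the filter is that it only tests whether the candidate can write a loop and a conditional at all.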
|
100k?
And the only operation is squaring? A couple of seconds at most, probably less if you use the right tools (e.g. Matlab): modern personal computers can do a couple of GFLOPS, i.e. a few billion floating-point operations per second. Memory isn't a problem either: when squaring, you only need a single row and a single column at a time. But even if you needed to store the entire matrix three times, and the matrix contains doubles, you're still only up to 2.4 MB of memory.
1 million is still not a problem. Remember: matrix multiplication is O(n^3), with n the row/column length.
Anyway, just run some tests! Generate large matrices with random data and multiply them. You'll quickly see what your computer can handle.
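A sketch of such a test in Matlab (the sizes below are arbitrary examples; pick whatever your machine tolerates):
[code]
% Time dense squaring for a few matrix sizes to see how it scales
for n = [500 1000 2000 4000]
    A = rand(n);
    tic; B = A * A; t = toc;
    fprintf('n = %d: %.3f s\n', n, t);
end
[/code]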
|
On November 29 2017 15:25 phar wrote: Yeah, anyone telling you fizz-buzz-style questions aren't necessary has not done enough interviews, or is at a tiny company that can be very picky about who it interviews (or is unaware of a pipeline in front of them that filters out people who can't do fizz buzz).
That's not necessarily true. Different companies have different recruiting schemes. For example, in my company most people are either recommended by present employees or found by headhunters. Then HR vets them for their experience, salary expectations and so on, and only after that do they have an interview with technical staff. We don't ask fizzbuzz questions at any point, even for junior positions. For junior testers, you get some (simple) program and its requirements and are told to find as many bugs as possible. For junior developers there are some simple tasks/questions depending on the technology, but no fizzbuzz. And besides the tasks there is always a conversation. Every candidate gets his chance. And we are by no means a small company. Granted, they might do things differently in other locations, but that's how it is done at our Polish location.
I do get what you are saying; there are tech companies in my town that do things this way. They invite a hell of a lot of people, throw fizzbuzz/language standardized tests at them, and invite the select few with the best results to interview. You can do things this way. But it is not the only way.
|
On November 29 2017 17:54 Silvanel wrote: We don't ask fizzbuzz questions at any point, even for junior positions. [...] And we are by no means a small company.

Fizzbuzz is being used as a catch-all for "simple programming tasks", which you do ask. In fact, I'd argue that hunting for bugs in someone else's deliberately crappy code is worse than programming a fizzbuzz algorithm from scratch, both in terms of difficulty and in terms of what you can learn from it about the applicant.
|
Well, sure, you can argue that. But you misunderstood me: I didn't say anything about looking at some crappy code. You have static analysis tools, unit tests and reviewers for that. I am talking about finding bugs in a working program, not in code.
And regarding fizzbuzz usage: someone above mentioned a "for" loop, and in contrast to that I want to clarify that we do not consider writing a for loop a "simple programming task".
|
On November 29 2017 15:42 Acrofales wrote: Anyway, just run some tests! Generate large matrices with random data and multiply them. You'll quickly see what your computer can handle.
Doing fast matrix multiplication is mostly about optimizing register/cache usage. The bottleneck isn't the CPU unless you use "special tactics".
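To illustrate the blocking idea, here is a minimal sketch (the size n and block size b are made-up values, with b dividing n; note that Matlab's built-in A*B already delegates to a cache-tuned BLAS, so this is purely didactic):
[code]
% Blocked (tiled) matrix multiplication: work on cache-sized blocks
% so each block of A and B is reused many times while it is hot.
n = 2000; b = 200;
A = rand(n); B = rand(n); C = zeros(n);
for ii = 1:b:n
    for jj = 1:b:n
        for kk = 1:b:n
            I = ii:ii+b-1; J = jj:jj+b-1; K = kk:kk+b-1;
            C(I, J) = C(I, J) + A(I, K) * B(K, J);   % one block pair at a time
        end
    end
end
[/code]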
|
On November 29 2017 07:15 Excludos wrote: [..] Another reason why live programming isn't very helpful. Not only does a person need to program, he needs to be able to learn. And that means googling... a lot. On my first job out of college I literally had to learn the entirety of the Qt library, a third-party SDK [..]
If you used 'literally' correctly, you might be the only person on the planet who knows the entirety of Qt by heart.
|
On November 29 2017 15:42 Acrofales wrote: Anyway, just run some tests! Generate large matrices with random data and multiply them. You'll quickly see what your computer can handle.
Checked with:
[code]
A = rand(n, n); tic; A = A * A; toc
[/code]
n = 100k takes 74.5 GB (n = 1 million is 7450 GB). As Hanh said, you will need some algorithm that minimizes the communication time between memory levels, since at that size you'll likely have to store on the hard drive (SSD).
FYI, I ran n = 10k; it took 9.323 seconds on my desktop.
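Those figures follow directly from 8 bytes per double; a quick sanity check:
[code]
% Memory for a dense n-by-n matrix of doubles (8 bytes per entry)
n = 1e5;
n^2 * 8 / 2^30        % ~74.5 GiB for n = 100k
(1e6)^2 * 8 / 2^30    % ~7450 GiB for n = 1 million
[/code]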
|
On November 29 2017 19:38 emperorchampion wrote: n = 100k takes 74.5 GB (n = 1 million is 7450 GB). [...] FYI, I ran n = 10k; it took 9.323 seconds on my desktop.
They talked about n elements in the matrix (at least the initial post did), not about n*n elements. But sure, 10 billion entries will blow up your memory.
|
On November 29 2017 19:49 mahrgell wrote: They talked about n elements in the matrix (at least the initial post did), not about n*n elements. But sure, 10 billion entries will blow up your memory.
Ah, my bad, I thought Travis was talking about 100k x 100k. So yeah, as Acrofales said, 1 million elements is nothing.
edit: runtime is 0.021 seconds on my system.
|
On November 29 2017 19:26 Khalum wrote: If you used 'literally' correctly, you might be the only person on the planet who knows the entirety of Qt by heart.
Hahaha. Touché. No, I, like most others, google everything I need when I need it. I guess the correct word here should have been "figuratively".
|
On November 29 2017 19:51 emperorchampion wrote: Ah, my bad, I thought Travis was talking about 100k x 100k. So yeah, as Acrofales said, 1 million elements is nothing. edit: runtime is 0.021 seconds on my system.
My bad guys, it was late. mahrgell's intuition was right: I meant that it was n*n "elements".
So based on what's been said, I'm guessing that a million x million matrix is probably out of the realm of all but a few computers out there, unless you do something like use linked lists to represent your matrix?
|
On November 29 2017 21:27 travis wrote: So based on what's been said, I'm guessing that a million x million matrix is probably out of the realm of all but a few computers out there, unless you do something like use linked lists to represent your matrix?
Anecdotally, I believe there are groups at my school that run ~100 million DOF (100 mil x 100 mil) simulations modeling blood flow around the heart or something at the Swiss computing center (third fastest according to top500). But I'm not really sure of the details. I believe you're getting into supercomputing territory at around 1 million.
edit: unless you have 8000 GB of RAM lying around...
edit2: I suppose those simulations are sparse, so I'm not sure about a dense matrix. Pretty sure it would be memory-limited though; the actual execution time would probably not be excessively long.
|
On November 29 2017 21:27 travis wrote: So based on what's been said, I'm guessing that a million x million matrix is probably out of the realm of all but a few computers out there, unless you do something like use linked lists to represent your matrix?
In most applications where this would be needed, there are mathematical methods to circumvent the problem. Usually this entails creating more (and sometimes bigger) sparse matrices.
Like in the stuff I'm currently working on: if you just read it directly, there are some multiplications M*v, with M being a dense multimillion x multimillion matrix. But you can represent M as a product of about 10 matrices which are all sparse. So instead of calculating M, you take v and multiply it through those 10 matrices from the right, step by step, every time you want to calculate M*v. That gives you 10 matrix-vector multiplications, with all matrices sparse.
The whole issue becomes a bit more difficult because there are some inverted matrices in between, which again can't be inverted by traditional means in any reasonable time, and the entire process is spread out over a cluster. But in the end everything is done in a way such that you absolutely never have to handle a large dense matrix in one place.
And there is a huge field of mathematics/numerics that cares only about finding such solutions for all kinds of tasks.
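A minimal sketch of that right-to-left evaluation, with three made-up sparse factors standing in for the ~10 real ones (the size and densities below are arbitrary assumptions):
[code]
% Apply M = S1*S2*S3 to a vector without ever forming M explicitly
n = 1e6;
S1 = speye(n);                 % hypothetical sparse factors
S2 = sprand(n, n, 1e-6);
S3 = speye(n);
v = rand(n, 1);
y = S1 * (S2 * (S3 * v));      % three cheap sparse matvecs, right to left
[/code]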
|
On November 29 2017 21:48 mahrgell wrote: In most applications where this would be needed, there are mathematical methods to circumvent the problem. Usually this entails creating more (and sometimes bigger) sparse matrices. [...] In the end everything is done in a way such that you absolutely never have to handle a large dense matrix in one place.
Do you have a nice paper on this, mahrgell?
Also, if I understand correctly, naively you would have to do this process n (n = some millions) times if you wanted to square M?
|
One last thing: if you use single precision, 100k is feasible on a desktop in a naive manner (the memory requirement is around 40 GB in Matlab).
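A sketch of that variant; rand can allocate singles directly, at 4 bytes per entry:
[code]
% Single precision halves the footprint relative to doubles
n = 1e5;
n^2 * 4 / 2^30          % ~37 GiB for n = 100k in single precision
A = rand(n, 'single');
tic; A = A * A; toc
[/code]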
|
On November 29 2017 21:55 emperorchampion wrote: Do you have a nice paper on this, mahrgell? Also, if I understand correctly, naively you would have to do this process n (n = some millions) times if you wanted to square M?
The process I described shouldn't be used to square matrices.
But something very similar to it applies: if you had something like r = M*M*v, you would instead calculate v1 = M*v, then r = M*v1. This massively reduces the number of required operations, even if you do it a fair number of times in your code. But even that would probably blow up in your face, so in a more serious approach you would try to completely disassemble M and somehow split it up into bits that are easier to handle.
The point is that in most cases you really aren't interested in the result of M*M itself, or whatever massive operation you are doing; you are just using it as part of other calculations. So the task is to find ways to do those calculations such that you never have to compute the massive intermediate results.
And often those paths are longer, more difficult and require significant work. But in the end they are still faster.
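A minimal sketch of that reordering, with a hypothetical sparse M (size and density are arbitrary):
[code]
n = 1e5;
M = sprand(n, n, 1e-5);
v = rand(n, 1);
% naive: forms M*M first, which is far denser and far more work
% r = (M * M) * v;
% reordered: two cheap sparse matrix-vector products instead
v1 = M * v;
r  = M * v1;
[/code]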
About the paper: I don't have any free links, but what I'm working on is a cluster implementation of FETI-DP, for which you can find some free papers. Those papers are all targeted at mathematicians (who can code), though.
|
On November 29 2017 21:27 travis wrote: So based on what's been said, I'm guessing that a million x million matrix is probably out of the realm of all but a few computers out there, unless you do something like use linked lists to represent your matrix?
I was about to say "100k isn't that bad". But 100k x 100k would indeed take something like half an hour, if you even have enough memory to hold it all (I'm unsure whether Matlab lets you do it without enough memory; it might have some kind of semi-smart solution for cases like these).
1m x 1m, on the other hand, is indeed pretty much impossible on normal systems. But what exactly are you attempting to do here? Is it possible to compress these numbers in any way?
edit: I don't know what university you're at, but some of them do have supercomputers which can be used for good reasons. Their runtime is somewhat expensive though, so you'd need a really good reason, and not just "I'm kinda curious".
|
Do you guys fail logic tests and/or programming tests that ask you to write code you're unlikely to use at work or in real life? For example, I was told to write code for an MxM matrix that is artificially split into layers. Here's an example:
1 1 1 1
1 2 2 1
1 2 2 1
1 1 1 1
The outer layer is filled with ones; the deeper you dig, the higher the number, which is 2 in this case. The matrix can be anywhere from 1x1 up to 9x9 or 100x100, I can't remember exactly. I had to take the matrix as input and validate that each layer holds its expected number.
Also, 1x1 and 2x2 matrices only have an outer layer.
The overall test was supposed to take 1 hour and 30 minutes. It was split into 3 categories: logic questions, SQL questions (8.6/9.0 here) and 2 programming questions. I just think this matrix task alone is so time-consuming that it will eat up your remaining time anyway. Or is it just me being silly?
Personally, it was very easy for me to understand what the matrix question was about. Programming isn't my weakness overall, but coming up with an algorithm/helper functions to check the inner layers proved to be the more difficult task. Do you have any suggestions on what to read/do so I can become better at tricky stuff like that?
Edit: Fixed matrix because example was wrong.
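One observation that collapses the helper functions: an element's layer number is just its distance to the nearest border. A minimal Matlab sketch under that assumption (the function and variable names are made up):
[code]
function ok = validate_layers(A)
% Check that every entry of the square matrix A equals its layer
% number, where layer 1 is the outer border, layer 2 the next, etc.
n = size(A, 1);
ok = true;
for i = 1:n
    for j = 1:n
        layer = min([i, j, n - i + 1, n - j + 1]);  % distance to nearest border
        if A(i, j) ~= layer
            ok = false;
            return
        end
    end
end
end
[/code]
This handles the 1x1 and 2x2 cases for free, since every entry there is on the border and gets layer 1.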
|
On November 29 2017 11:43 Manit0u wrote:
On November 29 2017 04:33 sc-darkness wrote:
When you see syntax like that, calling C disgusting is a mild insult. You can write much more readable code in C++ (hint: std::function). Luckily, I never had to have an array of function pointers.

I'd take C over C++ any day. C++ is like the ugliest widespread language out there.
You wanna fight me IRL or what?
|