|
Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
|
On October 29 2016 06:47 centirius wrote:
C# solution (so probably works in Java with minimal changes I suppose). Worst case is O(2^n), since each element branches into one of the two sums.

[code]
bool HasTwoSubArraysWithSameTotal(int[] values)
{
    return HasTwoSubArraysWithSameTotal(values, 0, 0, 0);
}

bool HasTwoSubArraysWithSameTotal(int[] values, int sumA, int sumB, int index)
{
    return values.Length == index
        ? sumA == sumB
        : HasTwoSubArraysWithSameTotal(values, sumA + values[index], sumB, index + 1)
        || HasTwoSubArraysWithSameTotal(values, sumA, sumB + values[index], index + 1);
}
[/code]

Good catch that this can be done without storing the actual arrays; I missed that.
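Since the post mentions it ports to Java with minimal changes, here is a sketch of what that port might look like. The class and method names are my own choice, not from the original; only the recursion structure is taken from the C# version above.

```java
// Hypothetical Java port of the C# recursion above. Class and method names
// are mine; the logic mirrors the original post.
public class EqualSplit {
    // True if the elements can be divided into two groups with equal sums.
    // Every element lands in one of the two groups, so this explores up to
    // 2^n assignments in the worst case.
    public static boolean canSplitEqually(int[] values) {
        return canSplitEqually(values, 0, 0, 0);
    }

    private static boolean canSplitEqually(int[] values, long sumA, long sumB, int index) {
        if (index == values.length) {
            return sumA == sumB;
        }
        // Either put values[index] into group A, or into group B.
        return canSplitEqually(values, sumA + values[index], sumB, index + 1)
            || canSplitEqually(values, sumA, sumB + values[index], index + 1);
    }

    public static void main(String[] args) {
        System.out.println(canSplitEqually(new int[]{1, 5, 11, 5}));  // {1, 5, 5} vs {11}
        System.out.println(canSplitEqually(new int[]{1, 2, 3, 5}));   // total 11 is odd
    }
}
```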
|
Some review questions, testing my knowledge of big-O notation
20. True/False
a. 7n^2 + 10n + 100 is O(1): False
b. 7n^2 + 10n + 100 is O(n): False
c. 7n^2 + 10n + 100 is O(n^2): True
d. 7n^2 + 10n + 100 is O(n^3): True
e. 7n^2 + 10n + 100 is O(2^n): True

21. True/False
a. 7n^2 + 10n + 100 is Θ(1): False
b. 7n^2 + 10n + 100 is Θ(n): False
c. 7n^2 + 10n + 100 is Θ(n^2): True
d. 7n^2 + 10n + 100 is Θ(n^3): False
e. 7n^2 + 10n + 100 is Θ(2^n): False
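One way to sanity-check an answer like 20(c) is to exhibit the witness constants from the definition: f(n) is O(g(n)) if there exist constants c and n0 with f(n) <= c·g(n) for all n >= n0. Here c = 117 and n0 = 1 work, because each term is bounded by its coefficient times n^2 once n >= 1. A quick numeric check (class and method names are my own, just for illustration):

```java
// Sanity check for 20(c): 7n^2 + 10n + 100 <= 117 * n^2 for all n >= 1.
// The witness c = 117 comes from 7n^2 <= 7n^2, 10n <= 10n^2, 100 <= 100n^2.
public class BigOCheck {
    public static boolean dominated(long n) {
        return 7 * n * n + 10 * n + 100 <= 117 * n * n;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 1_000_000; n++) {
            if (!dominated(n)) {
                throw new AssertionError("bound fails at n = " + n);
            }
        }
        System.out.println("7n^2 + 10n + 100 <= 117 n^2 holds for all n in [1, 10^6]");
    }
}
```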
Is my understanding correct?
and then little o notation .. if my understanding of the above is correct could someone explain little-o to me?
|
Yes you're correct. You already got that O is upper bound, and theta is both upper and lower bound. o is just lower bound.
So n^2 + n + 5 is
O(n^2), O(n^3), O(n^5), etc...
o(n^2), o(n), o(nlog(n)), o(1), etc...
Same idea, just on the other side.
In practice in industry all that gets talked about is theta, but confusingly they will call it O when they mean theta. But you can worry about that later.
|
Or for your abcde question there,
O is FFTTT Theta is FFTFF o is TTTFF
|
Okay cool 
Followup question about big o, this time actually looking at a couple situations and determining the complexity
a.
[code]
for (int i = 0; i < n; i++) {
    for (int j = n; j >= i; j++) { … }
}
[/code]
outside goes through n times, inside ends up going through..... just throwing out an observation... 2^(n-1) ? is this O(2^(n-1)) ?
b.
[code]
for (int i = 10000; i < 50000; i++) {
    for (int j = n; j >= 1; j /= 2) { … }
}
[/code]
outside goes through constant times inside goes through... uh.. it looks like it keeps dividing by 2. So that's log(n) ? This one is a lot weirder to me Is the answer O(log(n)) ?
|
On October 31 2016 03:48 travis wrote: a. for (int i = 0; i < n; i++) { for (int j = n; j >= i; j++) { … } }
outside goes through n times, inside ends up going through..... just throwing out an observation... 2^(n-1) ? is this O(2^(n-1)) ? This one is not only tricky, its immensely stupid. Please never ever code like this. Everybody will believe you made a mistake when you write code like that. The example doesnt even make sense with the big-O-notation. The number of iterations is not really limited by the input but by the size of integers on your execution enviroment. In java that is 32bit integers and thus (2^31)-1 positive values. If we go with the most rigorous interpretation of the big-O-notation the loop over the variable j would be constant and thus O(1). If we use that interpretation though the loop over the variable i would also be O(1) because (2^31)-1 is also an upper bound for n. Since n must be less than 2^31 => O(n) == O(2^31) == O(1) but you can apply this logic to every algorithm in java. This makes the entire exercise pointless. Whoever came up with that question is either stupid or expects you to realize on your own that the exercise is stupid.
|
^ lol I thought I misread it but it is as stupid as it seems... maybe travis meant j--, in which case it'd be O(n^2)
|
Well, I think the only point is to test your ability to figure out the complexity. I didn't write it, the professor did.
I get what you are saying, mush, but I feel like that objection is a little beside the point.
|
On October 31 2016 03:48 travis wrote: b. for (int i = 10000; i < 50000; i++) { for (int j = n; j >= 1; j /= 2) { … } }
outside goes through constant times inside goes through... uh.. it looks like it keeps dividing by 2. So that's log(n) ? This one is a lot weirder to me Is the answer O(log(n)) ? The outer loop does nothing to change the complexity, it just applies a constant factor of 40000 which is irrelevant for big O.
For the inner loop, imagine it was j /= 10: in this case it would run as many times as n has digits in the decimal system - each division cuts off the last digit. The number of digits is log_10(n) (logarithm with base 10). So this would be O(log_10(n)). In the actual example the same reasoning works if you think of n as a binary number, so it runs in O(log_2(n)).
You write log(n); whichever base you meant (10, 2, or e), O(log(n)) is right anyway, because log_b(n) = C * log_a(n), where C = log_b(a) is a constant.
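The halving count can also be verified directly. A small sketch (class and method names are mine): count how many times the body of the inner loop from (b) runs and compare it against the number of binary digits of n, which is exactly floor(log2(n)) + 1.

```java
// Counting the inner loop of (b) directly: how many times does the body run
// before j /= 2 drops j below 1? The count equals the number of binary digits
// of n, i.e. floor(log2(n)) + 1, so the loop is Theta(log n).
public class HalvingCount {
    public static int halvingSteps(int n) {
        int steps = 0;
        for (int j = n; j >= 1; j /= 2) {
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 2, 16, 1000, 1 << 20}) {
            // 32 - numberOfLeadingZeros(n) is floor(log2 n) + 1, computed exactly.
            int bits = 32 - Integer.numberOfLeadingZeros(n);
            System.out.println("n = " + n + ": " + halvingSteps(n)
                + " steps, floor(log2 n) + 1 = " + bits);
        }
    }
}
```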
For question a, please double-check that you didn't make a mistake when copying it. If that is the exact version, we have to assume that the inner loop is either constant (Integer.MAX_VALUE + 1 overflows into a negative value) or infinite if you don't restrict the range. So the whole thing would either be O(n) or infinite. It really is stupid though. And it certainly is not anything like O(2^n); it's either O(1), O(n), O(n^2), or infinite. For a function f(n) you usually get 2^n behavior when you recursively make 2 calls to f(n - 1), n levels of recursion deep, not when you nest two loops like that.
|
oh i see, i did misread it. but yeah that's what the professor wrote, i copy pasted it.
I see what you guys are saying, inner loop goes forever
sorry about that mush
|
On October 31 2016 05:17 travis wrote: oh i see, i did misread it. but yeah that's what the professor wrote, i copy pasted it.
I see what you guys are saying, inner loop goes forever
sorry about that mush

It's worse than "inner loop goes forever". In every actual programming language the loop will not go forever: the range of integer variables is limited, and so it will end fairly quickly. The problem is that big-O notation doesn't work on real-life code as executed. It's a purely theoretical construct that can only be used in a thought experiment. Once you try to use it on real-world code, you will get O(1) for most pieces of code like the ones you posted.
If we instead assume that your code is pseudo-code and has nothing to do with real machines, then the answer is that it runs forever. In that case we cannot give an upper bound, and there is nothing in big-O notation to express that.
The example simply doesn't make any sense no matter how you look at it.
|
I'd just ask the professor to clarify that question, I can't imagine that it's not j-- unless the professor wanted you to think super outside the box or something like that.
|
On October 31 2016 03:36 phar wrote: Yes you're correct. You already got that O is upper bound, and theta is both upper and lower bound. o is just lower bound.
So n^2 + n + 5 is
O(n^2), O(n^3), O(n^5), etc...
o(n^2), o(n), o(nlog(n)), o(1), etc...
Same idea, just on the other side.
In practice in industry all that gets talked about is theta, but confusingly they will call it O when they mean theta. But you can worry about that later.
This is incorrect. Omega is the lower-bound analysis, not little-o. Little-o is like big O, but strict: f(n) is o(g(n)) when f grows strictly slower than g. If an algorithm is O(f(n)) but not Theta(f(n)) (that is, the bound is not tight), then it is o(f(n)). It's like the difference between <= and <.
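For reference, the standard definitions side by side (written here in the usual textbook form; the limit form of little-o assumes g(n) > 0 eventually):

```latex
f(n) \in O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c\, g(n) \ \text{for all } n \ge n_0
f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\, g(n) \ \text{for all } n \ge n_0
f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \ \text{and} \ f(n) \in \Omega(g(n))
f(n) \in o(g(n))      \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0
```

So for 7n^2 + 10n + 100: it is O(n^2) and Ω(n^2), hence Θ(n^2), but it is o(n^3) and not o(n^2).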
|
okay, professor got back to me. it was supposed to be j--
so outside goes through n times so inside goes through... I have no idea how do I figure this out
|
:p
when you have problems like this, i think it helps to manually count.
so for the inside loop, can you manually count the number of iterations?
at i = 0: j = n, j >= i, j-- will loop... n times
at i = 1: j = n, j >= i, j-- will loop... ? times
at i = ...
at i = n-1: j = n, j >= i, j-- will loop... ? times
that should get you started
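Once the per-row counts are written down, the total can be checked mechanically. A sketch (class and method names are mine): count every iteration of the corrected (j--) nested loops and compare against the closed form. The inner loop runs n - i + 1 times for each i, and summing that over i = 0..n-1 gives (n+1)(n+2)/2 - 1, which is Theta(n^2).

```java
// Counting the corrected (j--) version of (a) directly: total iterations of
// the inner loop body, compared against the closed form (n+1)(n+2)/2 - 1.
public class NestedCount {
    public static long totalIterations(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = n; j >= i; j--) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 10, 100, 1000}) {
            long closedForm = (long) (n + 1) * (n + 2) / 2 - 1;
            System.out.println("n = " + n + ": " + totalIterations(n)
                + " iterations, closed form " + closedForm);
        }
    }
}
```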
|
On October 31 2016 06:37 raNazUra wrote:Show nested quote +On October 31 2016 03:36 phar wrote: Yes you're correct. You already got that O is upper bound, and theta is both upper and lower bound. o is just lower bound.
So n^2 + n + 5 is
O(n^2), O(n^3), O(n^5), etc...
o(n^2), o(n), o(nlog(n)), o(1), etc...
Same idea, just on the other side.
In practice in industry all that gets talked about is theta, but confusingly they will call it O when they mean theta. But you can worry about that later. This is incorrect. Omega is the lower-bound analysis, not little-o. Little-o is like big O, but strict: f(n) is o(g(n)) when f grows strictly slower than g. If an algorithm is O(f(n)) but not Theta(f(n)) (that is, the bound is not tight), then it is o(f(n)). It's like the difference between <= and <.
Ah yea, you're right, I had completely forgotten about Ω. In part because, as previously mentioned, industry tends to be a bit... cavalier with definitions.
|
Because:
On October 28 2016 02:44 Nesserev wrote:ftfy, efficiency matters
|
Probably in case the latest one has bugs. Instead of waiting for a hotfix you can downgrade and perhaps things will work out that way.
|
I guessed like that too. It suggests the latest version usually has bugs, since the older one is at the top. They want you to download the older one, or at least expect that most people will. Or all this is wrong and there's a more technically correct answer.
Since bosses are reluctant to even use Java 7 ("not yet mature"), it's a funny thing.
|