|
Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace Facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
On July 17 2015 22:52 BisuDagger wrote: On July 17 2015 22:14 Acrofales wrote: Almost fits: 149.49 * log(x + 1). The problem is that it increases slightly too fast, and 45 isn't at 225. However, looking at your picture, you don't want 45 at 225 anyway, but slightly over 270, in which case the log increases too slow. A curve fitter should be able to fit this, though, even with so few points. I'd chug it through scipy's curve fitter for you, but I don't have time. Here's the general function:
a + b*log(c*x + d). Give it the following points: f(0) = 0, f(3) = 90, f(15) = 180, f(40) = 270.
Edit: if there is no way of scaling the log to fit these points properly, try a root:
a + b*x^(1/c) You are correct, I had to edit my post to show 40 @ 270 degrees. I am going to check out scipy's curve fitter. Never heard of it before now. On July 17 2015 22:16 ZenithM wrote: AngleInDegrees = 10 * (gaugeValue * 50)^0.44 - (something) with something being between 0 and 4, whichever looks better ;D
I'd try a root in any case. What I have trouble wrapping my head around is a formula that tackles the whole range at once. Because between 2-5 the difference is 3, 10-15 the difference is 5, 15 to 25 the difference is 10, but then it descales 25-30 a difference of 5, and then 30-40 a difference of 10 lol. If it scaled where the difference was 1,2,3,5,5,10,10,10 that'd be a little bit easier to work with. Anyway, the code I wrote was easily cleaned up. That was written long form just for me to visually see what was happening. Now it is relatively clean and I see I could do even better, but I already committed it and moved on lol.
else if (needlePosition > 25 && needlePosition <= 30)
{
    float scaleToMeter = ScaleToDegrees(25, 30, rotate25, rotate30, needlePosition);
    this.transform.Rotate(Vector3.right, scaleToMeter);
}
You're right I hadn't looked at the gauge closely enough. You're probably better off hard coding it (like you did, it sounds fine) or picking another gauge picture ;D.
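If you do end up trying the curve-fit route anyway, here's a minimal scipy sketch (just an example under assumptions: it uses the root form a + b*x^(1/c) mentioned above and the four points f(0)=0, f(3)=90, f(15)=180, f(40)=270):

import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit a + b*x^(1/c) to the gauge points with scipy.
def gauge_angle(x, a, b, c):
    return a + b * np.power(x, 1.0 / c)

xs = np.array([0.0, 3.0, 15.0, 40.0])     # gauge values
ys = np.array([0.0, 90.0, 180.0, 270.0])  # needle angle in degrees

# Bounds keep c positive so x = 0 stays well-behaved.
(a, b, c), _ = curve_fit(gauge_angle, xs, ys, p0=[0.0, 50.0, 2.0],
                         bounds=([-np.inf, 0.0, 0.1], [np.inf, np.inf, 10.0]))
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}")
print("f(25) =", gauge_angle(25.0, a, b, c))  # sanity check against the dial

If you'd rather keep the piecewise behaviour exactly, np.interp over the hard-coded (value, degrees) pairs does the same job as the chain of if/else branches.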
|
On July 17 2015 23:10 ZenithM wrote: You're right I hadn't looked at the gauge closely enough. You're probably better off hard coding it (like you did, it sounds fine) or picking another gauge picture ;D.
I'm sure I could ask Boeing to redesign their aircraft for me. Lol.
|
Haha tough luck then. I hope airplane web browsers handle HTML 5 + Javascript. I wouldn't dare to ask my pilot to trust Adobe Flash gauge readings.
|
On July 17 2015 23:24 ZenithM wrote: Haha tough luck then. I hope airplane web browsers handle HTML 5 + Javascript. I wouldn't dare to ask my pilot to trust Adobe Flash gauge readings. This is done in Unity, which has HTML5 support. :D
|
Shouldn't it be analog in the first place?
|
On July 17 2015 23:10 ZenithM wrote: You're right I hadn't looked at the gauge closely enough. You're probably better off hard coding it (like you did, it sounds fine) or picking another gauge picture ;D.
Yup. I also completely missed that. What weird underlying analog process led to that being a good distribution on the dial?! How do you calibrate that correctly? :O
|
Hi TL,
Sorry this isn't "exactly" programming, but I am looking for a "serious" SQL book to study from. What I mean by "serious" is that I know how to write SQL statements, but I want to understand how to write fast and efficient SQL statements and know some deeper level tricks. Basically, I am interested in becoming an expert user, but not a DBA - if that makes any sense.
As far as SQL vendor specificity is concerned, lack of specificity is preferred, but PostgreSQL or Oracle specific would be fine by me as well.
Thanks in advance.
|
On July 18 2015 08:39 Sufficiency wrote: Hi TL,
Sorry this isn't "exactly" programming, but I am looking for a "serious" SQL book to study from. What I mean by "serious" is that I know how to write SQL statements, but I want to understand how to write fast and efficient SQL statements and know some deeper level tricks. Basically, I am interested in becoming an expert user, but not a DBA - if that makes any sense.
As far as SQL vendor specificity is concerned, lack of specificity is preferred, but PostgreSQL or Oracle specific would be fine by me as well.
Thanks in advance. I don't know any such books, but I think your best bet would be learning how databases work?
|
On July 18 2015 08:52 sabas123 wrote: On July 18 2015 08:39 Sufficiency wrote: Hi TL,
Sorry this isn't "exactly" programming, but I am looking for a "serious" SQL book to study from. What I mean by "serious" is that I know how to write SQL statements, but I want to understand how to write fast and efficient SQL statements and know some deeper level tricks. Basically, I am interested in becoming an expert user, but not a DBA - if that makes any sense.
As far as SQL vendor specificity is concerned, lack of specificity is preferred, but PostgreSQL or Oracle specific would be fine by me as well.
Thanks in advance. I don't know any of such books, but I think your best bet would be learning how databases work?
I guess that would make sense - to learn the theory behind them. I am sure there are tons of books on this, but any recommendations?
|
On July 18 2015 08:39 Sufficiency wrote: Hi TL,
Sorry this isn't "exactly" programming, but I am looking for a "serious" SQL book to study from. What I mean by "serious" is that I know how to write SQL statements, but I want to understand how to write fast and efficient SQL statements and know some deeper level tricks. Basically, I am interested in becoming an expert user, but not a DBA - if that makes any sense.
As far as SQL vendor specificity is concerned, lack of specificity is preferred, but PostgreSQL or Oracle specific would be fine by me as well.
Thanks in advance. I'm not a SQL expert by any means, but in my databases course we were taught that typically SQL implementations have a built-in optimizer that usually does most of that for you.
|
On July 18 2015 10:31 Chocolate wrote: On July 18 2015 08:39 Sufficiency wrote: Hi TL,
Sorry this isn't "exactly" programming, but I am looking for a "serious" SQL book to study from. What I mean by "serious" is that I know how to write SQL statements, but I want to understand how to write fast and efficient SQL statements and know some deeper level tricks. Basically, I am interested in becoming an expert user, but not a DBA - if that makes any sense.
As far as SQL vendor specificity is concerned, lack of specificity is preferred, but PostgreSQL or Oracle specific would be fine by me as well.
Thanks in advance. I'm not a SQL expert by any means but in my databases course we were taught that typically SQL implementations have a built in optimizer that usually does most of that for you.
Is it really that simple though?
For example, are the following three queries the same in terms of performance?
1.
SELECT a.row_a1, b.row_b1 FROM table_a a INNER JOIN table_b b ON a.id = b.id AND b.row_b2 = 1
2.
SELECT a.row_a1, b.row_b1 FROM table_a a INNER JOIN (SELECT * FROM table_b WHERE row_b2 = 1) b ON a.id = b.id
3.
SELECT a.row_a1, b.row_b1 FROM table_a a INNER JOIN table_b b ON a.id = b.id AND b.row_b2 = 1 WHERE b.row_b2 = 1
My impression is that 2 is better if table_b is big. I don't really know if there is a difference between 1 and 3 performance-wise.
|
Look up this thing called relational algebra. Basically my impression is that SQL implementations use relational algebra to reduce / simplify most expressions that are effectively equal, which means they end up looking similar or equivalent to the DB by the time it actually executes the queries.
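As a quick way to see this for yourself, here's a minimal sketch using Python's built-in sqlite3 module (the table and column names are just the made-up ones from the example above): it asks the database for its query plan for each of the three variants so you can compare what actually gets executed.

import sqlite3

# Minimal sketch: compare the plans the engine picks for the three
# "equivalent-looking" queries from the post above (made-up schema).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_a (id INTEGER PRIMARY KEY, row_a1 TEXT);
    CREATE TABLE table_b (id INTEGER PRIMARY KEY, row_b1 TEXT, row_b2 INTEGER);
    CREATE INDEX idx_b_row_b2 ON table_b (row_b2);
""")

queries = {
    "1 (filter in ON)": """SELECT a.row_a1, b.row_b1 FROM table_a a
                           INNER JOIN table_b b ON a.id = b.id AND b.row_b2 = 1""",
    "2 (subquery)":     """SELECT a.row_a1, b.row_b1 FROM table_a a
                           INNER JOIN (SELECT * FROM table_b WHERE row_b2 = 1) b ON a.id = b.id""",
    "3 (ON + WHERE)":   """SELECT a.row_a1, b.row_b1 FROM table_a a
                           INNER JOIN table_b b ON a.id = b.id AND b.row_b2 = 1
                           WHERE b.row_b2 = 1""",
}

for label, sql in queries.items():
    print(label)
    for row in con.execute("EXPLAIN QUERY PLAN " + sql):
        print("   ", row)  # (id, parent, notused, detail) tuples

Whether the 1st and 3rd collapse to the same plan (and whether the subquery in the 2nd gets flattened) is entirely up to the engine, which is kind of the point.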
|
what do you guys do when you start a new job and want to ramp up quickly?
I've been taking notes each day in a gdoc and been trying to make a daily recap of the top 3 things I did. Been pretty useful so far but it still feels so shitty to go from being a TL on a project at a different company to knowing absolutely nothing (have to learn entirely new proprietary tech mostly) in a position where everyone tells me it'll take around 6 months to do anything useful.
|
read docs (if there are any), read code
Get someone to walk you through standing up a local system end to end. Fix some small bug(s).
Even at a place running an entirely internal stack, you should be able to make some impact in << 6 months...
|
yea, I don't mean making zero impact for 6 months, more that I won't really be pulling my weight for 6 months. I've heard varying estimates from my colleagues and some friends who work/have worked here, and the consensus seems to be 3-6 months of ramp-up time to at least not feel like you're just treading water the whole time.
At my previous work it was about a month for me but this is definitely a different beast. Everyone here is way smarter than I am lol
also I'm in a slightly different role, not really SWE. My team is like half SWE half SRE across three different continents...it's actually pretty surreal
|
On July 18 2015 14:01 wherebugsgo wrote: what do you guys do when you start a new job and want to ramp up quickly?
Ask devs if you can look over their shoulder / do pair programming on routine tasks in the code base / system, and let them explain most of their steps - how and why they do it, etc. Ask questions while they do it.
Do code review on small features / bug fixes and understand why that requirement needed to touch all the places it did. If there are overall architecture / design meetings / discussions, listen in from time to time.
|
On July 18 2015 12:33 Sufficiency wrote: On July 18 2015 10:31 Chocolate wrote: On July 18 2015 08:39 Sufficiency wrote: Hi TL,
Sorry this isn't "exactly" programming, but I am looking for a "serious" SQL book to study from. What I mean by "serious" is that I know how to write SQL statements, but I want to understand how to write fast and efficient SQL statements and know some deeper level tricks. Basically, I am interested in becoming an expert user, but not a DBA - if that makes any sense.
As far as SQL vendor specificity is concerned, lack of specificity is preferred, but PostgreSQL or Oracle specific would be fine by me as well.
Thanks in advance. I'm not a SQL expert by any means but in my databases course we were taught that typically SQL implementations have a built in optimizer that usually does most of that for you. Is it really that simple though? For example, are the following three queries the same in terms of performance? 1. SELECT a.row_a1, b.row_b1 FROM table_a a INNER JOIN table_b b ON a.id = b.id and b.row_b2 = 1
2. SELECT a.row_a1, b.row_b1 FROM table_a a INNER JOIN (SELECT * FROM table_b where row_b2 = 1) b ON a.id = b.id
3. SELECT a.row_a1, b.row_b1 FROM table_a a INNER JOIN table_b b ON a.id = b.id and b.row_b2 = 1 WHERE b.row_b2 = 1
My impression is that 2 is better if table_b is big. I don't really know if there is a difference between 1 and 3 performance wise.
I can't imagine a case where the query optimizer doesn't turn the 1st and 3rd into the same thing. I get the same query plan for both over a few different combinations I tried.
Without knowing more of your schema it's really impossible to say which would be faster though (1st/3rd or 2nd). It's a function of the indices on your table, the size of the tables, the last time you might have run gather stats or analyze on your table -- all these factors go into which query plan the optimizer decides to execute.
For example, if I have an index on table b over columns b.id, b.row_b2, then for me the 1st/3rd query blows the 2nd query out of the water in my testing, because in that case MySQL is able to use that index, but with the subquery it's not smart enough. But that's in my case. I really can't say for sure what it might decide to do as the table grows much larger. I'm not a DBA at work, but I know they've been running into issues with query plans changing as table size grows (though that's Oracle).
The MySQL ref pages have probably been one of the most useful things for me. If you really want to become an expert, you should read the reference pages for your specific database, since, as you can see, queries that reduce to the same result set can be executed differently, and getting the most out of the DB really comes down to the one you're using.
Here's a transcript of me talking to my database trying a few queries, but in general this is useless to anyone but me, since your database will have different indices, table sizes, and usage patterns.
Sorry I don't have any exact advice, just thought I'd throw in my 2 cents.
edit: some practical links: http://patshaughnessy.net/2014/11/11/discovering-the-computer-science-behind-postgres-indexes http://use-the-index-luke.com/
mysql> SELECT COUNT(*) FROM replays_buid;
ERROR 1146 (42S02): Table 'app.replays_buid' doesn't exist
mysql> SELECT COUNT(*) FROM replays_buildevent;
+----------+
| COUNT(*) |
+----------+
|  7193800 |
+----------+
1 row in set (5.09 sec)

mysql> SELECT COUNT(*) FROM replays_build;
+----------+
| COUNT(*) |
+----------+
|    26507 |
+----------+
1 row in set (0.02 sec)

mysql> SELECT COUNT(b.id)
    -> FROM replays_build b
    -> INNER JOIN (SELECT * FROM replays_buildevent be WHERE be.name = "SCV") be ON b.id = be.build_id;
+-------------+
| COUNT(b.id) |
+-------------+
|      457551 |
+-------------+
1 row in set (28.22 sec)

mysql> SELECT COUNT(b.id)
    -> FROM replays_build b
    -> JOIN replays_buildevent be ON b.id = be.build_id
    -> WHERE be.name = "SCV";
+-------------+
| COUNT(b.id) |
+-------------+
|      457551 |
+-------------+
1 row in set (0.99 sec)

mysql> SELECT COUNT(*)
    -> FROM replays_build b
    -> INNER JOIN (SELECT * FROM replays_buildevent be WHERE be.name = "SCV") be ON b.match_id = be.time;
+----------+
| COUNT(*) |
+----------+
|   910866 |
+----------+
1 row in set (38.02 sec)

mysql> SELECT COUNT(b.id)
    -> FROM replays_build b
    -> JOIN replays_buildevent be ON b.match_id = be.time
    -> WHERE be.name = "SCV";
+-------------+
| COUNT(b.id) |
+-------------+
|      910866 |
+-------------+
1 row in set (18.82 sec)

mysql> EXPLAIN EXTENDED
    -> SELECT COUNT(b.id)
    -> FROM replays_build b
    -> JOIN replays_buildevent be ON b.match_id = be.time
    -> WHERE be.name = "SCV";
+----+-------------+-------+------+------------------------+------------------------+---------+-------------+---------+----------+-------------+
| id | select_type | table | type | possible_keys          | key                    | key_len | ref         | rows    | filtered | Extra       |
+----+-------------+-------+------+------------------------+------------------------+---------+-------------+---------+----------+-------------+
|  1 | SIMPLE      | be    | ALL  | NULL                   | NULL                   | NULL    | NULL        | 6227388 |   100.00 | Using where |
|  1 | SIMPLE      | b     | ref  | replays_build_ff9c4e4a | replays_build_ff9c4e4a | 4       | app.be.time |       1 |   100.00 | Using index |
+----+-------------+-------+------+------------------------+------------------------+---------+-------------+---------+----------+-------------+
2 rows in set, 1 warning (0.00 sec)

mysql> EXPLAIN EXTENDED
    -> SELECT COUNT(*)
    -> FROM replays_build b
    -> INNER JOIN (SELECT * FROM replays_buildevent be WHERE be.name = "SCV") be ON b.match_id = be.time;
+----+-------------+------------+-------+------------------------+------------------------+---------+----------------+---------+----------+-------------+
| id | select_type | table      | type  | possible_keys          | key                    | key_len | ref            | rows    | filtered | Extra       |
+----+-------------+------------+-------+------------------------+------------------------+---------+----------------+---------+----------+-------------+
|  1 | PRIMARY     | b          | index | replays_build_ff9c4e4a | replays_build_ff9c4e4a | 4       | NULL           |   24545 |   100.00 | Using index |
|  1 | PRIMARY     | <derived2> | ref   | <auto_key0>            | <auto_key0>            | 4       | app.b.match_id |     253 |   100.00 | NULL        |
|  2 | DERIVED     | be         | ALL   | NULL                   | NULL                   | NULL    | NULL           | 6227388 |   100.00 | Using where |
+----+-------------+------------+-------+------------------------+------------------------+---------+----------------+---------+----------+-------------+
3 rows in set, 1 warning (0.00 sec)

mysql> EXPLAIN EXTENDED
    -> SELECT COUNT(b.id)
    -> FROM replays_build b
    -> JOIN replays_buildevent be ON b.id = be.build_id
    -> WHERE be.name = "SCV";
+----+-------------+-------+-------+---------------+------------------------+---------+----------------+-------+----------+--------------------------+
| id | select_type | table | type  | possible_keys | key                    | key_len | ref            | rows  | filtered | Extra                    |
+----+-------------+-------+-------+---------------+------------------------+---------+----------------+-------+----------+--------------------------+
|  1 | SIMPLE      | b     | index | PRIMARY       | replays_build_ff9c4e4a | 4       | NULL           | 24545 |   100.00 | Using index              |
|  1 | SIMPLE      | be    | ref   | build_name    | build_name             | 771     | app.b.id,const |     5 |   100.00 | Using where; Using index |
+----+-------------+-------+-------+---------------+------------------------+---------+----------------+-------+----------+--------------------------+
2 rows in set, 1 warning (0.00 sec)

mysql> SELECT COUNT(b.id)
    -> FROM replays_build b
    -> INNER JOIN (SELECT * FROM replays_buildevent be WHERE be.name = "SCV") be ON b.id = be.build_id;
+-------------+
| COUNT(b.id) |
+-------------+
|      457551 |
+-------------+
1 row in set (27.81 sec)

mysql> EXPLAIN EXTENDED
    -> SELECT COUNT(b.id)
    -> FROM replays_build b
    -> INNER JOIN (SELECT * FROM replays_buildevent be WHERE be.name = "SCV") be ON b.id = be.build_id;
+----+-------------+------------+-------+---------------+------------------------+---------+----------+---------+----------+-------------+
| id | select_type | table      | type  | possible_keys | key                    | key_len | ref      | rows    | filtered | Extra       |
+----+-------------+------------+-------+---------------+------------------------+---------+----------+---------+----------+-------------+
|  1 | PRIMARY     | b          | index | PRIMARY       | replays_build_ff9c4e4a | 4       | NULL     |   24545 |   100.00 | Using index |
|  1 | PRIMARY     | <derived2> | ref   | <auto_key0>   | <auto_key0>            | 4       | app.b.id |     253 |   100.00 | NULL        |
|  2 | DERIVED     | be         | ALL   | NULL          | NULL                   | NULL    | NULL     | 6227388 |   100.00 | Using where |
+----+-------------+------------+-------+---------------+------------------------+---------+----------+---------+----------+-------------+
3 rows in set, 1 warning (0.00 sec)
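If you want to poke at the index effect without my dataset, here's a tiny self-contained sketch (Python's built-in sqlite3, with made-up tables only loosely modeled on the replays schema above, so the names are just placeholders): it runs the same join before and after adding a composite index and prints what the planner says.

import sqlite3

# Minimal sketch: same query, and the plan typically changes once a suitable index exists.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE build (id INTEGER PRIMARY KEY, match_id INTEGER);
    CREATE TABLE buildevent (id INTEGER PRIMARY KEY, build_id INTEGER, name TEXT);
""")
con.executemany("INSERT INTO build (id, match_id) VALUES (?, ?)", [(1, 10), (2, 11)])
con.executemany("INSERT INTO buildevent (build_id, name) VALUES (?, ?)",
                [(1, "SCV"), (1, "Marine"), (2, "SCV")])

query = """
    SELECT COUNT(b.id)
    FROM build b
    JOIN buildevent be ON b.id = be.build_id
    WHERE be.name = 'SCV'
"""

def show_plan(title):
    print(title)
    for row in con.execute("EXPLAIN QUERY PLAN " + query):
        print("   ", row[-1])  # last column is the human-readable detail

show_plan("without index:")
con.execute("CREATE INDEX idx_be_build_name ON buildevent (build_id, name)")
show_plan("with (build_id, name) index:")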
|
On July 18 2015 12:33 Sufficiency wrote: On July 18 2015 10:31 Chocolate wrote: On July 18 2015 08:39 Sufficiency wrote: Hi TL,
Sorry this isn't "exactly" programming, but I am looking for a "serious" SQL book to study from. What I mean by "serious" is that I know how to write SQL statements, but I want to understand how to write fast and efficient SQL statements and know some deeper level tricks. Basically, I am interested in becoming an expert user, but not a DBA - if that makes any sense.
As far as SQL vendor specificity is concerned, lack of specificity is preferred, but PostgreSQL or Oracle specific would be fine by me as well.
Thanks in advance. I'm not a SQL expert by any means but in my databases course we were taught that typically SQL implementations have a built in optimizer that usually does most of that for you. Is it really that simple though? For example, are the following three queries the same in terms of performance? 1. SELECT a.row_a1, b.row_b1 FROM table_a a INNER JOIN table_b b ON a.id = b.id and b.row_b2 = 1
2. SELECT a.row_a1, b.row_b1 FROM table_a a INNER JOIN (SELECT * FROM table_b where row_b2 = 1) b ON a.id = b.id
3. SELECT a.row_a1, b.row_b1 FROM table_a a INNER JOIN table_b b ON a.id = b.id and b.row_b2 = 1 WHERE b.row_b2 = 1
My impression is that 2 is better if table_b is big. I don't really know if there is a difference between 1 and 3 performance wise.
It also depends on what kind of data you want from the DB. You could look up some database abstraction stuff and see how they do things (Doctrine is open source, you could definitely check out their hydration modes). It's important not only to optimize your queries; it's just as important to reduce the number of queries and to get back only the stuff you need, without superfluous things.
Some of the most useful stuff includes COUNT, CONCAT, HAVING and GROUP BY.
Too many times I've seen code execute the query only to do stupid stuff like this:
$query = $queryBuilder
    ->select('o') // that's equal to *
    ->from('myTable')
    ->where('statement')
    ->getQuery();
$result = $query->getResult();
return count($result);
What it should be:
$query = $queryBuilder
    ->select('COUNT(o.id)')
    ->from('myTable')
    ->where('statement')
    ->getQuery();
$result = $query->getSingleScalarResult();
return $result;
CONCAT is likewise useful, for example when you want to display the user's first name + last name. Instead of doing the concatenation later in the code you can simply select "CONCAT(u.first_name, ' ', u.last_name) AS name" and work with that.
You also don't always need objects and should use stuff like getArrayResult() (still speaking about Doctrine since I can't really live without some form of DBAL any more).
Another really important thing is query parameterization, so that you can reuse your queries for different data.
SELECT * FROM Orders WHERE CustomerID =?
INSERT INTO Employees(LastName, FirstName, Title, TitleOfCourtesy, BirthDate, HireDate, Address, City, Region, PostalCode, Country, HomePhone, Extension, Photo, Notes, ReportsTo, PhotoPath, rowguid) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
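To make that concrete outside of Doctrine, here's a minimal sketch with Python's built-in sqlite3 (a hypothetical orders table, so the names are placeholders): the query is parameterized with ? placeholders, and the counting happens in the database instead of in application code.

import sqlite3

# Minimal sketch: parameterized query + letting the DB do the counting.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(1, 9.99), (1, 24.50), (2, 5.00)])

customer_id = 1

# Wasteful: pull every row across the wire just to count them in code.
rows = con.execute("SELECT * FROM orders WHERE customer_id = ?", (customer_id,)).fetchall()
print(len(rows))

# Better: one scalar comes back, and the same statement is reusable for any customer_id.
(count,) = con.execute("SELECT COUNT(id) FROM orders WHERE customer_id = ?",
                       (customer_id,)).fetchone()
print(count)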
As a bonus, I present you with this most interesting study regarding DB performance when storing dates and times (tldr: never use timestamps in your db).
|
On July 18 2015 14:52 wherebugsgo wrote: yea I don't mean make no impact within 6 months but not really pull my weight for 6 months. I've heard varying estimates from my colleagues and some friends who work/have worked here and the consensus seems to be 3-6 months ramp up time to at least not feel like you're just treading water the whole time.
At my previous work it was about a month for me but this is definitely a different beast. Everyone here is way smarter than I am lol
also I'm in a slightly different role, not really SWE. My team is like half SWE half SRE across three different continents...it's actually pretty surreal Send me a PM, if you're working where I think you're working. There are likely resources internally that can help you out.
Also it is normal (nay, required) for SRE teams to be split across continents; here is the logic:
1) SRE on-call is real (hardcore) on-call, with proper SLOs/SLAs for all systems and for answering pages. That means in normal dev land, if a SWE is on call and a page happens and they don't answer for 15 minutes, maybe it's OK. Oftentimes (depending on the level of requirement) an SRE is acking that shit in like a minute. That means you can't even go take a shit or drive home without clearing it with secondary.
2) You want your SRE to be awake and able to actually triage & solve production issues ASAP when answering a page. (For example an on-call dev waking up at 3 am is not going to actually solve any problem for like 30 minutes, because 3am you're groggy).
-> The solution here is time zone rotations. Some SRE teams will be split across 2 timezones (west coast & Europe is common): west coast is on call during daylight hours there, then it swaps to Europe later in the evening. Rinse & repeat. More expansive SRE teams will be split across 3 timezones (usually adding Australia or the like), so everyone only pulls 8-hour shifts.
All that does make it hard to coordinate, because remote teammates can be harder to work with. Requires a lot of proactive communication. Also helps if you get to fly around and meet up with everyone, see what everybody's like in person. You'll realize that for the most part people do have your back, and are interested in helping you succeed - while remotely over email/code reviews/im, people can seem a bit more terse & not as helpful.
|
Damn guys, you're using all those acronyms... Could you at least explain what they mean? Google returns way too many possibilities and I'm genuinely curious here.
Random rant continued: Client presentation due the day after tomorrow, had to pull another Saturday workday until 7pm. The UX designer, half of the devs and all the front-end people are on vacation, and the app looks like shit because every back-end dev was like "functionality is working, I'll just dump all the results wherever to prove it, let the front-end handle it" (I'm part of the back-end too, but I had some experience with front-end so I at least made my stuff a bit more manageable). My heart says it's fine, my brain says otherwise. Especially since (unlike most clients) this one sent a guy who has been developing business apps for a living for years to act as their guy (the very first question he asked was "What's your test coverage?").
I really do feel rather grimdark right now.
|