|
[This isn't a homework thread]
I took AP Stats four years ago, where I learned about regressions, and I took stats in college, although we didn't get too far into regressions. Anyway, I'm trying to brush up not so much on DOING the regression as on interpreting the results and such.
It came up at work: basically I'm looking at data on the major defense companies and the proportion of their total sales that comes from weapons.
For example, here's a little taste of what I'm looking at (just because some people might be interested)
Boeing was ranked #1 on the "top 100 arms producing companies" - with 46% of its total sales in 2007 going towards arms. In 2006 the proportion was 50%.
Anyway, I was looking at this stuff and thought to myself, "hey, it would be cool to see if there is any relation between the profit earned and the proportion of sales going to arms!" Basically, I'm trying to see whether the increasing profits of these companies are related to a falling share of their sales coming from arms.
Example:
Boeing 2006: Arms %: 50%, Profit: $2,215 million
Boeing 2007: Arms %: 46%, Profit: $4,074 million
Is there any relation? Is it a coincidence? I'm really curious. So I figured doing a regression might be a good idea to test this out.
Am I correct in assuming that a regression test would be good in this situation? Also, I'm planning on using about 5-10 years of data for maybe 10-12 different companies... is this something a regression test can do?
I'm just trying to basically remember if this is the right statistical test to use.
|
I'm going to put my neck on the line and say yes, a regression is a legitimate way to investigate this. You can run one regression for each company individually, or you can use a panel data approach that tries to use both the cross-sectional and the time-series information in the regression. I'm not too familiar with panel data regressions myself (never had to actually apply them), so I reckon the wiki links or Google might be a good place to go.
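Just to make the two options concrete, here's a rough sketch in R. Everything below is made-up toy data with placeholder names, not Xeris's actual numbers, and a proper panel data treatment would be more careful than this:

# Toy panel: 10 companies x 8 years of simulated numbers
set.seed(42)
firms <- paste0("firm", 1:10)
years <- 2000:2007
d <- expand.grid(firm = firms, year = years)
d$arms_pct <- runif(nrow(d), 20, 80)
d$profit   <- 1000 + 15 * d$arms_pct + rnorm(nrow(d), sd = 200)

# Option 1: one regression per company
per_firm <- lapply(split(d, d$firm), function(x) lm(profit ~ arms_pct, data = x))
summary(per_firm$firm1)

# Option 2: a crude "panel" regression -- pool all firm-years and
# give each firm its own intercept (fixed effects via dummy variables)
pooled <- lm(profit ~ arms_pct + factor(firm), data = d)
summary(pooled)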
Also, perhaps it might be wise to also regress % profit on the % of sales from arms, to look at the relationship from two angles... I'm sure someone will blow me out of the water on this, but hey.
Edit: Oh, and Happy Birthday. Edit 2: I have notes on panel data lying around somewhere which are decent enough if you want the gist of them.
|
From a rudimentary look at the data, I'd say a t-test would show no relationship between increased earnings and the % of sales from arms. The fact that profit nearly doubled yet the arms share only decreased by 4 percentage points would seem to suggest that the cause is extraneous to the arms variable.
|
On November 13 2009 01:43 b3h47pte wrote: A regression will only tell you if there is a linear relationship between your two variables, in this case the arms % and the profit. But yes, from what I've learned in AP Stat, what you want to do is perfectly fine for this data set as long as it is linear.
Regression techniques can tell you a lot more than existence of linear relationships :/
|
If I'm not mistaken, the regression with the highest r^2 value (that is, closest to 1) is the most accurate representation of the data, correct?
|
On November 13 2009 01:52 Empyrean wrote:
On November 13 2009 01:43 b3h47pte wrote: A regression will only tell you if there is a linear relationship between your two variables, in this case the arms % and the profit. But yes, from what I've learned in AP Stat, what you want to do is perfectly fine for this data set as long as it is linear.
Regression techniques can tell you a lot more than existence of linear relationships :/
Right, I was thinking about linear regression. lawl >>
|
On November 13 2009 01:54 scintilliaSD wrote: If I'm not mistaken, the regression with the highest r^2 value (that is, closest to 1) is the most accurate representation of the data, correct?
No, it's just the one that explains the largest share of the variance.
R^2 by itself pretty much isn't good for anything.
You can artificially inflate any regression model's R^2 just by adding more explanatory variables... even nonsensical ones such as the number of potatoes eaten that week or the number of birds the company owns. Add enough of them and eventually your X matrix is square and full rank, the fit is exact, and you'll get an R^2 of one no matter what kind of trash you added as explanatory variables.
Besides, you could have a model that beautifully explains the data, but not linearly, so you'll get a comparatively low R^2.
You could also have multicollinearity issues underlying the data, which you won't find out about until you check the covariance matrix.
If you're looking to compare different models, at the very least perform partial F-tests or check an information criterion (BIC is the most popular). There are better methods, but the easiest ones that give legitimate information for model selection are partial F-tests on nested models and comparing BICs.
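If you want to see the R^2 inflation for yourself, here's a quick toy example in R. The data and the junk predictors are completely made up, it's just to illustrate the point:

set.seed(7)
n <- 30
x <- rnorm(n)
y <- 2 * x + rnorm(n)               # y really only depends on x

junk  <- matrix(rnorm(n * 25), n)   # 25 nonsense predictors (potatoes, birds, ...)
small <- lm(y ~ x)
big   <- lm(y ~ x + junk)

summary(small)$r.squared            # respectable R^2
summary(big)$r.squared              # higher R^2, earned purely by adding garbage

BIC(small)                          # BIC, on the other hand,
BIC(big)                            # punishes the garbage model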
|
EDIT: Never mind, see Empyrean's posts re: R^2 and AIC.
|
My issue with AIC is that it doesn't punish large models as much as BIC does.
Although I suppose everyone has their own opinions.
They usually give the same results anyway.
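For what it's worth, the difference is just in the penalty term: AIC charges 2 per estimated parameter while BIC charges log(n), so once you have more than about eight observations BIC is the harsher one. A quick sanity check in R with throwaway toy data:

set.seed(3)
n <- 50
x <- rnorm(n)
z <- rnorm(n)
y <- x + rnorm(n)        # z contributes nothing to y

m1 <- lm(y ~ x)
m2 <- lm(y ~ x + z)

AIC(m2) - AIC(m1)        # = 2 - 2*(gain in log-likelihood)
BIC(m2) - BIC(m1)        # = log(50) - 2*(gain in log-likelihood), log(50) ~ 3.9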
EDIT: One thing BIC has that AIC doesn't (I think) is that it's been proven asymptotically consistent. I don't think that's been proven for AIC yet (it might even have been disproven? I dunno, but I know for sure that BIC has been proven asymptotically consistent... not that any of this is going to matter for Xeris, anyway).
EDIT: Also it turns out the post above me has been edited so it looks like I'm talking to myself now <_<
|
The fact that income doubled yet arms sales only decreased by 4% would seem to suggest that the cause is extraneous to the arms variable.
This. Boeing is down to 2.7 billion net profit in 2008, a 34% decline, and I doubt their defense contracts went up that much. They blame reduced defense spending and less air traffic growth for their reduced profits.
"The global economy continues to weaken and is adversely affecting air traffic growth and financing," Jim McNerney, Boeing's chairman, president and chief executive, said in a conference call. "We are also expecting pressure on defense budgets in light of the economic recovery and financial rescue packages put forth by various governments."
|
On November 13 2009 02:09 Empyrean wrote: My issue with AIC is that it doesn't punish large models as much as BIC does.
Although I suppose everyone has their own opinions.
They usually give the same results anyway.
EDIT: One thing that BIC has that AIC doesn't (I think) is that it's been proven asymptotically consistent. I don't think it's been proven for AIC yet (might have even been disproven? I dunno. But I know for sure that BIC has been proven to be asymptotically consistent...not that it's going to matter for Xeris, anyway).
EDIT: Also it turns out the post above me has been edited so it looks like I'm talking to myself now <_<
Yes, sorry, I edited mine out as soon as I saw your post, since it was obvious you knew what you were talking about... and I didn't.
|
HAPPY BDAY XERIS! (even though it's tomorrow lol)
|
I am taking regression analysis right now. It is honestly the MOST difficult course I have ever taken.
|
On November 13 2009 03:29 illu wrote: I am taking regression analysis right now. It is honestly the MOST difficult course I have ever taken.
It was actually an easier course for me.
Probability was difficult as fuck, though :/
|
For someone who has only taken a college stats class: AIC and BIC?? What are those? And uh, I believe when I ran a regression in Excel it calculated something like a partial F as well.
|
Errr, what I mean by a partial F-test is really just for multiple regression cases when you're trying to decide whether or not to include variables. Basically, we're using conditional sums of squares to see if adding variables helps or not. Lots of stats packages will do this for you (the anova(model) command in R, for instance, automatically calculates this).
As for "easy" ways to choose between models, AIC and BIC are various selection criteria for different models. They basically assign a numerical score to each model; the lower the better. Most statisticians use more complex methods these days. I don't even think anyone uses AIC anymore...an additional method that some people use is stepwise regression, although it runs into the problem of encountering local minima and maxima so you might not always get to the best model.
Also if you have tons of data (hundreds of explanatory variables, or whatever), there're ways to deal with model selection, but it's both mathematically difficult and computationally intense.
I wouldn't worry about any of this stuff.
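But since Excel's output came up: in R the same ideas look roughly like this. Again, the data below is fake placeholder stuff, not the actual arms figures:

set.seed(11)
n <- 40
arms_pct <- runif(n, 30, 70)
year     <- sample(2000:2007, n, replace = TRUE)
profit   <- 4000 - 30 * arms_pct + rnorm(n, sd = 400)

small <- lm(profit ~ arms_pct)
big   <- lm(profit ~ arms_pct + year)

anova(small, big)     # partial F-test: does adding 'year' explain anything extra?
step(big, trace = 0)  # backward stepwise selection by AIC; drops what doesn't earn its keep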
|
On November 13 2009 03:30 Empyrean wrote:
On November 13 2009 03:29 illu wrote: I am taking regression analysis right now. It is honestly the MOST difficult course I have ever taken.
It was actually an easier course for me. Probability was difficult as fuck, though :/
I find probability to be a bird course.
|