Apr 2013 score is now available Online
Posted: Mon May 20, 2013 9:47 am
850 (81%)
……
(sigh)
mrfotih wrote:
62 correct
22 blanks
16 mistakes
raw score 58

Did you actually get that with your score report, or is that just an estimate? If it came with the report, that's different from the past few years' tests.
kal wrote:
I got 850 with 7 blanks and an estimated 5-7 mistakes! The 990's must have been REALLY high. 90+ correct..

Exactly the same here. I assumed my raw score would easily be over 80, so I expected a scaled score over 900.
stengah wrote:
Correct me if I'm wrong, but I thought percentile scores are not based on the test you took, but on an average of the tests over the past 3 years or something like that. There are some recent threads that discuss this.

This is correct (see http://www.ets.org/gre/subject/scores/understand/), but that paragraph doesn't specify that it is exactly "3 years".
kal wrote:
I got 850 with 7 blanks and an estimated 5-7 mistakes! The 990's must have been REALLY high. 90+ correct..

I got 950, and I left about 7 or 8 blank. However, it seems a 990 never takes more than 85 raw?
blighter wrote:
The test differs across different time zones at the very least, if not across different centres. So you guys probably didn't share a common test, unless you took the test in the same time zone.

How do you get that information?
TakeruK wrote:
stengah wrote:
Correct me if I'm wrong, but I thought percentile scores are not based on the test you took, but on an average of the tests over the past 3 years or something like that.

This is correct (see http://www.ets.org/gre/subject/scores/understand/), but that paragraph doesn't specify that it is exactly "3 years".

Do they put different weight on different tests? For example, would the scores of the people who took the test at the same time as you be weighted more? After all, different tests have quite different difficulties.
hermitw wrote:
blighter wrote:
The test differs across different time zones at the very least, if not across different centres. So you guys probably didn't share a common test, unless you took the test in the same time zone.

How do you get that information?

They can't give you the same test in different time zones because the tests are at different times. Percentile rank isn't calculated right away. First, they convert the raw score to a scaled score based on the difficulty of the test, regardless of how the people at your particular centre performed (you may live in a place where most people score very highly, but that won't affect your scaled score; theoretically, your percentile rank within your centre might be 50% while your overall percentile rank is still 95%). Percentile ranks are then assigned based on the scaled scores of everyone who took the test over the past few years. That's why your percentile rank may change over the years even after you take the test, but your scaled score will remain the same.

ETS just says:
Each GRE test score is reported with a corresponding percentile rank. A percentile rank for a score indicates the percentage of examinees who took that test and received a lower score. Regardless of when the reported scores were earned, the percentile ranks for Subject Test scores are based on the scores of all examinees who tested within a recent time period.
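The ETS definition quoted above ("the percentage of examinees who took that test and received a lower score") is simple enough to sketch in a few lines of Python. The cohort below is a tiny invented list; the real pool is every Subject Test examinee within a recent multi-year window.

```python
def percentile_rank(your_score, cohort_scores):
    """Percentage of examinees with a strictly lower scaled score,
    following the ETS definition quoted above."""
    below = sum(1 for s in cohort_scores if s < your_score)
    return round(100 * below / len(cohort_scores))

# Toy cohort of scaled scores (invented, not real ETS data).
cohort = [520, 600, 650, 700, 750, 800, 850, 900, 950, 990]
print(percentile_rank(850, cohort))  # -> 60 with this toy cohort
```

This also shows why a percentile rank can drift after the fact: the cohort list keeps growing as new test dates are folded into the window, while your scaled score stays fixed.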
TakeruK wrote:
hermitw wrote:
Do they put different weight on different tests? For example, would the scores of the people who took the test at the same time as you be weighted more? After all, different tests have quite different difficulties.

No, every single test is weighted the same when computing percentile ranks. Basically:

Scaled Score = a way to compare your raw score (i.e. the actual # correct) with those who took the exact same test as you.
Percentile Rank = a way to compare your scaled score with those who took PGREs in the last X years (where X may be 3, as in this example: http://www.ets.org/s/gre/pdf/gre_guide_table2.pdf).

These are the only two objective measures of performance that ETS can compute, so they are the results that are published and put into our score reports. Ideally, there would be a way to account for the varying difficulty of different tests (on different days/time zones), but obviously that is not possible, since it would require an objective way to quantify the "difficulty" of each test.

kal wrote:
@hermitw
How many mistakes do you reckon you had, besides the blanks? I am thinking of asking them to check my score again; that's why I'm asking.

hermitw wrote:
blighter wrote:
They can't give you the same test in different time zones because the tests are at different times. Percentile rank isn't calculated right away. First, they convert the raw score to a scaled score based on the difficulty of the test [...]

Then the problem is: how do they measure the difficulty of the test when converting to the scaled score?

hermitw wrote:
TakeruK wrote:
No, every single test is weighted the same when computing percentile ranks. [...]

This makes sense: the scaled scores come first, and then they calculate the percentiles. But how do they measure the difficulty of each test to assign the scaled score?
hermitw wrote:
This makes sense: the scaled scores come first, and then they calculate the percentiles. But how do they measure the difficulty of each test to assign the scaled score?

Short answer: they don't. The difficulty of the test is not directly factored into either score. The scaled score is supposed to be scaled to the difficulty of the test you are taking, so that the best scaled score on the test will be 990. If the test is hard, a raw score of, say, 65 might be enough for a 990, but if the test is easier, then a 990 might require a raw score of 80 (I'm just using example numbers).
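The raw → scaled → percentile pipeline described in this thread can be sketched as two lookup tables. All of the numbers below are invented for illustration, except that 850 → 81% matches the score report quoted at the top of the thread.

```python
# Hypothetical score-conversion pipeline: raw -> scaled -> percentile.
# All table values are invented; real ETS tables differ per test edition.

# Step 1: edition-specific conversion. A harder edition would map lower
# raw scores to the same scaled scores (this is what equating adjusts).
raw_to_scaled = {55: 700, 60: 750, 65: 800, 70: 850, 75: 900, 80: 950, 85: 990}

# Step 2: percentile table built from all examinees in a recent window.
# It is shared across editions, so it drifts as new cohorts are added.
scaled_to_percentile = {700: 50, 750: 62, 800: 72, 850: 81, 900: 88, 950: 94, 990: 98}

raw = 70
scaled = raw_to_scaled[raw]
percentile = scaled_to_percentile[scaled]
print(scaled, percentile)  # 850 81 with these invented tables
```

The design point the thread keeps circling: step 1 depends on which test you sat, step 2 does not, which is why the scaled score is fixed but the percentile can change between score reports.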
hermitw wrote:
This makes sense: the scaled scores come first, and then they calculate the percentiles. But how do they measure the difficulty of each test to assign the scaled score?

Good question... ETS says they measure the difficulty of a new version of the test by comparing it to a previous (known) version using a process called "equating". Wikipedia has an explanation of what equating is. From ETS:

After a new edition of a Subject Test is first administered, examinees' responses to each test question are analyzed in a variety of ways to determine whether each question functioned as expected. These analyses may reveal that a question is ambiguous, requires knowledge beyond the scope of the test, or is inappropriate for the total group or a particular subgroup of examinees taking the test. Such questions are not used in computing scores.

Following this analysis, the new test edition is equated to an existing test edition. In the equating process, statistical methods are used to assess the difficulty of the new test. Then scores are adjusted so that examinees who took a more difficult edition of the test are not penalized, and examinees who took an easier edition of the test do not have an advantage. Variations in the number of questions in the different editions of the test are also taken into account in this process.
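The ETS passage doesn't say which statistical methods the equating step uses. The simplest textbook version is linear ("mean-sigma") equating, sketched below with made-up score distributions; this illustrates the idea only and is not ETS's actual procedure.

```python
import statistics

def linear_equate(new_form, ref_form, x):
    """Map raw score x on a new test form onto the reference form's
    scale by matching means and standard deviations (mean-sigma
    equating) -- a simplified stand-in for ETS's equating step."""
    m_new, s_new = statistics.mean(new_form), statistics.pstdev(new_form)
    m_ref, s_ref = statistics.mean(ref_form), statistics.pstdev(ref_form)
    return m_ref + (s_ref / s_new) * (x - m_new)

# Invented data: the "new" form is harder, so comparable examinees
# score about 10 raw points lower on it than on the known form.
new_form = [40, 50, 60]   # raw scores on the new edition
ref_form = [50, 60, 70]   # raw scores on the reference edition
print(linear_equate(new_form, ref_form, 60))  # a 60 here ~ a 70 there
```

This is exactly the adjustment the quote describes: examinees who took the harder edition are not penalized, because their raw scores are shifted up onto the common scale before the scaled score is assigned.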