
Question

Read the excerpt from “Inflating the Software Report Card.”

Inflating the Software Report Card (excerpt)

By TRIP GABRIEL and MATT RICHTEL, OCT. 8, 2011, New York Times

The Web site of Carnegie Learning, a company started by scientists at Carnegie Mellon University that sells classroom software, trumpets this promise: “Revolutionary Math Curricula. Revolutionary Results.”

The pitch has sounded seductive to thousands of schools across the country for more than a decade. But a review by the United States Department of Education last year suggests a much less alluring come-on: Undistinguished math curricula. Unproven results.

The federal review of Carnegie Learning’s flagship software, Cognitive T, said the program had “no discernible effects” on the standardized test scores of high school students. A separate 2009 federal look at 10 major software products for teaching algebra as well as elementary and middle school math and reading found that nine of them, including Cognitive T, “did not have statistically significant effects on test scores.”

Amid a classroom-based software boom estimated at $2.2 billion a year, debate continues to rage over the effectiveness of technology on learning and how best to measure it. But it is hard to tell that from technology companies’ promotional materials.

Many companies ignore well-regarded independent studies that test their products’ effectiveness. Carnegie’s Web site, for example, makes no mention of the 2010 review, by the Education Department’s What Works Clearinghouse, which analyzed 24 studies of Cognitive T’s effectiveness but found that only four of those met high research standards. Some firms misrepresent research by cherry-picking results and promote surveys or limited case studies that lack the scientific rigor required by the clearinghouse and other authorities.

“The advertising from the companies is tremendous oversell compared to what they can actually demonstrate,” said Grover J. Whitehurst, a former director of the Institute of Education Sciences, the federal agency that includes What Works.

School officials, confronted with a morass of complicated and sometimes conflicting research, often buy products based on personal impressions, marketing hype or faith in technology for its own sake.

“They want the shiny new one,” said Peter Cohen, chief executive of Pearson School, a leading publisher of classroom texts and software. “They always want the latest, when other things have been proven the longest and demonstrated to get results.”

Carnegie, one of the most respected of the educational software firms, is hardly alone in overpromising or misleading. The Web site of Houghton Mifflin Harcourt says that “based on scientific research, Destination Reading is a powerful early literacy and adolescent literacy program,” but it fails to mention that it was one of the products the Department of Education found in 2009 not to have statistically significant effects on test scores.

Similarly, Pearson’s Web site cites several studies of its own to support its claim that Waterford Early Learning improves literacy, without acknowledging the same 2009 study’s conclusion that it had little impact.

And Intel, in a Web document urging schools to buy computers for every student, acknowledges that “there are no longitudinal, randomized trials linking eLearning to positive learning outcomes.” Yet it nonetheless argues that research shows that technology can lead to more engaged and economically successful students, happier teachers and more involved parents.

a. What is the null hypothesis used in the federal review?

b. Carefully explain what is meant by “did not have statistically significant effects on test scores.”

c. How could both the Carnegie and federal studies reach opposite conclusions? Explain possible differences in their hypothesis testing processes that would lead to their respective results.

Explanation / Answer

a.

The null hypothesis is

H0: The software products have no effect on the standardized test scores of high school students (i.e., the mean test scores of students who use the software equal the mean scores of students who do not).

The alternative hypothesis is that the software products do change mean test scores.

b.

Saying a software product "did not have statistically significant effects on test scores" means that any difference observed between the scores of students who used the software and those who did not was small enough to be plausibly attributed to chance (sampling error) rather than to the software itself.

More technically, the p-value of the test exceeded the chosen significance level (typically α = 0.05): if the software truly had no effect, a difference at least as large as the one observed would not be unusual. The reviewers therefore failed to reject the null hypothesis of no effect.
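The idea in part (b) can be illustrated with a minimal sketch. The score data below are entirely hypothetical (simulated so the software has no true effect), and the Welch two-sample t-test is one standard way such comparisons are made; the federal review's exact methodology is not described in the excerpt.

```python
# Hypothetical illustration: comparing test scores of students who used the
# software vs. a control group, using Welch's two-sample t-test.
import math
import random
from statistics import mean, stdev

random.seed(42)

# Simulated scores drawn from the SAME distribution, i.e. no true effect.
software = [random.gauss(500, 50) for _ in range(100)]
control = [random.gauss(500, 50) for _ in range(100)]

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(software, control)
print(f"t = {t:.3f}, df = {df:.1f}")
# If |t| falls below the 5% critical value (about 1.98 for df near 200),
# the observed difference is consistent with chance, and we fail to
# reject H0 -- exactly what "not statistically significant" means.
```

In a real evaluation the p-value would be computed from the t distribution; the point here is only that "no significant effect" is a statement about failing to reject H0, not a proof that the effect is exactly zero.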

c.

The federal review compared the test scores of students using each of the 10 major software products with the scores of comparable students who did not use the software (a control group), applying formal hypothesis tests at a fixed significance level. Under that standard, nine of the ten products, including Cognitive T, did not have statistically significant effects on test scores.

Carnegie's own studies could reach the opposite conclusion through differences in the hypothesis testing process: small or unrepresentative samples, the absence of randomized control groups, testing many outcomes and reporting only the ones that came out significant (cherry-picking), or relying on surveys and limited case studies that lack the scientific rigor required by the What Works Clearinghouse. Any of these can produce an apparent "effect" that a well-controlled test would not confirm.

These differences in study design and rigor explain how Carnegie and the federal reviewers reached opposite conclusions from overlapping evidence.
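The cherry-picking mechanism in part (c) can be sketched numerically. The simulation below is hypothetical: it runs many small studies of a software product that, by construction, has no true effect, and counts how many cross a rough 5% significance cutoff by chance alone. (The count of 24 studies echoes the number reviewed by the clearinghouse; everything else is assumed.)

```python
# Hypothetical sketch: with no true effect, a few of many small studies
# will still look "significant" by chance -- reporting only those studies
# (cherry-picking) misrepresents the evidence.
import math
import random
from statistics import mean, stdev

random.seed(7)

def t_stat(a, b):
    """Welch's two-sample t statistic."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

n_studies = 24  # echoing the 24 Cognitive T studies the clearinghouse reviewed
significant = 0
for _ in range(n_studies):
    # Small samples, same distribution for both groups: no true effect.
    software = [random.gauss(500, 50) for _ in range(30)]
    control = [random.gauss(500, 50) for _ in range(30)]
    if abs(t_stat(software, control)) > 2.0:  # roughly the 5% critical value
        significant += 1

print(f"{significant} of {n_studies} null studies cross the cutoff")
# About 5% of such studies will cross the cutoff by chance alone; a vendor
# that advertises only those results has flipped the overall conclusion.
```

This is why a single pooled, well-controlled federal analysis and a vendor's selectively reported studies can disagree even when looking at the same product.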
