Gill v. Whitford Analysis


The most powerful jurists in the country cannot do math. In October 2017, when the Supreme Court of the United States heard oral argument in Gill v. Whitford, a landmark case that would shape the future of partisan gerrymandering, its members proved reluctant to engage seriously with statistical evidence.
A concept called the “efficiency gap” lies at the core of Gill v. Whitford: it is a simple fraction whose numerator is the difference between the two parties’ wasted votes and whose denominator is the total number of votes cast. (A party’s wasted votes are those cast for its losing candidates, plus those cast for its winning candidates beyond the bare majority needed to win.) During oral argument, Chief Justice Roberts dismissed this straightforward concept, stating that “[i]t may be simply my educational background, but I can only describe it as sociological gobbledygook.”
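To see just how little “gobbledygook” the measure involves, the fraction described above can be sketched in a few lines of code. This is a minimal illustration of the standard efficiency-gap arithmetic for two-party districts; the district vote totals are hypothetical, not drawn from any real election.

```python
def wasted_votes(votes_a, votes_b):
    """Return (wasted_a, wasted_b) for one two-party district.

    A losing party wastes every vote; a winning party wastes every
    vote beyond the bare majority needed to carry the district.
    """
    total = votes_a + votes_b
    threshold = total // 2 + 1  # bare majority needed to win
    if votes_a > votes_b:
        return votes_a - threshold, votes_b
    else:
        return votes_a, votes_b - threshold

def efficiency_gap(districts):
    """districts: list of (party A votes, party B votes) tuples.

    Returns (A's wasted votes - B's wasted votes) / total votes cast.
    A positive value means party A wasted more votes, i.e. the map
    disadvantaged party A.
    """
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        wa, wb = wasted_votes(a, b)
        wasted_a += wa
        wasted_b += wb
        total += a + b
    return (wasted_a - wasted_b) / total

# A hypothetical five-district state, 100 votes per district:
districts = [(70, 30), (70, 30), (40, 60), (45, 55), (45, 55)]
print(round(efficiency_gap(districts), 3))  # prints 0.182
```

In this made-up example, party A wins two districts by lopsided margins and narrowly loses three, wasting 168 votes to party B’s 77, for an efficiency gap of 18.2 percent. The plaintiffs in Gill argued that gaps of this magnitude signal an unconstitutional partisan gerrymander.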

The Supreme Court subsequently held in General Electric Co. v. Joiner and Kumho Tire Co. v. Carmichael that the task of “gatekeeping” – ensuring that expert testimony, including statistical evidence, is relevant and reliable – belongs to the judge rather than to independent experts. In 2000, the decisions in Daubert, Joiner, and Kumho served as the basis for amendments to Federal Rule of Evidence 702. Since then, over half of the states have adopted some variation of the so-called Daubert standard.
In light of Rule 702, judges who often lack a quantitative background took over a responsibility that previously belonged to highly trained experts. Their new role as evaluators of statistical evidence has produced considerable chaos and confusion. The interpretation of the term “peer review,” for instance, has been inconsistent among courts throughout the country. Professor Paul Giannelli writes in the Case Western Reserve Law Review that the “peer review” standard in some courts has been interpreted to mean simply that someone has double-checked a lab analyst’s results, rather than a “rigorous peer review with independent external reviewers to validate the accuracy … [and] overall consistency.”

In U.S. v. Havvard, numerous scientific findings challenged the validity and accuracy of latent fingerprint matching. The technique uses special procedures to uncover fingerprint residues that are invisible to the naked eye, and studies find that its quality and correctness can vary significantly. The court nevertheless called latent fingerprint matching the “archetype” of reliable expert testimony, asserting that it “[has] been used in ‘adversarial testing for roughly 100 years,’ which offered a greater sense of the reliability of fingerprint comparisons than could the mere publication of an article.” Though many studies point out that fingerprint collection and examination can be highly inaccurate when done without rigor, fingerprinting methods such as latent print matching have not faced a sustained challenge in federal court in nearly 100 years. Beyond latent print matching, the National Academy of Sciences, the National Commission on Forensic Science, the President’s Council of Advisors on Science and Technology, and the Texas Forensic Science Commission have found that many well-known and routinely admitted forensic techniques, such as bite-mark analysis, microscopic hair comparison, and arson evidence, are questioned by independent experts.
