For a number of years now, privacy law scholars have been writing, discussing, and worrying about the effect of big data on different aspects of our lives.  Last year my own law school hosted a conference on big data, which covered government regulation of big data, its economic impact, and its effect on industries as diverse as health, education, and city planning.  However, up until recently there has not been much discussion about the use of big data in the criminal law context.  This is now starting to change, with a handful of articles addressing the inevitable future when courts begin to consider the use of big data in various aspects of the criminal justice system.

First, a definition: when people talk about big data, they are usually referring to the practice of accumulating extraordinarily large amounts of information from a variety of sources and then processing that information to learn something new or to provide valuable services.  Private companies have been using big data for quite some time now.  Retailers use it to analyze customer behavior and influence shopping habits (as reported in a famous New York Times Magazine cover story, Target uses large amounts of seemingly random purchasing data to determine which customers are pregnant, so that the store can send those customers coupons for pregnancy and new-baby items).  Insurance companies rely on big data to try to determine who the safest drivers and healthiest people are.  And all sorts of companies buy and sell this data to each other, seeking to mine it for information about their customers that they can use for economic advantage.
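
To make the mechanics concrete, here is a minimal sketch of the kind of scoring a retailer might run over purchase histories.  Everything in it is hypothetical: the signal items, weights, threshold, and function names are invented for illustration and do not describe Target's (or any other company's) actual model.

```python
# Purely illustrative: the signal items, weights, and threshold are invented
# for explanation and do not reflect any real retailer's model.

PREGNANCY_SIGNALS = {
    "unscented lotion": 0.3,
    "prenatal vitamins": 0.9,
    "cotton balls": 0.2,
    "large tote bag": 0.1,
}

def pregnancy_score(purchase_history):
    """Combine many individually innocuous purchases into one prediction."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchase_history)

customer_purchases = ["milk", "unscented lotion", "prenatal vitamins", "cotton balls"]
if pregnancy_score(customer_purchases) > 1.0:  # arbitrary illustrative cutoff
    print("Flag customer for pregnancy and new-baby coupon mailing")
```

The point of the sketch is that none of the inputs is sensitive in isolation; it is the aggregation that reveals something the customer never disclosed.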

The two most intriguing aspects of big data as it relates to criminal law are (1) that it can reveal otherwise unknowable information about individuals from public sources; and (2) that it can predict future behavior.  These two facts make it very likely that big data will revolutionize the criminal justice system over the next decade.  Police have already been using massive amounts of data to help decide where to deploy resources, as exemplified by the famous crime-mapping software found in police COMPSTAT programs.  And the NSA’s massive metadata collection program, which is currently being reviewed by various district courts (see here and here), is another example of law enforcement collecting, analyzing, and using big data to try to detect criminal activity–perhaps in violation of the Fourth Amendment.  But as the amount of data about individuals grows and becomes more and more accessible, we will see big data being used at every stage of the criminal justice system.

The next use of big data will probably be with regard to Terry stops.  Professor Andrew Ferguson of the University of the District of Columbia Law School wrote about this in a recent article in the University of Pennsylvania Law Review entitled “Big Data and Predictive Reasonable Suspicion.”  As Professor Ferguson notes, Terry was originally developed (and has so far been applied) in a “small data” context, in which police officers use their own individual observations of the suspect, perhaps combined with their knowledge of the neighborhood, to develop reasonable suspicion for a stop.  But the growing amount of networked information about individuals, combined with the speed at which law enforcement can now access this information, allows police to generate useful information about any individual they may see on the street.  Professor Ferguson re-imagines Detective McFadden observing John Terry in a modern-day setting:

He observes John Terry and, using facial recognition technology, identifies him and begins to investigate using big data. Detective McFadden learns through a database search that Terry has a prior criminal record, including a couple of convictions and a number of arrests. McFadden learns, through pattern–matching links, that Terry is an associate (a “hanger on”) of a notorious, violent local gangster—Billy Cox—who had been charged with several murders. McFadden also learns that Terry has a substance abuse problem and is addicted to drugs. These factors—all true, but unknown to the real Detective McFadden—are individualized and particularized to Terry. Alone, they may not constitute reasonable suspicion that Terry is committing or about to commit a particular crime. But in conjunction with Terry’s observed actions of pacing outside a store with two associates, the information makes the reasonable suspicion finding easier and, likely, more reliable.

Indeed, the standard of “reasonable suspicion” is so low that police officers may be able to use big data information to stop a suspect even though he was not engaged in any suspicious activity at the time, if a reliable algorithm predicts that he is at heightened risk for carrying a gun or narcotics.
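
To illustrate what such an algorithmic prediction might look like in operation, here is a rough sketch of a risk score of the kind just described.  The factors, weights, and cutoff are entirely hypothetical; they are not drawn from Professor Ferguson's article and do not describe any actual policing system.

```python
# Hypothetical sketch of a "predictive" risk score of the kind discussed
# above; the factors, weights, and threshold are invented for illustration.

def risk_score(prior_arrests, gang_associate, substance_abuse):
    score = 0.1 * min(prior_arrests, 5)       # criminal history, capped
    score += 0.4 if gang_associate else 0.0   # known association with violent offenders
    score += 0.2 if substance_abuse else 0.0  # drug dependency flag
    return score

# A person standing on a street corner doing nothing observably suspicious:
score = risk_score(prior_arrests=4, gang_associate=True, substance_abuse=True)
if score >= 0.8:  # arbitrary cutoff
    print("Algorithm flags subject as 'heightened risk' -- a possible basis for a stop")
```

The doctrinal question, of course, is whether a score like this, standing alone, could ever supply the "specific and articulable facts" that Terry requires.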

Professor Ferguson notes a number of benefits from this use of big data, such as improved accuracy in Terry stops; the ability to use big data to allay suspicions and thus avoid an intrusive police/citizen encounter; and greater accountability for police actions.  He also discusses the obvious dangers of widespread use of this data: the data may not be accurate; there will inevitably be false positives; and those who are poor or disenfranchised may be overrepresented in the “criminal propensity” data sets.  Indeed, the entire idea of police making decisions about whom to stop based on a science that predicts future criminal activity has a dystopian science fiction feel to it.  Professor Ferguson suggests some changes both to legal doctrine and to how we collect and use big data in order to alleviate these concerns.  He also notes that the “old-fashioned” method of relying on individual police officers’ observations–and unavoidably biased interpretations of those observations–is hardly a perfect system.

Other articles have begun to apply big data concepts to other aspects of the criminal justice system, such as parole decisions, analyzing criminal court rulings, and jury selection.  But there are still more applications that have yet to be explored.  What is the impact when police use big data analysis in search warrant applications?  What about prosecutors and defense attorneys predicting flight risks during bail hearings?  What about judges predicting future dangerousness during sentencing hearings?  And what about the criminal trial itself?  The rules of evidence allow a defendant to bring in opinion and reputation evidence to show that he is not the “type” of person who would have committed the crime in question; why not allow him to bring in far more accurate, big-data-based evidence showing that he is unlikely to have committed the crime?  The courts, no doubt, will be slow to accept this kind of information, and slower still to craft sensible rules for how to deal with it, but there is little doubt that the change will come.

Next week the Eleventh Circuit will hear the en banc appeal of United States v. Davis, a case involving the use of cell tower location information to track the movements of a suspect.  Last year a three-judge panel ruled that the government needed to obtain a warrant before it could acquire this information from the phone company.  The en banc court will now decide whether to pull back from the broad holding and expansive reasoning of that original decision.

In the Davis case, the government suspected the defendant of numerous armed robberies.  During its investigation, the government obtained a court order to acquire the cell tower location data from the defendant’s phone pursuant to the Stored Communications Act (“SCA”).  At the outset, it should be noted that this information is the least intrusive and least precise type of location information that is available from an individual’s cell phone.  Cell tower location information merely tells the phone company (and in this case, the government) the one or two towers that were used to contact the suspect’s phone when he made or received a phone call, as well as the direction of the suspect in relation to the tower(s).  These are usually, but not always, the cell towers closest to the suspect at the time he or she used the phone.  The data is created only when the suspect actually uses the cell phone–usually when he or she is making or receiving a call.  In contrast, when law enforcement officers have the phone company “ping” a cell phone, or when they use the GPS device built into the cell phone, the officers obtain a real-time, continuous, precise location of the suspect, regardless of whether the suspect is using the cell phone at the time.
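
The difference in granularity is easier to see side by side.  The record formats below are invented simplifications (real carrier records and field names differ); they are meant only to contrast the single, coarse data point generated by a call with the continuous, precise fix produced by GPS tracking.

```python
# Invented, simplified record formats contrasting the two kinds of location
# data discussed above; real carrier records are formatted differently.
from dataclasses import dataclass

@dataclass
class CellSiteRecord:
    """Generated only when a call is made or received."""
    timestamp: str
    tower_id: str   # the tower (or two towers) that handled the call
    sector: str     # rough direction of the phone relative to the tower
    # effective resolution: an area that can span many city blocks or more

@dataclass
class GpsFix:
    """Can be generated continuously, whether or not the phone is in use."""
    timestamp: str
    latitude: float
    longitude: float
    # effective resolution: typically within a few meters

call_record = CellSiteRecord("2010-08-06 23:42", "tower-114", "NE")  # example values
gps_point = GpsFix("2010-08-06 23:42", 25.7743, -80.1937)            # example values
```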

Under the SCA, the government need only show “specific and articulable facts” that the information could be linked to a crime in order to obtain a court order.  Davis argued (and the three-judge panel agreed) that acquiring this location information was a Fourth Amendment search, and so the government needed to obtain a warrant based on probable cause before gaining access to this data.  The three-judge panel acknowledged that this was a case of first impression, and so it relied heavily on Justice Alito’s four-justice concurrence in the Jones case.  In Jones, four justices found that twenty-eight days of continuous GPS surveillance constituted a Fourth Amendment search because of the “mosaic doctrine”–i.e., the government learned so much public information about the defendant that it created a mosaic which revealed private, protected information.  The three-judge panel in Davis acknowledged the difference between the two fact patterns, but argued that the case was “sufficiently similar” to make it “clearly relevant” to its analysis.

In fact, the distinctions between Davis and Jones are significant, and they all point to the conclusion that the search in Davis does not deserve Fourth Amendment protection.  The only reason the Alito concurrence found that the government surveillance in Jones constituted a search was the large number of trips that were tracked; in Davis, the government examined only a small number of incidents (specifically, the times when a robbery was occurring).  But the Davis three-judge panel ignored this distinction, arguing that tracking a person’s public location even once could constitute a search: “…[E]ven on a person’s first visit to a gynecologist, a psychiatrist, a bookie, or a priest, one may assume the visit is private if it was not conducted in a public way.”  The Jones case also involved tracking an individual at all times, while the police in Davis gained location information only when the defendant voluntarily provided it to the phone company by using his cell phone.  And finally, the Jones location information was much more precise, showing the police exactly where the defendant’s car was located; the Davis location information showed only the general area where the defendant was located.  (The three-judge panel brushed this difference aside, arguing that because the prosecutor claimed the cell phone location placed the defendant “near each of six crime scenes,” it could place him “near any other scene” as well, including the “home of a lover, or a dispensary of medication, or a place of worship, or a house of ill repute.”)

Essentially, the Davis panel appeared to be arguing that since an individual may want to keep his general location at any given time private from the government, the Fourth Amendment prevents the government from learning that information unless it first obtains a warrant.  This is certainly not supported by Jones, and it directly contradicts Knotts, which allows the government to use electronic means to track an individual over the course of one trip.

The only significant difference the Davis panel found between its case and Jones was that Jones involved tracking a car, whereas Davis involved tracking a cell phone.  The Davis court concluded that a person has a lesser reasonable expectation of privacy in the movements of a car, which is easily visible when in public, than in the movements of an individual (as tracked through a cell phone), which may not be so easily visible.  Unfortunately for the Davis court, no other court has made any such distinction.  The only distinction that matters is whether the location being tracked is in public (as in Knotts) or in private (as in Karo)–and, after Jones, whether there is so much information that it creates a mosaic.  Neither of those distinctions existed in Davis.

Finally, the Davis court had to overcome one more obstacle in order to come to its extraordinary conclusion: it had to deal with the third party doctrine.  As a general rule, a person loses all Fourth Amendment protection for any information that he or she turns over to a third party (such as a phone company).  The Davis court argued that the third party doctrine applies only when a person “voluntarily and knowingly” conveys information to a third party, and then claimed that a cell phone user has no idea that he or she is conveying his or her location to the phone company when making a cell phone call.  The first step of this argument seems questionable as a matter of law (there is no strong support for the proposition that the third party doctrine applies only to the “voluntary and knowing” transfer of information), and the second step seems flat-out wrong as a matter of fact (regardless of what the defendant in Davis might have thought, most people must know that the cell phone company needs to determine the location of their phone in order to route calls to it).

The Davis court ultimately ruled for the government and refused to suppress the evidence based on the good faith exception to the exclusionary rule, but its reasoning and dicta regarding cell phone location information still stand.  If the en banc court does not overturn that aspect of the case, it will represent a radical expansion of Jones–an expansion that is not consistent with the rest of Fourth Amendment doctrine in this area.

 

In the five years since the Supreme Court decided Herring v. United States, law professors and other commentators have written dozens of articles about it, to the point at which it seemed as though there was not much more to say about the decision.  However, a recent article posted on SSRN takes a fresh perspective on Herring by examining how the case has been handled by lower courts–and, by extension, how police departments may be reacting to Herring in order to “launder” evidence that was obtained in violation of the Fourth or Fifth Amendment.

In Herring, the Supreme Court broadened the application of the “good faith exception” to the exclusionary rule.  The arresting officer in Herring relied upon a negligent mistake by an employee of another police department when he arrested the defendant–but there was no way for the arresting officer to know that a mistake had been made.  Thus, the defendant was illegally arrested, but the arresting officer had no way of knowing at the time that the arrest was illegal.  However, unlike in previous good faith cases (like United States v. Leon), the mistake at issue was made by a law enforcement official–thus, the Court had to address the question of whether a negligent mistake made by a law enforcement officer who was not the arresting officer would still trigger the exclusionary rule.  The Court held that the exclusionary rule should not apply–essentially, it was not worth the cost of applying the exclusionary rule in a case where the arresting officer acted in good faith, even if it was originally a police error that led to the Fourth Amendment violation.  The Court noted in dicta that if the original error was the result of “deliberate misconduct, recklessness, or gross negligence,” or if there were “systemic negligence” on the part of the mistaken officer, then the good faith exception should not apply.

The Herring decision left a lot of questions unanswered: how much attenuation is necessary between the original police error and the illegal arrest before the good faith doctrine can apply?  What exactly constitutes “gross negligence” or “systemic negligence”?  And more broadly: does this case signal the beginning of the end of the exclusionary rule, now that the Court is refusing to apply the rule even when a police mistake leads to a Fourth Amendment violation?

These questions were probably left unanswered intentionally: the Supreme Court wanted to wait and see how the decision played out in the lower courts before deciding its next move with regard to the application of the exclusionary rule.  Elsewhere I have been very critical of this “wait-and-see” strategy, arguing that the Court takes so few Fourth Amendment cases that it needs to be bolder when it addresses unsettled areas of law–otherwise (as in Herring) it ends up creating more questions than it resolves.  But when the Court chooses to move incrementally, it is undoubtedly useful to take a look a few years later and see exactly what the lower courts are doing.  This is exactly what this latest law review article does.

The article, Evidence Laundering: How Herring Made Ignorance the Best Detergent, is co-written by Professor Kay Levine of Emory, Professor Jenia Turner of Southern Methodist, and Professor Ronald Wright of Wake Forest.  The article conducts an analysis of the twenty-one lower court decisions that have applied Herring in cases where one police officer acting in good faith relied on tainted information and thus violated a defendant’s Fourth Amendment rights.  In those twenty-one cases, seventeen courts allowed the evidence to be admitted, while four determined that there was “deliberate misconduct, recklessness or gross negligence” which required exclusion of the evidence.

The authors worry that police officers may launder evidence intentionally, reviving the “silver platter” doctrine from the pre-Mapp era in which state police would violate the law to obtain evidence and then hand over the tainted evidence to federal authorities, who could then legally use it in federal court.  Although most of the post-Herring cases involved fact patterns very similar to Herring (i.e., a mistake in an arrest warrant database), the authors still found cause for concern:

we identify courts that have permitted boldly problematic hand-offs of the sort contemplated by the hypothetical. But even in the less obviously problematic cases, acquiescent reasoning or insufficient fact-finding by courts suggests a tolerance for evidence laundering that not only is troubling on its face but might also inspire evasive tactics by law enforcement in the future.

The article goes on to make two important points.  First, the Herring decision is based on an individualistic, “atomistic” view of how police departments operate, which leads it to focus on the (innocent) actions of the arresting officer rather than the (negligent) actions of some other member of law enforcement.  As the article points out, this is an increasingly inaccurate way of viewing how police departments operate: in the age of computer databases and cross-jurisdictional crimes, police officers often work closely with (or at least rely upon) officers in other divisions or other departments in the course of investigating criminal activity.  Second, the article compares our current exclusionary rule to the rules followed by other countries, and finds–somewhat surprisingly–that the current state of the American exclusionary rule is now very similar to the rule in other countries.  Other countries do apply an exclusionary rule, although they do so less often than courts in the United States, and in doing so they apply a broad balancing test rather than a stricter rule-based analysis.  That is, these countries “weigh the effect of factors such as the seriousness of the misconduct, the gravity of the offense, and the importance of the rights violated.”  This is increasingly how United States courts are applying the exclusionary rule post-Herring.  The article points out some good and some bad effects of this shift from a traditionally American “rules-based” standard to an international “balancing test” analysis:

One of the chief weaknesses of the balancing approach is that its flexibility carries the risk of inconsistent and unpredictable decisions. To the extent it relies on a subjective evaluation of officers’ state of mind, a balancing approach also raises practical difficulties for defendants in proving this element. And finally, because balancing expands in some respects the range of cases in which unlawfully obtained evidence is admitted, this likely reduces the disciplinary effect of exclusion.

Yet balancing also offers some potential advantages. In certain circumstances, its openness allows judges to exclude evidence to ensure systemic integrity where our deterrence-oriented approach would call for admission. The flexibility of the balancing approach also permits courts to consider alternative remedies, such as sentence reduction or jury cautions, in some cases where our zero-sum approach would lead to admissibility because of concerns about the costs of exclusion. While empirical evidence on the practical effects of the balancing approach is very limited, existing data suggest that it need not severely undermine the exclusionary rule.

This is probably a very accurate prediction of the future of the exclusionary rule–the doctrine will ultimately complete its evolution from a rigid rule-based analysis into a flexible balancing test that will result in more illegally obtained evidence being admitted.  Whether this is a positive development depends on how much one accepts the original premise of the exclusionary rule as an effective deterrent that is worth the cost of setting some guilty people free.  In his seminal article Fourth Amendment First Principles, Professor Akhil Amar predicted that we would eventually get to the point where courts reject the exclusionary rule in favor of a more balanced reasonableness analysis.  Professor Amar believed this would be a positive development.  As he pointed out:

The exclusionary rule renders the Fourth Amendment contemptible in the eyes of judges and citizens. Judges do not like excluding bloody knives, so they distort doctrine, claiming the Fourth Amendment was not really violated. In the popular mind, the Amendment has lost its luster and become associated with grinning criminals getting off on crummy technicalities. When rapists are freed, the people are less secure in their houses and persons–and they lose respect for the Fourth Amendment. If exclusion is the remedy, all too often ordinary people will want to say that the right was not really violated. At first they will say it with a wink; later, with a frown; and one day, they will come to believe it. Here, too, unjustified expansion predictably leads to unjustified contraction elsewhere.

Professor Amar and others have argued for adopting a number of alternative remedies for Fourth Amendment violations, such as civil liability for police departments (which would require weakening or abolishing some of the qualified immunity doctrine), punitive damages, class actions, and injunctive relief.  Of course, courts will not start developing these alternative remedies in any meaningful way until the Supreme Court completes this shift once and for all and abolishes the exclusionary rule as we know it in favor of the broader balancing test that some post-Herring lower courts already seem to be applying.  Surely it is now time for the Court to take this final step and allow a more robust development of other Fourth Amendment remedies.

In what may be the most significant Fourth Amendment case this year, the Supreme Court recently heard arguments in Rodriguez v. United States, which raises the question of how long (if at all) a police officer can delay a traffic stop to conduct further criminal investigation.  In Rodriguez, Officer Struble pulled defendant Rodriguez over at 12:06 AM for veering onto the shoulder of the road.  Struble was a K-9 officer, and he had his partner Floyd with him in the police car.  Officer Struble conducted a routine check of Rodriguez’s license and other papers, asked a few questions, and issued a written warning at about 12:27.  Struble then asked whether Rodriguez would consent to the drug dog sniffing the car, and Rodriguez refused.  Officer Struble, correctly concluding that he was allowed to conduct a drug dog sniff even without consent, decided to go ahead and deploy Floyd over Rodriguez’s objections.  However, since Rodriguez had a passenger in his car, Struble decided that it would be too dangerous to conduct the dog sniff without backup, so he ordered Rodriguez out of the car and made him wait until a second officer arrived on the scene.  Approximately six or seven minutes later, a second officer arrived, and Officer Struble and Floyd walked around Rodriguez’s car.  Floyd alerted in under a minute, providing probable cause to search, and the police ultimately recovered a large bag of methamphetamine.

The question before the Court, in its narrowest formulation, is whether Officer Struble was permitted to require Rodriguez to wait for the backup to arrive.  The broader question is usually phrased as whether (and for how long) police officers can prolong a traffic stop after it is completed in order to conduct a further investigation.  But a more realistic way of framing the question is: “What is a ‘reasonable length of time’ for a traffic stop?”  In other words, the Court can analyze this case as either:

a twenty-one minute traffic stop that included a license check and routine questioning, which was completed, after which there was a seven minute delay for a drug dog sniff, or

a twenty-eight minute traffic stop that included a license check, routine questioning, and a drug dog sniff.

If the Court adopts the first analysis, there are only two arguments that the police action was constitutional, and neither of them is very convincing.  The first argument is that Officer Struble had reasonable suspicion to seize Rodriguez for those seven minutes as part of a Terry stop.  But even under the low standard of reasonable suspicion, there is not much evidence that Officer Struble had specific and articulable facts that criminal activity was afoot–and at any rate this issue was not briefed for the Court.  The second argument, which was the one adopted by the Eighth Circuit, was that the extra seven minutes was a de minimis intrusion and thus the delay was reasonable.  The Eighth Circuit cited a number of cases which have approved of a delay of two to four minutes, and so it concluded that the traffic stop in this case was not “unreasonably prolonged.”  Ginger Anders, the Assistant to the Solicitor General who argued the government’s case, also took this position.  However, the Justices were not very amenable to this line of reasoning during oral argument, and they pointed out that it would allow a police officer to do any number of things after the traffic stop was over, as long as he did not “unreasonably prolong” the stop.  For example, a police officer could question a motorist for an extra seven or eight minutes about potential criminal activity.  Justice Kagan showed her displeasure at this idea:

But then you really are saying because we have a reason to pull you over for a traffic stop, that gives us some extra time to start questioning you about other law enforcement related things and to do other law enforcement related business. And I never thought that that was the rule. I always thought is that once the objective basis . . . for the stop dissipated, that was it.

The other problem with the “reasonable delay” argument is that it results in rather arbitrary line-drawing about how long a traffic stop should take.  And once those lines are drawn, police officers will have free rein to do whatever they think is useful (as long as it is not a “Fourth Amendment search”) within that time frame.  Again, Justice Kagan summarized the problem:

But . . . where your rule is going to lead to, Ms. Anders, is something along the lines of . . . everybody will decide 30 minutes or 40 minutes, I think you say at one point in your brief, is reasonable for a traffic stop. And if you see a taillight violation, that’s 40 minutes of free time for the police officers to investigate any crimes that they want, because they can do it all in the range of what you’ve decided is kind of the reasonable traffic stop.

The second analysis–that the drug dog sniff could be thought of as a reasonable part of a routine traffic stop–is more interesting, and it received a fair amount of attention from the Justices at oral argument.  During his argument, defense attorney Shannon O’Connor struggled to define the point at which a traffic stop is completed.  After some confusion on the issue, he wisely rejected a formalist, bright-line test (e.g., the traffic stop is over when the police officer formally hands a ticket to the motorist).  Instead, he argued that a traffic stop can take no more time than is reasonably necessary to conclude the “mission” of the traffic stop.  But that merely raises the question: what is the “mission” of a traffic stop?  Currently, police officers routinely check a motorist’s license, registration, and proof of insurance; they take time to run the plates of the car to see if it has been stolen; and they ask the motorist questions about where he or she is going and why.  None of these actions has any particular relationship to the initial reason for the traffic stop–in this case, the fact that the motorist swerved onto the shoulder.  So if these unrelated actions are allowed, why not a dog sniff as well?  O’Connor even conceded that if the police officer who initially pulled over the motorist feels that she needs backup in order to safely interact with the motorist, it is constitutional to force the motorist to wait for up to thirty minutes before the backup arrives.  And yet, after the backup arrives, and while the original officer is writing out the ticket, the backup cannot go ahead and conduct a dog sniff of the car–even though the dog sniff itself is not a search?

Rory Little at SCOTUSblog thought it was “clear” after the oral argument that drug dog sniffs are “extraneous” to the mission of a traffic stop, but some of the questioning from the Justices seemed to leave open the possibility that a drug dog sniff could be a reasonable part of a routine traffic stop.  Thus, the Court could resolve this case by simply saying that drug dog sniffs are (or are not) part of the mission of a stop.  If they are not, any extra delay in order to conduct a drug dog sniff is an unconstitutional seizure.  If they are, then the police are allowed to delay the stop to conduct a drug dog sniff, as long as they are diligent in conducting that search.  But this only raises more questions: what else is part of the mission of the stop?  In particular, is extensive questioning part of the mission?  If so, how much questioning?

In resolving this problem, Professor Orin Kerr argues that traffic stops should be classified into two categories.  The more common category consists of stops that result from a mere traffic violation (as in Rodriguez).  For these stops, the mission is “to find and evaluate safety concerns”–so the officer can do only what is necessary to ensure the safety of other motorists, and she cannot conduct any criminal investigation (such as asking questions about criminal activity, asking for consent to search the car, or using a drug dog) if that conduct prolongs the stop.  The second category is when the officer has reasonable suspicion to believe that a crime has been committed–in which case the mission of the stop expands to criminal investigation.

Although this is an elegant solution, there is no case law that supports this distinction, and it adds yet another layer of complexity to an already extremely complex area of law. We would have two different types of car stops, with different rules (and two separate sets of jurisprudence) for each.  Furthermore, the distinction between a “traffic code violation” and a “criminal law violation” would be hard to apply in practice.  Many mere traffic violations could give rise to reasonable suspicion that the motorist is intoxicated–which would transform a traffic code violation into a criminal law violation.  And even in pure traffic code stops, police would have a strong incentive to try to gather information to transform the traffic code stop into a criminal law violation stop, leading to an infinite variety of fact-based puzzles as to when a stop was “transformed” and whether a police officer delayed the original traffic code stop in an attempt to bump it up to a criminal law stop.

In the end, it will be hard for the Court to resolve this case without de facto legislating from the bench by telling police officers exactly what they can and can’t do during a routine traffic stop.  The Justices will no doubt have to base this list of permissible actions on what they say is “reasonable,” but in fact they will probably conclude that what is “reasonable” is simply what the police have traditionally done during traffic stops–checking documents, running plates, and asking routine questions.  There is no particular argument that these actions are “reasonable” compared to, say, a drug dog sniff or a prolonged inspection of the outside of the car, or running a warrant check on all the passengers in the car, or any number of other things that police might want to do.

But “reasonableness” here is a term of art to mean (as it often does in the Fourth Amendment context) what we are used to seeing the police doing, rather than what is actually reasonable for the police to do.  Thus, in this case, well-established practice will create the constitutional rule, rather than (as it should be) the other way around.