Archive for the ‘privacy’ tag

Increased Use of StingRay Devices May Raise More than Just Privacy Concerns


On February 22, 2015, the Washington Post ran an article about the arrest of Florida man Tadrae McKenzie.  The facts of the case were relatively unremarkable:  Mr. McKenzie was arrested on March 6, 2013 by the Tallahassee Police Department.  Mr. McKenzie was charged with robbery with a deadly weapon, a first-degree felony.  If convicted, Mr. McKenzie would have faced a prison sentence of up to 30 years.  However, luckily for Mr. McKenzie, this was not to be.  Before his trial began, the state of Florida offered him a plea bargain under which he agreed to plead guilty to a lesser charge (a second-degree misdemeanor) and serve six months’ probation.

On its face, this seems like a routine story of a small-time criminal who got a lucky break from the criminal justice system.  So why did it attract the attention of a national newspaper like the Washington Post?  The answer lies in the reason behind Florida’s plea agreement offer to Mr. McKenzie.  If this case had gone to trial, the state of Florida would have been forced to disclose to Mr. McKenzie and the public information about a surveillance device known as a “StingRay” (sometimes called an “IMSI-catcher”). [1]

So what is a StingRay?  To explain this, the Post’s article included a helpful infographic.  Essentially, StingRays take advantage of a security flaw in older 2G cell signals to gain access to data stored in nearby cell phones.  Unlike the newer 3G and 4G cell signals, 2G cell signals do not authenticate the cell phone towers with which they communicate.  To gain access to nearby cell phones, a StingRay blocks 3G and 4G cell signals, which forces cell phones in the area to switch to 2G.  It then sends out a cell signal that imitates a genuine cell phone tower, which causes cell phones within range to connect with the StingRay instead of an actual tower.  Once a phone is connected, the StingRay can pull metadata such as call history and location data, all without the owner’s knowledge.
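The downgrade-and-impersonate sequence the infographic describes can be sketched as a toy simulation (the classes and method names below are purely illustrative inventions for this post, not any real StingRay interface, and nothing here touches actual radio hardware):

```python
# Toy model of the downgrade attack described above, for illustration only.

class Tower:
    is_genuine = True

class Phone:
    def __init__(self):
        # The phone prefers the newest signal its environment allows.
        self.available_signals = {"4G", "3G", "2G"}
        self.connected_tower = None

    def connect(self, tower):
        best = next(g for g in ("4G", "3G", "2G")
                    if g in self.available_signals)
        if best == "2G":
            # The 2G flaw: the phone never authenticates the tower,
            # so it will happily connect to an imposter.
            self.connected_tower = tower
        elif tower.is_genuine:
            # On 3G/4G, only a genuine tower is accepted.
            self.connected_tower = tower

class StingRay:
    is_genuine = False  # imitates a tower, but is not one

    def intercept(self, phone):
        # Step 1: jam 3G and 4G, forcing the phone down to 2G.
        phone.available_signals -= {"3G", "4G"}
        # Step 2: broadcast a fake tower signal; the phone connects.
        phone.connect(self)
        # Step 3: with the phone attached, pull metadata silently.
        if phone.connected_tower is self:
            return {"call_history": "...", "location": "..."}
        return None

phone = Phone()
phone.connect(Tower())           # normal operation: genuine tower
metadata = StingRay().intercept(phone)
print(metadata)                  # the owner is never notified
```

The key point the sketch captures is that the 2G branch skips the authentication check that the newer signals enforce, which is exactly the gap the device exploits.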

It is this last part–lack of notice to the cell phone owners–that most worries civil rights advocates due to privacy concerns.  Moreover, according to documents obtained by the Electronic Privacy Information Center (EPIC) through FOIA, this is often done without first obtaining a warrant.  The FBI does not have a uniform national policy that identifies the legal authority under which it collects information using StingRay devices because federal district courts are split on the question of whether information collected using a StingRay falls under the third-party doctrine.  According to the FBI, some federal courts have determined that government agencies must show probable cause and obtain a warrant before conducting surveillance, while others merely require that government agencies meet the more lenient requirements contained in the Stored Communications Act, 18 U.S.C. § 2703.[2]  At the state level, governments have been generally skeptical of the use of devices like StingRays without warrants.  So far, eight legislatures–Illinois, Indiana, Maryland, Minnesota, Tennessee, Utah, Virginia and Wisconsin–have passed laws requiring warrants for tracking devices like StingRays.  The supreme courts of Florida and Massachusetts have handed down decisions to that effect as well.[3]

While I share the concerns of organizations like EPIC, I find a more troubling aspect of this story to be the extent to which Florida used its prosecutorial discretion as a tool to protect the StingRay’s secrecy.  One of the fundamental tenets of the criminal justice system is that punishment should be dealt out in a way that gives everyone fair and equal treatment under the law.  If the likelihood of a plea offer in a case is determined primarily by whether the police apprehended the defendant with the assistance of a StingRay, that undermines the legitimacy of the criminal justice system as a whole.  This problem will likely grow more pronounced as StingRays become more common and the frequency of plea agreements like Mr. McKenzie’s increases.

Of course, it is possible that a court case will come up where a plea agreement is not possible or the defendant refuses to settle.  Perhaps, if this happens, it will force information regarding StingRays into the open, where the public can finally have an informed debate about their use.

[1] Currently, the FBI requires that law enforcement agencies sign a Non-Disclosure Agreement before obtaining a StingRay from its manufacturer, Florida-based Harris Corporation.  According to the FBI, the NDA is necessary to maintain the StingRay’s effectiveness as a crime-fighting tool.

[2] StingRay devices are outside the scope of the Riley decision since that case concerned cell phones that are actually seized by police in a search.  The Supreme Court has not yet ruled on the subject of cell phone tracking using StingRay devices.

[3]  The Florida case had not yet been decided at the time of Mr. McKenzie’s arrest.


April 6th, 2015 at 6:30 pm

The Right to be Forgotten


This past May, the Court of Justice of the European Union recognized “the right to be forgotten” in a case brought by Mario Costeja against a newspaper and Google, a move which fundamentally changed our notions of Internet privacy. More than a decade earlier, the newspaper had published two notices about an auction of Costeja’s property to pay off debts, and links to the notices were still appearing in the results when Googling his name. Costeja brought suit in an effort to remove the links from the search results. The court said the links could be removed if they were found to be “inadequate, irrelevant or no longer relevant.” Under the right to be forgotten, only searches that include a person’s name will trigger the removal, which means that the articles or websites can still show up in the results if the search uses a different keyword.

The European Union’s right to be forgotten has spurred much concern for free speech campaigners, who claim the ruling unjustly limits what can be published online. Privacy advocates, however, are praising the ruling for allowing people some exercise of power over what content appears about them online. This new right creates a process for people to remove links to embarrassing, outdated, and otherwise unwanted content from Google and other search engines’ results. Courts are directed to balance the public’s interest in access to the information in question and the privacy interests of the person affected by the content.

As of now, the ruling applies only to the local European sites of Google and other search engines, such as those for Germany and France. This leaves an easy loophole, because the content is still available to anyone who searches from a non-European site. European data protection representatives are, of course, eager to apply the right to be forgotten worldwide in order to make the ruling more effective. The Article 29 Working Party, a cross-European panel comprised of data protection representatives from across Europe, recently announced: “de-listing decisions must be implemented in such a way that they guarantee the effective and complete protection of data subjects’ rights and that EU law cannot be circumvented.” The Working Party has also very recently published guidelines on the implementation of the right to be forgotten ruling.

The guidelines note, “a balance of the relevant rights and interests has to be made and the outcome may depend on the nature and sensitivity of the processed data and on the interest of the public in having access to that particular information. The interest of the public will be significantly greater if the data subject plays a role in public life.” They also address concerns of how this will impact free speech: “in practice, the impact of the de-listing on individuals’ rights to freedom of expression and access to information will prove to be very limited. When assessing the relevant circumstances, [Data Protection Authorities] will systematically take into account the interest of the public in having access to the information. If the interest of the public overrides the right of the data subject, de-listing will not be appropriate.”

The representatives ask search engines to apply this new right to be forgotten to all of their websites, including their global domains, so that it can be enforced worldwide. Privacy advocates allege Google has been undermining the new right by limiting its application to local European sites, while free-speech advocates say the rule is “a gateway to Internet censorship that would whitewash the Web.” It is up to the data regulators in individual countries to decide whether to enforce the panel’s guidelines, and it remains unclear whether Google will move to implement the rule.

No Ads, No Games, No Gimmicks


Prior to its $19 billion acquisition by Facebook in February, WhatsApp promised subscribers three things: no ads, no games, no gimmicks.  For the past five years, WhatsApp prided itself on operating a simple, practical messaging service that protected users’ privacy by not accessing their data for advertising and profit-generating purposes.  WhatsApp enjoyed enormous success under this model, drawing approximately 450 million subscribers since its introduction. Facebook’s acquisition of WhatsApp turned heads because of the stark contrast in how the two companies handle user privacy and data protection. Facebook is notorious for having a profit model built on accessing user data for monetary gain, and has become a figurehead for those opposing data collection practices in the wake of the NSA leaks.

On March 6, 2014, privacy advocate groups EPIC (Electronic Privacy Information Center) and CDD (Center for Digital Democracy) filed a complaint with the Federal Trade Commission to block the deal until Facebook provided further details on how it would use data acquired from WhatsApp subscribers.  These groups are seeking a court order enjoining Facebook from implementing future changes to WhatsApp’s privacy policy, claiming that a change in the policy to permit user data collection would be an unfair and deceptive business practice on the part of Facebook.  The complaint states that, “WhatsApp users could not have reasonably anticipated that by selecting a pro-privacy messaging service, they would subject their data to Facebook’s data collection practices.”

Facebook has repeatedly assured users that it will not change WhatsApp’s privacy policy and will allow WhatsApp to function as a separate business entity. However, Facebook made similar assurances following its $1 billion acquisition of Instagram in April 2012.  Facebook then amended Instagram’s privacy policy and incorporated the data from Instagram users into its business profit model.  It’s unclear whether WhatsApp can continue its success under the ownership of Facebook.  Recent events have put user data collection under intense public scrutiny.  With endless messaging alternatives available, WhatsApp’s continued success may hinge on Facebook’s management of user privacy.


March 19th, 2014 at 10:37 am

Posted in Commentary


Should You Be Afraid of the Kinect?


Last month Microsoft released its newest console, the Xbox One. As part of its basic package the Xbox One includes a separate, yet required, add-on called the Kinect. The Kinect is a combination of a microphone, a camera, and a plethora of other sensors. The Kinect can see in the dark, pick up voice commands, read your heart rate, and recognize your face to sign in. Ostensibly, the Kinect is used to transmit voice commands to the Xbox One for quick and easy control of the device, as well as to provide motion control for certain games. Microsoft also encourages users to run their cable box and other video devices through the Xbox One to allow the console to act as the center of their living room. When not in use, the Kinect stays on standby and waits for the “Xbox On” voice command that will reactivate it. While the gaming possibilities for such a device have gamers excited, the many capabilities of the Kinect also have privacy groups worried about Microsoft’s collection of user data.

When the Xbox One was first announced back in May of 2013, the German Federal Data Protection Commissioner, Peter Schaar, referred to the Kinect as “a twisted nightmare,” adding that Microsoft had sold a “monitoring device” rather than a game console. These concerns were echoed in the US, where lack of information on the newly announced device had gaming sites worried that the Kinect could not be shut off and would be nearly impossible to fully control. After the recent NSA leaks showed that Microsoft-owned properties, such as Skype, were providing information to the government, the prospect of the Kinect collecting a large amount of personal data was worrying.

So should you be worried about Microsoft distributing your data? Luckily, since the initial outcry, Microsoft has taken several steps to reassure users that their data will not be used inappropriately. For example, Microsoft has promised that it will not sell any data to advertisers, and that the Kinect can be fully disabled and even unplugged without worry. Even your facial recognition information (used as an alternate method of logging in) is converted to a string of numbers and stays on the console, rather than being processed by Microsoft. The only information that Microsoft reserves the right to explicitly monitor is voice chatting (although this does not include Skype conversations). Microsoft also recently joined other technology companies in publishing an open letter to the President and the members of Congress calling for surveillance reform. All in all, Microsoft has taken admirable steps to protect its users’ privacy.


December 12th, 2013 at 1:14 pm

Posted in Commentary


San Diego Pilots Facial Recognition Technology for Law Enforcement Officials


Earlier this year, 133 Galaxy tablets and smartphones were distributed to 25 law enforcement agencies in the San Diego region as part of a pilot program. Their purpose: to allow San Diego law enforcement officials to make use of mobile facial recognition technology in daily operations. Facial images captured by the mobile devices are run through the Tactical Identification System, a new mobile facial recognition technology that matches images officers take in the field with images from databases containing approximately 348,000 San Diego County arrestees and over 1.4 million booking photos. The Tactical Identification System, which pulls mugshots from the statewide Cal-Photo law enforcement database and also has access to 32 million driver’s license photos, is coordinated by the San Diego Association of Governments and counts over 25 federal, state and local agencies among its participants.

The San Diego County Sheriff’s Department and San Diego Police Department, which received a combined 91 of the 133 total mobile devices, together have made nearly 2,000 queries into the system since the pilot program was launched at the beginning of the year. Officers have stated that the Tactical Identification System’s facial recognition capabilities have proven useful in identifying people who refuse to provide their names or make use of false identification. The technology has also been cited as valuable in identifying immigrants who don’t have authorization to be in the United States.

This pilot program, which relies on technology developed for military use, has largely avoided the public spotlight, most likely because the technology was deployed in the field without notice or public hearings. Despite that fact, privacy concerns have still been raised regarding the program and the erosion of privacy that could result from its expansion. The former executive director of the American Civil Liberties Union of San Diego & Imperial Counties, Kevin Keenan, has expressed concerns about the “monitoring of everyone’s action…” and “storage in perpetuity.” The ACLU has been voicing concerns about facial recognition technology for over a decade, identifying the technology’s potential inaccuracies and susceptibility to abuse as serious issues.

Proponents of the program have countered by pointing out that the facial recognition software has built-in privacy safeguards. Specifically, after field images are run through the system they are deleted from the central database (though the images do remain on the mobile devices until they are manually deleted by an officer). Proponents have also highlighted the fact that the system has valuable applications beyond identifying criminal suspects, including identifying unresponsive, injured persons carrying no identifying documents.

While the legality of the use of facial recognition technology by law enforcement has not yet been tested in the courts, such a challenge seems likely if the pilot program succeeds and is expanded throughout the San Diego Sheriff’s Department and San Diego Police Department. When it comes, the courts’ decision will be of great importance not only to the citizens of San Diego and San Diego law enforcement, but also to the multibillion dollar biometrics industry. Over 70% of the purchases in the biometrics market, which is expected to grow to $9.37 billion by 2014, are made by law enforcement, the military, and other branches of government.

Facial recognition technology is just beginning to play what could ultimately be a very important role in United States law enforcement. At present, just remember that if a San Diego police officer uses his smartphone to take your picture, there may be a lot more going on than meets the eye.



November 18th, 2013 at 11:28 am

Germany and Brazil Object to United States’ Spying using United Nations Procedures


The United States is facing reproach from fellow United Nations members for domestic and international espionage. On November 1st, United Nations diplomats from Germany and Brazil began circulating a draft resolution appealing to the UN High Commissioner for Human Rights to probe “the protection of the right to privacy in the context of domestic and extraterritorial, including massive, surveillance of communications, their interception and collection of personal data.” The draft resolution will also be orally presented to the UN General Assembly’s committee on social and humanitarian affairs.

The proposed resolution is in response to anger over allegations regarding the United States National Security Agency’s (NSA) surveillance practices. Fugitive former United States intelligence contractor Edward Snowden was a source of the information on the United States’ spying. The situation presents an interesting example of the interaction between technology, domestic law, international law, human rights, and foreign affairs. Critics regard the NSA’s practices as a violation of the right to privacy, and thus a human rights violation. The right to privacy is guaranteed in Article 12 of the Universal Declaration of Human Rights, which was adopted by the UN General Assembly in 1948.

Germany and Brazil are specifically concerned about reports that the NSA may have tapped the mobile phone of German Chancellor Angela Merkel and spied on the private email communications of Brazilian President Dilma Rousseff. Both leaders have denounced the NSA’s spying. German intelligence chiefs met with White House representatives in Washington, DC this week. In contrast, President Rousseff canceled a planned visit to the United States.

The draft resolution calls for a report to be developed by the UN human rights chief during the next two years. The purpose of the report would be to clarify the principles, standards, and best practices for conducting national security surveillance within the boundaries of international human rights law, including “digital communications [monitoring] and the use of other intelligence technologies.” The proposal also recommends that members review and improve their surveillance practices and oversight mechanisms. If the resolution is passed, it could carry political weight and establish a forum for other members to object to the NSA’s practices, although resolutions adopted by the General Assembly are non-binding on member nations.


November 4th, 2013 at 12:05 pm

Body Cameras for Police Officers


Using video cameras to record what a police officer did is not a new phenomenon.  Allegations of racial profiling and other police misconduct, as well as erosion of the public’s confidence in the police, had prompted police departments across the country to install in-car cameras to provide objective accounts of traffic stops and police encounters with the public. [1] A study conducted by the International Association of Chiefs of Police (IACP) found that in-car police cameras overall had a positive impact – increasing officer safety, boosting citizens’ confidence in the police by recording inappropriate police behavior, and reducing frivolous complaints against police for lack of professionalism or courtesy. [2] However, these in-car cameras capture only about 5 percent of what a police officer does, and much of what occurs in the field is lost. [3] Without unbiased evidence, complaints filed against a police officer devolve into a “he-said-she-said” argument.

In response to the costs of civil litigation, worries regarding police accountability, and the limited effectiveness of in-car cameras, some police departments have started equipping their officers with portable body cameras. [4][5] One recent study in Rialto, California shows that in the first year that body cameras were introduced, “the number of complaints filed against officers fell by 88 percent compared with the previous 12 months.” [6] In addition, “[u]se of force by officers fell by almost 60 percent over the same period.” [7] Two months ago, a federal judge ruled NYPD’s stop-and-frisk tactics unconstitutional and ordered the NYPD to initiate a one-year pilot program that would require officers to wear body cameras to record their dealings with the public. [8][9]

However, not everyone is happy with the idea of equipping police officers with body cameras.  Although public advocates do see body cameras as a deterrent to police misconduct, a principal complaint is privacy invasion. [10] Critics are concerned with the handling and storage of the data captured on these cameras. [11] There is a lot of potential for abuse.  For example, data captured by an officer should not be broadcast on the evening news; nor should it be emailed around the police department. [12] Police officers often interact with citizens who are in a sensitive, embarrassing or traumatic state, and that information should not be easily accessible or distributed. [13] And opposition to body cameras does not come only from the public. Although they recognize the value of body cameras, some police officers and government officials see body cameras as an encumbrance. [14][15]

Despite the opposition, police departments have started pilot programs to test out the effectiveness of these cameras, especially in larger cities such as Los Angeles and New York City. [16][17] Installing body cameras on police officers may be an effective way of utilizing technology to increase police accountability and to provide unbiased accounts of contentious events. There is a potential for this technology to become the new industry standard, like the in-car dashboard cameras; but this technology should not be used if the police department cannot adequately regulate its use so as to prevent invasion of privacy and significant encumbrance on an officer.


October 16th, 2013 at 1:15 pm

Posted in Commentary


Who Am I? Property and Privacy Concerns of the Future


Computers and the internet continue to revolutionize the ways we collect and distribute information.  Many privacy concerns have accompanied these technological advancements.  Earlier this year, a New York Times article made some of these privacy concerns readily apparent.  Using supposedly anonymous DNA sequences from a publicly available database, a scientist was able to identify the names behind five of those samples.  The participants had volunteered for the database, and had in fact signed a form acknowledging that the researchers could not guarantee their privacy.  Even so, there are several potential legal concerns that this experiment illuminated.

First, third-party family members’ privacy could be violated, even if they had not voluntarily participated in a genetic database.  In the above demonstration, the scientist was not only able to identify the name behind the DNA sequence; he was also able to identify the family members of the participant using online genealogical databases.  Given that science has identified genetic risk factors that can potentially lead to health problems, these family members might not want to be identified.  If a child of the participant was identified as having a 50% or 100% chance of inheriting some genetic risk factor, they might fear being denied insurance coverage or being subjected to discriminatory hiring practices.  Consider, for example, someone sharing their home phone number without the permission of other household occupants.  If the house starts getting lots of calls from telemarketers, it could be an unpleasant experience for everyone, not just the person who gave out the phone number.  This is a relatively benign example, but it demonstrates how volunteering information that is common to both parties could lead to unwanted consequences for the group that has unwittingly been conscripted. Is someone allowed to voluntarily donate their DNA to science, when it could potentially cause problems for their family members?

This leads to a second question: how much of our DNA do we actually own, and what bundle of property rights does this ownership grant?  There was a recent debate about whether a company could patent individual genes associated with breast cancer. See Ass’n for Molecular Pathology v. Myriad Genetics, Inc., 133 S.Ct. 2107 (holding that isolated naturally occurring DNA is not patentable, but synthetically created complementary DNA can be patented).  The Supreme Court seems to be correct that a company cannot “invent” a gene, but even if it can’t, that doesn’t necessarily say whether or not individuals own their DNA.  Humans share about 99% of their DNA with chimpanzees as well as bonobos. They share an even higher percentage with each other, and identical twins share nearly identical DNA.  Further, DNA has never been truly under an individual’s exclusive control.  We leave our genetic material everywhere we go as we shed hundreds of thousands of skin cells every day.  If we share so much of our DNA with others (indeed sometimes almost all of it), and it is so readily available for sample collection, how much of it are we capable of protecting for privacy reasons?  Finally, if we don’t own our genes or DNA, what are the implications?  Do we own the larger parts of our bodies which are merely expressions of these genes?  Right now, I think I am in control of my body, and I would like to keep it that way.  However, as technology continues to progress, these are questions that our society and government will have to resolve.



September 18th, 2013 at 11:06 am

Scroogled?: Microsoft’s Attack Campaign on Gmail


Microsoft has created a new ad campaign attacking Gmail. And for good reason: Gmail had 425 million active users as of June 2012, while as of November 2012, the new Outlook had only 25 million.  Microsoft’s campaign seizes on the fact that Gmail “scans emails” in order to create personalized advertisements for users.  Microsoft claims that Outlook does not similarly scan through emails in order to create advertisements (it does, however, scan through emails to screen for spam, viruses, and other dangers).

The truth? Gmail does scan through emails in order to create personalized ads, but, according to Google, no humans or Google employees ever read through your emails. Google stresses that these advertisements are necessary in order to keep the email service free.

In an effort to capture some Gmail users, Microsoft has tried (and failed) to get the FTC to sue Google for antitrust violations. Microsoft is now following up by trying to get Gmail users to switch to Outlook, accusing Google of violating privacy rights that consumers seem to care about.

Are Google’s actions legal? When you have a Gmail account, you agree to Google’s privacy policy. It states: “Google also uses this scanning technology to deliver targeted text ads and other related information. This is completely automated and involves no humans.” However, Google has been sued multiple times by non-Gmail users (who have obviously not agreed to the privacy policy) because their emails are also scanned when they send emails to Gmail accounts.  No cases have gone to trial yet, but because no one except the intended recipient receives the content of any email sent through Gmail, Gmail is likely not breaking any laws by conducting automated scanning of emails for advertising purposes. Regardless, it is smart of Microsoft to recognize that people are upset with these alleged privacy violations, and its new advertisements and commercials use this to Outlook’s advantage.

“There’s what I call the creepy line, and the Google policy about a lot of these things is to get right up to the creepy line, but not cross it.” -Eric Schmidt, Google Executive Chairman.


February 24th, 2013 at 10:56 am

Survey says nearly half of lawyers want to move key functions into “the cloud”


As more of our lives and more of our work move into the digital realm, a Legal IT Professionals survey indicates a split in the profession over whether firms should move key technology functions into “the cloud”. The survey’s sample size was fairly small (there were 438 respondents), yet 45% of lawyers and paralegals were in favor of the shift, with a slightly larger 46% opposing it (the remaining 9% very diplomatically had no opinion on the issue). Small to mid-size firms were more likely to be in favor, with 57% of firms boasting over 1,000 fee earners opposing the move. This is unsurprising, as larger firms tend to have in-house IT departments that might suffer from the move.

What is surprising is that such a high level of the profession seems so willing to embrace what would doubtlessly be a huge change for the field. On one hand, it would certainly make remote access easier, which may explain the high number of lawyers in favor of the move. Yet increasing the technological complexity of day-to-day legal work will involve training staff in the new processes, taking risks with a lot of the firm’s documentation, and ultimately, opening up a large amount of confidential information to the risk of hacking.

It is likely that none of these problems will ultimately prevent the shift from occurring, and 81% of those responding indicated they thought it would happen in the next decade. The willingness of such a large swath of a generally conservative, risk-averse profession to make the leap is still worth noting, however. In a profession that tends to eschew development for stability and to prefer precedent over novelty, the fact that these numbers are already so high may tell us a lot about the way all of society has embraced technology over the past few decades, and how much larger a role it is likely to play in our lives in years to come.


February 20th, 2013 at 3:21 pm