Archive for the ‘privacy’ tag

No Ads, No Games, No Gimmicks


Prior to its $19 billion acquisition by Facebook in February, WhatsApp promised subscribers three things: no ads, no games, no gimmicks.  For the past five years, WhatsApp has prided itself on operating a simple, practical messaging service that protected users’ privacy by not mining their data for advertising or other profit-generating purposes.  WhatsApp enjoyed enormous success under this model, drawing approximately 450 million subscribers since its introduction. Facebook’s acquisition of WhatsApp turned heads because of the stark contrast in how the two companies handle user privacy and data protection. Facebook is notorious for a profit model built on monetizing user data, and has become a figurehead for those opposing data collection practices in the wake of the NSA leaks. On March 6, 2014, the privacy advocacy groups EPIC (Electronic Privacy Information Center) and CDD (Center for Digital Democracy) filed a complaint with the Federal Trade Commission to block the deal until Facebook provided further details on how it would use data acquired from WhatsApp subscribers.  These groups are seeking an order enjoining Facebook from making future changes to WhatsApp’s privacy policy, claiming that a policy change permitting user data collection would be an unfair and deceptive business practice on Facebook’s part.  The complaint states that “WhatsApp users could not have reasonably anticipated that by selecting a pro-privacy messaging service, they would subject their data to Facebook’s data collection practices.” Facebook has repeatedly assured users that it will not change WhatsApp’s privacy policy and will allow WhatsApp to operate as a separate business entity. However, Facebook made similar assurances following its $1 billion acquisition of Instagram in April 2012, then amended Instagram’s privacy policy and incorporated Instagram users’ data into its profit model.
It’s unclear whether WhatsApp can sustain its success under Facebook’s ownership.  Recent events have put user data collection under intense public scrutiny.  With endless messaging alternatives available, WhatsApp’s continued success may hinge on Facebook’s management of user privacy.

Written by

March 19th, 2014 at 10:37 am

Posted in Commentary


Should You Be Afraid of the Kinect?


Last month Microsoft released its newest console, the Xbox One. As part of its basic package, the Xbox One includes a separate yet required add-on called the Kinect. The Kinect combines a microphone, a camera, and a plethora of other sensors. It can see in the dark, pick up voice commands, read your heart rate, and recognize your face to sign you in. Ostensibly, the Kinect transmits voice commands to the Xbox One for quick and easy control of the device, and provides motion control for certain games. Microsoft also encourages users to run their cable box and other video devices through the Xbox One so the console can act as the center of their living room. When not in use, the Kinect stays on standby and waits for the “Xbox On” voice command that will reactivate it. While the gaming possibilities of such a device have gamers excited, the Kinect’s many capabilities also have privacy groups worried about Microsoft’s collection of user data. When the Xbox One was first announced back in May of 2013, the German Federal Data Protection Commissioner, Peter Schaar, referred to the Kinect as “a twisted nightmare,” adding that Microsoft had sold a “monitoring device” rather than a game console. These concerns were echoed in the US, where a lack of information about the newly announced device had gaming sites worried that the Kinect could not be shut off and would be nearly impossible to fully control. After the recent NSA leaks showed that Microsoft-owned properties, such as Skype, were providing information to the government, the prospect of the Kinect collecting a large amount of personal data was worrying. So should you be worried about Microsoft distributing your data? Fortunately, since the initial outcry, Microsoft has taken several steps to reassure users that their data will not be used inappropriately.
For example, Microsoft has promised that it will not sell any data to advertisers, and that the Kinect can be fully disabled and even unplugged without worry. Even your facial recognition information (used as an alternate method of logging in) is converted to a string of numbers and stays on the console, rather than being processed by Microsoft. The only information that Microsoft reserves the right to explicitly monitor is voice chatting (although this does not include Skype conversations). Microsoft also recently joined other technology companies in publishing an open letter to the President and the members of Congress calling for surveillance reform. All in all, Microsoft has taken admirable steps to protect its users’ privacy.
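The idea of facial data that "stays on the console" is easy to sketch. The snippet below is a toy illustration, not Microsoft's actual (and proprietary) pipeline: raw face data is reduced to a short list of numbers from which the original image cannot be reconstructed, and comparison happens entirely on the local device.

```python
def face_to_template(face_pixels):
    """Reduce raw face data to an irreversible numeric template.

    Stand-in for a real biometric pipeline: we collapse each block of
    four pixel values into their average, so the template reveals far
    less than the image it came from and never needs to leave the device.
    """
    block = 4
    return [sum(face_pixels[i:i + block]) / block
            for i in range(0, len(face_pixels), block)]


def matches(template_a, template_b, tolerance=5.0):
    """Compare two locally stored templates by mean absolute difference."""
    diff = sum(abs(a - b) for a, b in zip(template_a, template_b))
    return diff / len(template_a) < tolerance
```

In this model, sign-in sends nothing to a server: the enrolled template and the fresh capture are both numbers held on the console, and only the boolean match result is ever used.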

Written by

December 12th, 2013 at 1:14 pm

Posted in Commentary


San Diego Pilots Facial Recognition Technology for Law Enforcement Officials


Earlier this year, 133 Galaxy tablets and smartphones were distributed to 25 law enforcement agencies in the San Diego region as part of a pilot program. Their purpose: to allow San Diego law enforcement officials to make use of mobile facial recognition technology in daily operations. Facial images captured by the mobile devices are run through the Tactical Identification System, a new mobile facial recognition technology that matches images officers take in the field with images from databases containing approximately 348,000 San Diego County arrestees and over 1.4 million booking photos. The Tactical Identification System, which pulls mugshots from the statewide Cal-Photo law enforcement database and also has access to 32 million driver’s license photos, is coordinated by the San Diego Association of Governments and counts over 25 federal, state and local agencies among its participants. The San Diego County Sheriff’s Department and San Diego Police Department, which received a combined 91 of the 133 total mobile devices, together have made nearly 2,000 queries into the system since the pilot program was launched at the beginning of the year. Officers have stated that the Tactical Identification System’s facial recognition capabilities have proven useful in identifying people who refuse to provide their names or present false identification. The technology has also been cited as valuable in identifying immigrants who don’t have authorization to be in the United States. This pilot program, which relies on technology developed for military use, has largely avoided the public spotlight, most likely because the technology was deployed in the field without notice or public hearings. Despite that fact, privacy concerns have still been raised about the program and the erosion of privacy its expansion could bring.
The former executive director of the American Civil Liberties Union of San Diego & Imperial Counties, Kevin Keenan, has expressed concerns about the “monitoring of everyone’s action..." and "storage in perpetuity.” The ACLU has been voicing concerns about facial recognition technology for over a decade, identifying the technology’s potential inaccuracies and susceptibility to abuse as serious issues. Proponents of the program have countered by pointing out that the facial recognition software has built-in privacy safeguards. Specifically, after field images are run through the system they are deleted from the central database (but the images do remain on the mobile devices until they are manually deleted by an officer). Proponents have also highlighted the fact that the system has valuable applications beyond identifying criminal suspects, including identifying unresponsive, injured persons with no identifying documents. While the legality of the use of facial recognition technology by law enforcement has not yet been tested in the courts, such a challenge seems likely if the pilot program succeeds and is expanded throughout the San Diego Sheriff’s Department and San Diego Police Department. The court’s decision will be of great importance not only to the citizens of San Diego and San Diego law enforcement, but also to the multibillion-dollar biometrics industry. Over 70% of the purchases in the biometrics market, which is expected to grow to $9.37 billion by 2014, are made by law enforcement, the military, and other branches of government. Facial recognition technology is just beginning to play what could ultimately be a very important role in United States law enforcement. At present, just remember that if a San Diego police officer uses his smartphone to take your picture, there may be a lot more going on than meets the eye.
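The matching step such a system performs can be sketched in a few lines. This is a hypothetical illustration, not the Tactical Identification System's actual algorithm: it assumes each photo has already been reduced to a numeric feature vector, and identifies a field ("probe") image by finding the most similar database vector above a confidence threshold.

```python
def identify(probe_vec, database, threshold=0.9):
    """Return the best database match above the threshold, or None.

    database maps a record label to a precomputed feature vector;
    similarity is plain cosine similarity between vectors.
    """
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = lambda x: sum(a * a for a in x) ** 0.5
        return dot / (norm(u) * norm(v))

    best_label, best_score = None, threshold
    for label, vec in database.items():
        score = cosine(probe_vec, vec)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

The threshold is the privacy-relevant knob: set too low, a poor-quality field photo can "match" an innocent person; real deployments tune it against exactly the inaccuracy concerns the ACLU raises.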
Sources:
http://cironline.org/reports/facial-recognition-once-battlefield-tool-lands-san-diego-county-5502#
http://www.californiacountynews.org/2013/05/facial-recognition-technology-available-for-san-diego-law-enforcement.html

Written by

November 18th, 2013 at 11:28 am

Germany and Brazil Object to United States’ Spying using United Nations Procedures


The United States is facing reproach from fellow United Nations members for domestic and international espionage. On November 1, United Nations diplomats from Germany and Brazil began circulating a draft resolution appealing to the UN High Commissioner for Human Rights to probe "the protection of the right to privacy in the context of domestic and extraterritorial, including massive, surveillance of communications, their interception and collection of personal data." The draft resolution will also be orally presented to the UN General Assembly’s committee on social and humanitarian affairs. The proposed resolution is a response to anger over allegations regarding the United States National Security Agency’s (NSA) surveillance practices. Fugitive former United States intelligence contractor Edward Snowden was the source of the information on the United States’ spying. The situation presents an interesting example of the interaction between technology, domestic law, international law, human rights, and foreign affairs. Critics regard the NSA’s practices as a violation of the right to privacy, and thus a human rights violation. The right to privacy is guaranteed in Article 12 of the Universal Declaration of Human Rights, which was adopted by the UN General Assembly in 1948. Germany and Brazil are specifically concerned about reports that the NSA may have tapped the mobile phone of German Chancellor Angela Merkel and spied on the private email communications of Brazilian President Dilma Rousseff. Both leaders have denounced the NSA’s spying. German intelligence chiefs met with White House representatives in Washington, DC this week. In contrast, President Rousseff canceled a planned visit to the United States. The draft resolution calls for a report to be developed by the UN human rights chief over the next two years.
The purpose of the report would be to clarify the principles, standards, and best practices for conducting national security surveillance within the boundaries of international human rights law, including "digital communications [monitoring] and the use of other intelligence technologies." The proposal also recommends that members review and improve their surveillance practices and oversight mechanisms. If the resolution is passed, it could carry political weight and establish a forum for other members to object to the NSA’s practices, although resolutions adopted by the General Assembly are non-binding on member nations.

Written by

November 4th, 2013 at 12:05 pm

Body Cameras for Police Officers


Using video cameras to record what a police officer did is not a new phenomenon.  Allegations of racial profiling and other police misconduct, as well as erosion of the public’s confidence in the police, had prompted police departments across the country to install in-car cameras to provide objective accounts of traffic stops and police encounters with the public. [1] A study conducted by the International Association of Chiefs of Police (IACP) found that in-car police cameras overall had a positive impact – increasing officer safety, boosting citizens’ confidence in the police by recording inappropriate police behavior, and reducing frivolous complaints against police for lack of professionalism or courtesy. [2] However, these in-car cameras can capture only about 5 percent of what a police officer does, and much of what occurs in the field is lost. [3] With no unbiased evidence, complaints filed against a police officer devolve into a “he-said-she-said” argument. In response to the costs of civil litigation, worries about police accountability, and the limited effectiveness of in-car cameras, some police departments have started equipping their officers with portable body cameras. [4][5] One recent study in Rialto, California shows that in the first year that body cameras were introduced, “the number of complaints filed against officers fell by 88 percent compared with the previous 12 months.” [6] In addition, “[u]se of force by officers fell by almost 60 percent over the same period.” [7] Two months ago, a federal judge ruled that the NYPD’s stop-and-frisk tactics were unconstitutional and ordered the NYPD to initiate a one-year pilot program that would require officers to wear body cameras to record their dealings with the public. [8][9] However, not everyone is happy with the idea of equipping police officers with body cameras.  Although public advocates do see body cameras as a deterrent to police misconduct, a principal complaint is privacy invasion.
[10] Critics are concerned with the handling and storing of the data captured on these cameras. [11] The potential for abuse is significant.  For example, data captured by an officer should not be broadcast on the evening news; nor should it be emailed around the police department. [12] Police officers often interact with citizens who are in a sensitive, embarrassing, or traumatic state, and that information should not be easily accessible or distributed. [13] And opposition to body cameras does not come only from the public. Although they recognize the value of body cameras, some police officers and government officials see them as an encumbrance. [14][15] Despite the opposition, police departments have started pilot programs to test the effectiveness of these cameras, especially in larger cities such as Los Angeles and New York City. [16][17] Installing body cameras on police officers may be an effective way of using technology to increase police accountability and to provide unbiased accounts of contentious events. There is potential for this technology to become the new industry standard, like the in-car dashboard camera; but it should not be used if a police department cannot adequately regulate its use so as to prevent invasions of privacy and significant encumbrance on officers.

Written by

October 16th, 2013 at 1:15 pm

Posted in Commentary


Who Am I? Property and Privacy Concerns of the Future


Computers and the internet continue to revolutionize the ways we collect and distribute information.  Many privacy concerns have accompanied these technological advancements.  Earlier this year, in a New York Times article, some of these privacy concerns were made readily apparent.  Using supposedly anonymous DNA sequences from a publicly available database, a scientist was able to identify the names behind five of those samples.  The participants in this database had volunteered, and had in fact signed a form saying that the researchers could not guarantee their privacy.  Even so, there are several potential legal concerns that this experiment illuminated. First, third-party family members’ privacy could be violated, even if they had not voluntarily participated in a genetic database.  In the above demonstration, the scientist was not only able to identify the name behind the DNA sequence, he was also able to identify the family members of the participant using online genealogical databases.  Given that science has identified genetic risk factors that can potentially lead to health problems, these family members might not want to be identified.  If a child of the participant was identified as having a 50% or 100% chance of inheriting some genetic risk factor, they might fear being denied insurance coverage or being subjected to discriminatory hiring practices.  Consider, for example, someone sharing their home phone number without the permission of other household occupants.  If the house starts getting lots of calls from telemarketers, it could be an unpleasant experience for everyone, not just the person who gave out the phone number.  This is a relatively benign example, but it demonstrates how volunteering information that is common to both parties could lead to unwanted consequences for the group that has unwittingly been conscripted.
Is someone allowed to voluntarily donate their DNA to science when it could potentially cause problems for their family members? This leads to a second question: how much of our DNA do we actually own, and what bundle of property rights does this ownership grant?  There was a recent debate about whether a company could patent individual genes associated with breast cancer. See Ass'n for Molecular Pathology v. Myriad Genetics, Inc., 133 S.Ct. 2107 (2013) (holding that isolated naturally occurring DNA is not patentable, but synthetically created complementary DNA can be patented).  The Supreme Court seems to be correct that a company cannot “invent” a gene, but even if it can’t, that doesn't necessarily resolve whether individuals own their DNA.  Humans share about 99% of their DNA with chimpanzees as well as bonobos. They share an even higher percentage with each other, and identical twins share nearly identical DNA.  Further, DNA has never been truly under an individual’s exclusive control.  We leave our genetic material everywhere we go as we shed hundreds of thousands of skin cells every day.  If we share so much of our DNA with others (indeed sometimes almost all of it), and it is so readily available for sample collection, how much of it are we capable of protecting for privacy reasons?  Finally, if we don’t own our genes or DNA, what are the implications?  Do we own larger parts of our bodies which are merely expressions of these genes?  Right now, I think I am in control of my body, and I would like to keep it that way.  However, as technology continues to progress, these are questions that our society and government will have to resolve.

Written by

September 18th, 2013 at 11:06 am

Scroogled?: Microsoft’s Attack Campaign on Gmail


Microsoft has created a new ad campaign attacking Gmail. And for good reason: Gmail had 425 million active users as of June 2012, while as of November 2012 the new Outlook had only 25 million.  Microsoft's campaign seizes on the fact that Gmail "scans emails" in order to create personalized advertisements for users.  Microsoft claims that Outlook does not similarly scan through emails to create advertisements (it does, however, scan emails to filter out spam, viruses, and other dangers). The truth? Gmail does scan emails in order to create personalized ads, but according to Google, no humans or Google employees ever read them. Google stresses that these advertisements are necessary to keep the email service free. In an effort to capture some Gmail users, Microsoft tried (and failed) to get the FTC to sue Google for antitrust violations. Microsoft is now following up by trying to get Gmail users to switch to Outlook, accusing Google of violating privacy rights that consumers seem to care about. Are Google's actions legal? When you have a Gmail account, you agree to Google's privacy policy, which states: "Google also uses this scanning technology to deliver targeted text ads and other related information. This is completely automated and involves no humans." However, Google has been sued multiple times by non-Gmail users (who have obviously not agreed to the privacy policy) because their emails are also scanned when they send messages to Gmail accounts.  No cases have gone to trial yet, but because no one except the intended recipient receives the content of any email sent through Gmail, Gmail is likely not breaking any laws by conducting automated scanning of emails for advertising purposes. Regardless, Microsoft is smart to recognize that people are upset by these alleged privacy violations, and its new advertisements and commercials use this to Outlook's advantage.
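The kind of "completely automated" scanning at issue can be illustrated with a toy example. The keyword table and matching rule below are invented for illustration; Google's actual ad-selection system is far more sophisticated, but the legally relevant point is the same: a program, not a person, reads the message.

```python
# Hypothetical keyword-to-ad table; real systems use auction-based
# targeting, not a fixed lookup like this.
AD_KEYWORDS = {
    "flight": "Cheap airfare deals!",
    "camera": "50% off DSLRs this week",
    "mortgage": "Refinance at low rates",
}


def pick_ads(email_body):
    """Select ads whose trigger keyword appears in the message.

    Purely mechanical string matching: no human ever sees the text,
    which is roughly the distinction Google's policy draws.
    """
    words = email_body.lower().split()
    return [ad for kw, ad in AD_KEYWORDS.items() if kw in words]
```

Whether courts will treat this kind of automated reading differently from human reading is exactly what the pending non-Gmail-user suits may test.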
"There's what I call the creepy line, and the Google policy about a lot of these things is to get right up to the creepy line, but not cross it." -Eric Schmidt, Google Executive Chairman.

Written by

February 24th, 2013 at 10:56 am

Survey says nearly half of lawyers want to move key functions into “the cloud”


As more of our lives and more of our work move into the digital realm, a Legal IT Professionals survey indicates a split in the profession over whether firms should move key technology functions into “the cloud”. The survey’s sample size was fairly small (438 respondents), yet 45% of lawyers and paralegals were in favor of the shift, with a slightly larger 46% opposing it (the remaining 9% very diplomatically had no opinion on the issue). Small to mid-size firms were more likely to be in favor, with 57% of firms boasting over 1,000 fee earners opposing the move. This is unsurprising, as larger firms tend to have in-house IT departments that might suffer from the move. What is surprising is that such a large share of the profession seems so willing to embrace what would doubtlessly be a huge change for the field. On one hand, it would certainly make remote access easier, which may explain the high number of lawyers in favor of the move. Yet increasing the technological complexity of day-to-day legal work will involve training staff in the new processes, taking risks with much of the firm’s documentation, and ultimately, opening up a large amount of confidential information to the risk of hacking. It is likely that none of these problems will ultimately prevent the shift from occurring, and 81% of respondents indicated they thought it would likely happen in the next decade. The willingness of such a large swath of a generally conservative, risk-averse profession to make the leap is still worth noting, however. In a profession that tends to eschew development for stability and to prefer precedent over novelty, the fact that these numbers are so high already may tell us a lot about the way all of society has embraced technology over the past few decades, and how much larger a role it is likely to play in our lives in years to come.

Written by

February 20th, 2013 at 3:21 pm

Are Your E-mail Communications Protected by the Stored Communications Act?


Last month, the Supreme Court of South Carolina ruled that the Stored Communications Act (“SCA”) did not protect e-mails contained in a user’s webmail account. Jennings v. Jennings, No. 27177, 2012 WL 4808545 (S.C. Oct. 10, 2012). The e-mail user sued his wife and her relative for violating the SCA by accessing his Yahoo! account to obtain e-mails he exchanged with another woman. The SCA was enacted in 1986 as part of the Electronic Communications Privacy Act and provides protection to electronic communication service providers and users by limiting when the government can compel disclosure of certain communications, limiting when service providers can voluntarily disclose information, and providing a cause of action against a person who intentionally obtains an “electronic communication while in electronic storage” without authorization. 18 U.S.C. § 2701(a)(2) (2000). Due to outdated definitions, the SCA affords little protection to internet communications exchanged today. An electronic communication service is a service providing “electronic storage,” which is defined as “(A) any temporary intermediate storage of a wire or electronic communication incidental to the electronic transmission thereof; and (B) any storage of such communication by an electronic communication service for purposes of backup protection of such communication.” 18 U.S.C. § 2510(17) (2000). This definition of electronic storage tracked the way e-mail was used at the time the SCA was enacted; mail was temporarily copied and stored before being downloaded to the recipient’s computer. Today, webmail services allow the user to access mail on the web through any computer rather than require the user to download the mail onto their personal computer, which raises the question of whether webmail is ever in temporary intermediate storage or stored solely for backup protection.
The Department of Justice (“DOJ”) has adopted a narrow interpretation of “electronic storage.” According to the DOJ, a communication is not in electronic storage under § 2510(17)(A) unless it is stored in the course of transmission, and communications stored as backup protection under § 2510(17)(B) are those that are stored by the service provider as a backup copy prior to delivery to the recipient. CCIPS, U.S. Dep’t of Justice, Searching and Seizing Computers and Obtaining Electronic Evidence in Criminal Investigations, 123 (3d ed. 2009). Conversely, in Theofel v. Farey-Jones, the Ninth Circuit found that e-mail read by the recipient but still available on the server was stored for the purpose of “backup protection” and thus protected by the SCA under § 2510(17)(B). 359 F.3d 1066 (9th Cir. 2004). In Jennings, a lower court relied on Theofel in determining that the e-mails were in electronic storage “for purposes of backup protection.” The South Carolina Supreme Court reversed, holding that the “passive inaction” involved in opening e-mail and leaving the single copy on the server cannot constitute storage for backup protection. Based on this holding, it is unclear whether any web-based e-mail communication would be protected under the SCA. Although the question of whether the e-mail could be protected under § 2510(17)(A) (“temporary intermediate storage of a wire or electronic communication”) was not raised in Jennings, it is unlikely that any opened e-mail could be said to be in “intermediate storage.” The SCA’s outdated definitions are difficult to apply to electronic communications as they are used today and therefore do not adequately protect web-based communications.

Written by

November 12th, 2012 at 8:24 am

Posted in Commentary


No Harm, No Foul?


On October 5th, Facebook moved to dismiss allegations that it tracked users’ online activity after they logged out of the social network, because the users failed to show how they were harmed. This allegation of secretive-tracking-post-logout is the central claim of the $15 billion class action suit filed against Facebook on the eve of the company’s much-maligned IPO. Facebook’s litigation strategy in this case - In re Facebook Internet Tracking Litigation - appears to be based on the line of argument often used, with much success, at the summary judgment stage in tracking cases (and privacy cases generally): that the plaintiff is not able to show how interception and collection of personal information/browsing history results in a cognizable or economic harm. But while gathering information about how a person uses the Internet, typically for purposes of enabling behavioral advertising, may not result in a substantial economic injury (as currently conceived), there is an intuition that harm is nonetheless present in this type of practice. A recent study out of Berkeley illustrates the obvious: Most Americans are uncomfortable with the collection of data about their online activity. Yet, those weirded-out or, possibly, legitimately harmed by the collection of personal information/browsing histories are currently not finding help when they turn to the courts for protection of their privacy concerns. The FTC seems to be making a concerted effort to rein in the use of tracking – note its $22.5 million settlement with Google; but also note that the Google settlement turns on the fact that Google misrepresented its policies and practices regarding its tracking cookies. Blatant misrepresentations aside, tracking of individuals by private companies remains the status quo. Maybe this is all just fine.
It may be perfectly acceptable for a person’s activity to be monitored in order to improve her Internet experience through customization – maybe behavioral advertising should cause online shoppers to rejoice for streamlining the consuming process. Or it might be that online tracking is simply unavoidable, in the sense that it’s merely a cost of entry – privacy-as-casualty of the Internet age, as Senator Franken would say. Perhaps it’s needed to keep the Internet free – or perhaps it’s not. Regardless of whether the act of monitoring and collecting information about online behavior is mostly beneficial or mostly problematic, those who find that the practice makes them uneasy – and judging by prevailing public opinion as well as the frequency and volume of lawsuits filed because of tracking, many are uneasy – will not be afforded relief by the legal system until new ways to conceive of and formulate the harm done by this type of monitoring are proposed and accepted. The questions remain how exactly to go about doing this, and whether the only way to do so effectively and comprehensively is via new legislation. Until those questions are answered, we can enjoy prescient online ads while living with the knowledge that Facebook knows more about us than the majority of our Facebook friends.
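The mechanism at the heart of the complaint is simple to model. The sketch below is a toy illustration, not Facebook's actual implementation: an identifying cookie set at login keeps being transmitted to any third-party page that embeds the company's widget, unless logging out actually clears it.

```python
class Browser:
    """Toy model of how an identifying cookie can outlive a logout."""

    def __init__(self):
        self.cookies = {}

    def login(self, user_id):
        self.cookies["session"] = "active"
        self.cookies["uid"] = user_id  # identifying cookie set at login

    def logout(self):
        # The session ends, but unless "uid" is also deleted it will
        # continue to accompany every request to the widget's domain.
        self.cookies["session"] = None

    def visit(self, page_has_widget):
        """What a third-party widget embedded on the page would receive."""
        if page_has_widget:
            return {"page_viewed": True, "cookies": dict(self.cookies)}
        return None
```

In this model the "harm" question is concrete: after logout, every widget-bearing page visit still arrives tagged with the user's identity, even though no economic loss is immediately visible.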
