Tinker Toys or Dandy Devices?

As the scope of Google’s dispute with European competition regulators widens by the month, consumers in very different positions with very different preferences have been drawn into the debate over what is and what isn’t good for them. Another group of consumers was drawn in on April 15, when the European Commission announced a formal investigation of Google’s business practices in the mobile phone operating system market that its Android has come to dominate.

Although this marks the most high-profile investigation of the popular operating system yet, it isn’t the first. Antitrust regulators in Russia announced a formal investigation in February based on complaints from Russian search engine Yandex that, in order for Russian smartphone users to access the Google Play app store, Google required smartphone manufacturers to set Google as the default search engine. A few days later, the U.S. District Court for the Northern District of California threw out a lawsuit alleging similar bundling practices.

The European Commission describes its investigation in three parts. First, regulators will examine whether Google has used its market position to require or incentivize smartphone and tablet manufacturers to exclusively pre-install Google’s own applications or services. Second, investigators will probe whether manufacturers seeking to provide modified versions of Android on other devices received push-back or threats from Google. Third, the investigation will explore whether Google tied or bundled Android with other Google applications and services.

Google has raised a number of defenses and pro-consumer explanations for its behavior. For the average smartphone user, the most interesting is the desire to give the end user a great ‘out of the box’ experience. In other words, when you first power your new phone up, and you want to browse the internet, Google thinks it prudent to have its web browser Chrome waiting there for you to open. Or if your parents finally give in and get you that smartphone so you can play that video game with the goblins and the ogres, Google doesn’t want you to have to look any further than its Google Play app store to download it.

Pro-consumer explanations likely averted stateside antitrust charges against Google for its search engine practices, which makes a great out of box experience relevant to the European Commission’s investigation. Whether it holds any water depends on the consumer under consideration. For some consumers, like this writer, the out of box experience looks a lot like the out of the box for six months experience. A phone is a phone, and a smart one just means you can use it to Google, “how to operate a system,” when your techie friend starts spouting off about mods and peer-to-peer networks.

For that friend, however, the out of box experience entails a lot more. There are browsers to download, diagnostics to run, and apps to test. The fun is in the tinkering. For consumers like her, and larger consumers, like software developers and cellphone manufacturers, there isn’t much of a sandbox to play with when the sand’s all Android and the toys are all Google.

Luckily for Google, allegations like these needn’t lead to fines or dissolution. Microsoft stood accused of similar tactics with its Internet Explorer web browser and Windows operating system. The European Commission and Microsoft eventually struck a deal where Microsoft gave European users the option of picking their web browser during their PC’s initial boot-up. Perhaps Google and the commission can reach a similar agreement that covers both you and your friend.

June 7th, 2015 at 3:59 pm

Posted in Commentary

The Future of Net Neutrality

After years of struggling with what the federal government’s role should be in regulating the “free internet,” the FCC voted to enforce net neutrality rules under Title II of the Communications Act. Under the new Rules, major Internet Service Providers (ISPs) like Verizon, AT&T and Comcast are prohibited from slowing down applications or services, accepting fees for preferential treatment or blocking lawful content. In a nutshell, the rules place ISPs under the same strict regulatory framework that governs telecommunication networks to ensure that all Internet traffic that runs through these providers is treated equally.
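
The “fast lane” behavior the Rules prohibit can be illustrated with a toy traffic scheduler (a purely hypothetical sketch; the function and source names are invented, and no real ISP equipment works this way):

```python
from collections import deque

def drain_fifo(packets):
    """Neutral treatment: serve packets strictly in arrival order."""
    queue = deque(packets)
    order = []
    while queue:
        order.append(queue.popleft())
    return order

def drain_paid_priority(packets, paying_sources):
    """A 'fast lane': packets from paying sources jump the queue."""
    fast = [p for p in packets if p["src"] in paying_sources]
    slow = [p for p in packets if p["src"] not in paying_sources]
    return fast + slow

packets = [{"src": "startup.example", "id": 1},
           {"src": "bigstream.example", "id": 2},
           {"src": "startup.example", "id": 3}]

# Under the new Rules, only the first behavior is permitted.
print([p["id"] for p in drain_fifo(packets)])                                  # [1, 2, 3]
print([p["id"] for p in drain_paid_priority(packets, {"bigstream.example"})])  # [2, 1, 3]
```

The second function captures the harm regulators worry about: identical traffic is reordered based purely on who paid, not on when it arrived.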

While the Rules have been praised by the Obama Administration and the FCC Chairman as “necessary to protect Internet openness against new tactics that would close the Internet,” there has been rapid backlash from opponents.

USTelecom, a consortium of ISPs that had filed a suit against the FCC before the Rules went public, re-filed its suit just minutes after the Rules were published in the Federal Register earlier today. USTelecom claims that the FCC used the incorrect approach to implementing net neutrality standards and argues that the reclassification of broadband Internet access as a public utility is “arbitrary, capricious, and an abuse of discretion.”

Another snag in the implementation of the FCC rules comes from Congressional support of the ISP lobby. Representative Doug Collins, a Georgia Republican, introduced a bill that would allow Congress to use an expedited legislative process to review new federal agency regulations. The measure would need only a simple majority to pass, instead of the usual 60 votes needed to overcome a filibuster. Essentially, this bill is a shortcut to undoing what Republican supporters call “heavy-handed regulations that will hamper broadband deployment and could increase taxes and fees.”

Despite this aggressive push-back from opponents, FCC Chairman Tom Wheeler is optimistic that the Rules will withstand legal challenge. The FCC would likely argue that Title II of the Communications Act of 1934 is ambiguous, and that the FCC should be granted Chevron deference in interpreting and applying it.

While Chevron deference is generous, the future of net neutrality is still uncertain, especially given the coming election. If the FCC loses the initial case and a Republican wins the 2016 Presidential Election, the case will likely never make it to the Supreme Court for review.

Regardless of politics, one thing is certain — the FCC will be facing a lot of litigation on its net neutrality rules.


May 25th, 2015 at 11:34 am

Posted in Commentary

Twitter and Cyber-bullying

Twitter has recently announced that it will be rolling out a new “quality filter” that is designed to “remove all Tweets from your notification timeline that contain threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts.” The “quality filter” is available only to verified users, since they have the most followers and are therefore susceptible to the most abuse, but Twitter has also implemented other anti-harassment tools, such as a feature that makes it easier to report abuse to law enforcement. Essentially, the quality filter and other recent features are designed to prevent cyber-bullying and protect user safety.
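
Conceptually, such a filter is a set of per-Tweet checks applied before a notification is surfaced. A minimal sketch of that idea (the rules and thresholds here are invented for illustration; Twitter has not published its actual criteria):

```python
def passes_quality_filter(tweet, seen_texts, blocklist):
    """Return True if a tweet should appear in the notification timeline.

    Mirrors the announced criteria: drop threats/abusive language,
    duplicate content, and tweets from suspicious accounts.
    """
    text = tweet["text"].lower()
    # 1. Threats or abusive language (a toy keyword list stands in
    #    for whatever classifier Twitter actually uses).
    if any(phrase in text for phrase in blocklist):
        return False
    # 2. Duplicate content already seen in this timeline.
    if text in seen_texts:
        return False
    # 3. Suspicious account: brand new and following no one (hypothetical heuristic).
    acct = tweet["account"]
    if acct["age_days"] < 1 and acct["following"] == 0:
        return False
    seen_texts.add(text)
    return True

blocklist = {"kill you", "die"}
seen = set()
tweets = [
    {"text": "Great talk today!", "account": {"age_days": 900, "following": 200}},
    {"text": "Great talk today!", "account": {"age_days": 30, "following": 50}},   # duplicate
    {"text": "I will kill you", "account": {"age_days": 400, "following": 10}},    # threat
]
print([passes_quality_filter(t, seen, blocklist) for t in tweets])  # [True, False, False]
```

Even this crude version shows the trade-off discussed below: every rule that blocks abuse also risks suppressing some legitimate speech.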

Cyber-bullying has become more and more common as Internet users are shielded by anonymity on the Web, and it is especially prevalent on Twitter. According to data from the Pew Research Center, Twitter users face many forms of harassment, including death threats, threats of sexual abuse, and stalking, and the victims of this abuse are disproportionately women. There have been several recent high-profile cases of cyber-bullying involving Twitter, including #gamergate, the harassment of Robin Williams’s daughter after his death, and Ashley Judd’s decision to press charges against trolls. These high-profile incidents have been speculatively identified as the impetus for Twitter’s implementation of anti-harassment blocking tools, including the “quality filter.”

Twitter initially positioned itself as the “free speech wing of the free speech party,” which meant that it took a neutral view on message content. That neutral view has seemingly made the company more tolerant of abuse and harassment than other social media sites, and Twitter is notoriously criticized for its failure to deal with cyber-bullying. In fact, Twitter’s CEO Dick Costolo admitted that “We suck at dealing with abuse,” apologized for his company’s failure to adequately protect its users, and acknowledged that cyber-bullying has cost the platform users. The “quality filter” and other blocking tools have emerged since Costolo took personal responsibility for Twitter’s slow response to protecting its users.

Twitter has no legal obligation to censor its users, but neither is it under any obligation imposed by the First Amendment to protect free speech. Therefore, as a private company, Twitter may balance free speech against user safety in any manner it chooses. Given the bad rap Twitter has received for not censoring enough, and the resulting loss of both low- and high-profile users, implementing more anti-harassment blocking tools is likely a good decision. Free speech is an admirable value, but it likely shouldn’t come at such a high cost to user safety.

April 17th, 2015 at 11:08 pm

Posted in Commentary

The Danger of “Just & Reasonable” Net Neutrality Rules: The Potential Toothlessness of the FCC’s New Rules

On February 26, 2015, proponents of the open Internet celebrated the Federal Communications Commission’s vote to reclassify broadband Internet as a public utility and approve new net neutrality rules. The goal of the FCC’s vote is to protect Net neutrality by requiring Internet service providers (ISPs) to treat all Internet traffic equally. Although increasing regulatory oversight of the “last mile” of the Internet is certainly a step in the right direction toward a true open Internet, this is not a clear victory for Net neutrality advocates.

On March 12, 2015, the FCC released a declaratory ruling and order that contained the FCC’s newly adopted Net neutrality rules. Because the FCC voted to reclassify broadband Internet as a public utility, all ISPs are now subject to regulation under Title II of the Communications Act of 1934. This effectively places ISPs under the same strict regulations as telephone networks. Accordingly, the document outlines strict rules for Internet providers that are designed to preserve an open Internet.

The Net neutrality rules help ensure Net neutrality by explicitly prohibiting ISPs from: 1) blocking legal content, 2) throttling, and 3) creating Internet fast lanes (accepting fees for priority treatment).

While these are all great things, Net neutrality advocates should hold off on celebrating with the top-shelf champagne because the new rules include a standard of review that can greatly undermine their robustness.

The rules require ISPs’ conduct to be “just and reasonable.” This gives the FCC the power to decide on a case-by-case basis whether an ISP has overstepped its bounds or to exempt its actions as “just and reasonable”.

The FCC itself admits that the terms just and reasonable are broad, “inviting the Commission to undertake the kind of line-drawing that is necessary to differentiate just and reasonable behavior on the one hand from unjust and unreasonable behavior on the other” (p. 127). The wording leaves the standard of review open to interpretation. It appears that the effectiveness of the Net neutrality rules in maintaining an open Internet rests on the FCC’s willingness to affirmatively act and declare an ISP’s actions as unjust and unreasonable, and thus illegal.

On the upside, the new Net neutrality rules give the FCC the powerful tools needed to effectively enforce an open Internet. However, these tools can only be effective if the FCC actually uses them. Although the rules do ban content blocking, throttling, and paid prioritization, there are numerous other ways that ISPs can violate net neutrality. For example, although the new rules also prohibit ISPs from using “reasonable network management” to charge consumers more, AT&T seems to be doing just that without any FCC condemnation by using network management to throttle its grandfathered customers with unlimited data plans after they have used 5 GB.

The new rules fail to provide consumers with a blanket shield against Net neutrality violations. Although ISPs can no longer engage in content blocking, throttling, and paid prioritization, it seems inevitable that ISPs will seek other ways to increase profits that may not comply with the open Internet concept. In order for the new rules to have actual bite, the FCC ought to broadly define the terms “unjust” and “unreasonable” and actively fight to keep Net neutrality alive and well.

April 15th, 2015 at 9:19 pm

Posted in Commentary

Glancing at the USPTO Enhanced Patent Quality Initiative

The United States Patent & Trademark Office (USPTO) recently began an enhanced patent quality initiative.  Over the past few years, the USPTO has significantly reduced patent application backlog and pendency and is now turning its attention to patent quality.  The USPTO is better positioned to address patent quality than ever before, since the America Invents Act (AIA) allows the USPTO to set its own fees and retain the fees it collects.  Previously, the USPTO was required to share a portion of its fees with other government entities.  With the ability to charge higher fees and keep the fees it collects, it is possible to imagine significant progress towards improved patent quality.  Currently, a large part of the problem is that patent examiners work in an environment where quantity is often emphasized over quality.  The patent examiner count system awards points to examiners for processing patent applications. With a new emphasis on quality and more resources at its disposal, the USPTO has the opportunity to change this environment.

The USPTO has been seeking public input and guidance to direct its continued efforts towards enhancing patent quality.  Its stated focus is on “improving patent operations and procedures to provide the best possible work products, to enhance the customer experience, and to improve existing quality metrics.” Just recently, on March 25 and 26, 2015, the USPTO held a Quality Summit with the public to discuss its outlined proposals.  The USPTO has outlined six proposals:

  • Requests for Quality Review: allowing applicants to request a review if they receive very low quality office actions
  • Automated Pre-Examination Search: searching for new tools to find better search results in less time
  • Clarity of Record: looking for ways to enhance the clarity and completeness of the prosecution record
  • Review of and Improvements to Quality Metrics: measuring the patent system and examiner performance
  • Review of Current Compact Prosecution Model: considering concerns that too many cases result in either an RCE (Request for Continued Examination) or an appeal
  • In-Person Interview Capability with All Examiners: expanding locations for conducting in-person interviews

All of these proposals could contribute to improved patent quality.  But in addition, I would like to see the USPTO commit to hiring more examiners and re-evaluating the examiner count system, or at least its importance compared to the “quality metrics” it mentions in proposal 4.

Hopefully, the USPTO is sincere in its commitment to considering public input and improving patent quality.  Increased patent quality could go a long way towards changing the conversation in the face of anti-patent “propaganda” that has become increasingly loud in the past few years.  Besides playing public relations defense against the anti-patent crowd, increasing patent quality obviously would also have many other benefits.  Intellectual property-intensive industries support at least 40 million jobs in the U.S. and contribute more than $5 trillion (nearly 35%) to US gross domestic product.  Increased patent quality will lead to more predictability for patent filers, owners, and litigants.  It will more equitably reward greater innovation with greater patent rights.  It will make it harder for the few truly bad actors to use low quality patents and extortion-like tactics to collect frivolous patent litigation settlements.  The initiative is good common sense and comes at a time when there is increased recognition and appreciation around the world of the importance of IP.  Let’s hope that the patent community takes advantage of this opportunity for change by getting involved and that the USPTO follows through on its ideas and continues to work diligently towards the important and never-ending task of improving our patent system.

April 11th, 2015 at 4:58 pm

Posted in Commentary

Autonomous Cars: The Legality of Cars on Autopilot

Mercedes, BMW, Infiniti, Honda, and Volvo have produced cars that have the ability to be in a semi-autopilot mode in certain situations. Google has even produced bubble-like experimental self-driving cars that completely take the human driver out of the equation. Recently, the chief executive officer of Tesla, Elon Musk, announced that the company would introduce cars with an autopilot mode into the U.S. market this summer.

Tesla’s anticipated product would not remove human participation completely, like the Google self-driving car, but it is the first commercially available, largely autonomous vehicle. Tesla’s car would have technology that would allow drivers to transfer control to autopilot on “major roads” such as highways. The only thing required to obtain this technology is a software update in Tesla’s current Model S sedans. This is hugely exciting news for a lot of people; not having to pay attention during the commute to and from work would allow an extra hour or so for people to be productive or get some rest.

However, there are serious legal questions regarding autonomous vehicles that have yet to be answered. For example, who will be liable if the car strikes a pedestrian while on autopilot? Will it be the driver, as the owner of the car, who maintains the ultimate ability to control the vehicle? Will it be the manufacturer or programmer who developed the software that failed to detect the pedestrian? There simply are not laws covering these scenarios in most states, let alone cohesive federal laws. At most, there are a few states that have passed laws declaring the legality of autonomous vehicles mainly for testing purposes, not for consumers. The U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) released an initial policy in May of 2013, but has not developed or released any additional guidelines.

Tesla is taking a brave first step by bringing the car to market without settled regulations. The widespread development of autonomous functions in cars indicates that full autonomy is the direction technology is heading, but the legal system has not caught up to it. Chris Urmson, the director of Google’s self-driving car project, has stated that provided the required crash-test and other safety standards are met, there are no regulatory prohibitions to autonomous vehicles. This is not because autonomous vehicles are expressly permitted, by any means; there are simply no fully developed laws or regulations tailored to the unique liability issues presented by autonomous cars. Tesla will likely be the guinea pig in the courts, and the decisions resulting from any lawsuits that arise from accidents or failures will help establish and shape the law.

April 8th, 2015 at 10:28 pm

Posted in Commentary

Will the “Blurred Lines” Verdict Fuel Excessive Litigation?

In the past two months, three major pop artists have paid royalties to older musicians because new pop songs sounded too much like older hits: Sam Smith paid Tom Petty for the similarities between “Stay With Me” and “I Won’t Back Down,” and Pharrell Williams and Robin Thicke paid the family of Marvin Gaye for the similarities between “Blurred Lines” and “Got to Give It Up.”

Concerning the Smith-Petty dispute, a mashup of the two songs seems to show strong similarities. Although Smith’s representatives and co-writers acknowledged the “undeniable similarities” of the two songs, they claimed that they were “not previously familiar with… ‘I Won’t Back Down’” and that all similarities between the songs were “complete coincidence.” The two artists settled the dispute outside of court. Tom Petty does not seem to think that Sam Smith and his co-writers infringed on purpose: “The word lawsuit was never even said and was never my intention . . . all my years of songwriting have shown me these things can happen . . . a musical accident no more no less.”

On the other hand, the Williams/Thicke and Gaye dispute was much more venomous and personal. In a federal trial in the Central District of California, a jury awarded damages of nearly $7.4 million after entertainment lawyer Richard Busch succeeded in branding Pharrell and Thicke as “liars who went beyond trying to emulate the sound of Gaye’s . . . music and copied . . . Got to Give It Up outright.” With tears in her eyes, Marvin Gaye’s daughter Nona told reporters that the verdict made her feel “Free from … Pharrell Williams and Robin Thicke’s chains and what they tried to keep on us and the lies that were told.” NYU music professor Jeff Peretz believes that the jury reacted negatively “to the hubris and to the arrogance of Robin Thicke and Pharrell” given that this suit began because Pharrell and Thicke originally filed for a declaratory judgment from the court.

So one pop musician was humble and agreed to pay an older musician for the similarities between the two songs, while the other two were arrogant enough to think they could avoid justice so they got smacked down by the ($7.4 million) Hammer of Thor. Respect your elders. Easy takeaway, right?

Here’s the problem: the Gaye v. Pharrell verdict may have opened up a minefield for future songwriters and record labels. Despite how this video may lead the average listener to believe this was an open-and-shut verdict, the two songs do not have the same “melodic and harmonic structure” according to music professor Jeff Peretz. Peretz claims what was similar about these songs was the rhythmic structure and “vibe…[which] up until this particular case, that was never a copyright-able thing.” Pharrell essentially admitted he was “channeling …that late-’70s [Marvin Gaye-esque] feeling” and was paying homage to the sound of that musical time period. “‘Blurred Lines’ differs substantially and audibly from ‘Got to Give It Up’ in . . . melody and lyrics—even lyrical topic.”

The problem with finding infringement based on “feel” is that it draws an impossible-to-find line that gives songwriters little idea of what is infringing and what is original. Everyone can name at least a few modern-day songs that sound similar to older songs. Internet mashup artists have highlighted this: a country music mashup with interchangeable elements from six country hits, Lady Gaga’s “Alejandro” and Ace of Base’s “Don’t Turn Around,” and Bruno Mars’s “Locked Out of Heaven” vs. The Police’s “Roxanne.” In fact, there is an entire website dedicated to putting together songs that sound similar. One has to wonder if anything is original anymore.

Will this newest verdict encourage musicians to start suing everyone? Will it become more lucrative to sue as opposed to writing new songs?

To conclude, I would like to emphatically state that I do not endorse the lyrics or message of the song “Blurred Lines” in any way (I much prefer “Weird” Al Yankovic’s parody “Word Crimes” which has the same catchy beat with none of the misogyny). But given that everyone can name at least one song that sounds like another, and there are now more copyrighted songs in existence than ever before (a fact that is going to continue to be true), perhaps it makes sense to worry that songwriters may have an exceedingly difficult time writing new hits without stepping on the toes of a previous musician.

April 7th, 2015 at 4:21 pm

Posted in Commentary

Increased Use of StingRay Devices May Raise More than Just Privacy Concerns

On February 22, 2015, the Washington Post ran an article about the arrest of Florida man Tadrae McKenzie.  The facts of the case were relatively unremarkable:  Mr. McKenzie was arrested on March 6, 2013 by the Tallahassee Police Department.  Mr. McKenzie was charged with robbery with a deadly weapon, a first-degree felony.  If convicted, Mr. McKenzie would have faced a prison sentence of up to 30 years.   However, luckily for Mr. McKenzie, this was not to be.  Before his trial began, the state of Florida offered him a plea bargain under which he agreed to plead guilty to a lesser charge (second-degree misdemeanor) and serve six months’ probation.

On its face, this seems like a routine story of a small-time criminal who got a lucky break from the criminal justice system.  So why did it attract the attention of a national newspaper like the Washington Post?  The answer lies in the reason behind Florida’s plea agreement offer to Mr. McKenzie.  If this case had gone to trial, the state of Florida would have been forced to disclose to Mr. McKenzie and the public information about a surveillance device known as a “Stingray” (sometimes called an “IMSI-catcher”). [1]

So what is a StingRay?  To explain this, the Post’s article included a helpful infographic.  Essentially, StingRays take advantage of a security flaw in older 2G cell signals to gain access to data stored in nearby cell phones.  Unlike the newer 3G and 4G cell signals, 2G cell signals do not authenticate the cell phone towers with which they communicate.  To gain access to nearby cell phones, a StingRay blocks 3G and 4G cell signals, which forces cell phones in the area to switch to 2G.  It then sends out a cell signal that imitates a genuine cell phone tower, which causes cell phones within range to connect with the StingRay instead of an actual tower.  Once the phone is connected, the StingRay can pull metadata such as call history and location data, all without the owner’s knowledge.
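
The downgrade-and-impersonate sequence the infographic describes can be sketched as a toy simulation (illustrative logic only; the class and attribute names are invented, and this is not working radio code):

```python
class Phone:
    """Toy model of a handset's tower-selection behavior."""
    def __init__(self, number):
        self.number = number
        self.network = "4G"

    def find_tower(self, towers):
        # Step 1: with 3G/4G jammed, the phone falls back to 2G, which
        # (unlike 3G/4G) does not authenticate the tower it connects to.
        if all(t.jams_modern_signals for t in towers):
            self.network = "2G"
        # Step 2: on 2G the phone simply connects to the strongest signal.
        return max(towers, key=lambda t: t.signal_strength)

class StingRay:
    jams_modern_signals = True   # blocks 3G/4G, forcing the 2G downgrade
    signal_strength = 100        # out-broadcasts nearby genuine towers

    def __init__(self):
        self.captured = []

    def accept(self, phone):
        # Step 3: harvest metadata from the connected phone.
        self.captured.append(phone.number)

stingray = StingRay()
phone = Phone("+1-555-0100")
tower = phone.find_tower([stingray])
tower.accept(phone)
print(phone.network, stingray.captured)  # 2G ['+1-555-0100']
```

The key point the sketch captures is that the phone's owner takes no action and sees no warning: the downgrade and the connection both happen automatically.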

It is this last part–lack of notice to the cell phone owners–that most worries civil rights advocates due to privacy concerns.  Moreover, according to documents obtained by the Electronic Privacy Information Center (EPIC) through FOIA, this is often done without first obtaining a warrant.  The FBI does not have a uniform national policy that identifies the legal authority under which it collects information using StingRay devices because Federal District Courts are split on the question of whether information collected using a StingRay falls under the third-party doctrine.  According to the FBI, some federal courts have determined that government agencies must show probable cause and obtain a warrant before conducting surveillance, while others merely require that government agencies meet the more lenient requirements contained in the Stored Communications Act, 18 U.S.C. § 2703.[2]  At the state level, governments have been generally skeptical of the use of devices like StingRays without warrants.  So far, eight legislatures–Illinois, Indiana, Maryland, Minnesota, Tennessee, Utah, Virginia and Wisconsin–have passed laws requiring warrants for tracking devices like StingRays.  The supreme courts of Florida and Massachusetts have handed down decisions to that effect as well.[3]

While I share the concerns of organizations like EPIC, I find that a more troubling aspect of this story is the extent to which Florida used its prosecutorial discretion as a tool to protect the StingRay’s secrecy.  One of the fundamental tenets of the criminal justice system is that punishment should be dealt in a way that gives fair and equal treatment under the law.  If the likelihood of a plea offer in a case is determined primarily on the basis of whether the police apprehended the defendant with the assistance of a StingRay, it would undermine the legitimacy of the criminal justice system as a whole.  This problem will likely grow more pronounced as StingRays become more common and the frequency of plea agreements like Mr. McKenzie’s increases.

Of course, it is possible that a court case will come up where a plea agreement is not possible or the defendant refuses to settle.  Perhaps, if this happens, it will finally force information regarding StingRays into the open, where the public can finally have an informed debate about their use.

[1] Currently, the FBI requires that law enforcement agencies sign a Non-Disclosure Agreement before obtaining a StingRay from its manufacturer, Florida-based Harris Corporation.  According to the FBI, the NDA is necessary to maintain the StingRay’s effectiveness as a crime-fighting tool.

[2] StingRay devices are outside the scope of the Riley decision since that case concerned cell phones that are actually seized by police in a search.  The Supreme Court has not yet ruled on the subject of cell phone tracking using StingRay devices.

[3]  The Florida case had not yet been decided at the time of Mr. McKenzie’s arrest.

April 6th, 2015 at 6:30 pm

Is There a Role for International Law in Privacy and Technology?

Recently, an increasingly large spotlight has been shone on the realm of technology, big data, and privacy. Certainly we live in a world that becomes more and more dependent upon technology. Additionally, we live in a world where business and personal lives are increasingly globalized, and the lines between national and international can be hazy at best. This is especially true in areas like technology that allow communication and transactions to occur in real time across borders. With technological and global expansion comes the risk of data breaches, as has become apparent with events such as the Edward Snowden debacle and numerous data breaches at large multinational corporations. As a necessary corollary, the public and businesses alike become entangled in a struggle to protect their information and privacy, and in cases like this people turn to the law for guidance and relief. Thus, it is worth asking: what role will International Law play in all of this?

At the MTTLR Symposium on Saturday, February 21, 2015, the international law panelists discussed their perspectives on International Law and its relation to privacy and technology. At its most basic level, International Law was discussed as a tool to aid in the protection of private rights, meaning the rights of, for example, individual citizens or individual corporate entities. In my opinion, International Law is wholly inadequate to deal with the technology and privacy issues facing the world and its citizens today with respect to protecting individual rights. My rationale is threefold and is rooted in the main limitations of International Law generally.

First, under International Law, private rights don’t exist in the sense that we understand them to exist in U.S. or other domestic law regimes. That is to say, it is not you or I who can sue another country for relief due to some wrong we believe we have suffered. Rather, an individual must petition his or her government to bring an action on his or her behalf against the foreign country. This is hard to do in and of itself, but even if one is able to get a case brought, he or she has no express right to any relief that is granted. The U.S. may elect to deliver any damages received to the individual, but in general it is the U.S.’s (or representative country’s) money to do with as it pleases. This is a huge drawback from a plaintiff’s perspective. If a plaintiff brings a case under International Law, it is unlikely to be heard, and it is even more unlikely that the plaintiff will personally recover anything in the event the case is heard. Further, even if the case is heard and the country representing the plaintiff chooses to return any damages or monetary relief directly to the plaintiff, there is still the major problem of getting decisions under international law enforced in the first place.

What good is a right without a remedy? It turns out, not very good at all. Imagine you are a U.S. plaintiff suing a Russian business under International Law, and assume you have avoided the hurdles discussed above. You have been awarded $1 million. But now the U.S. must seek to enforce this judgment against the Russian company. Unlike under U.S. law, where this would be relatively easy, International Law suffers from a serious problem of actually enforcing international court decisions. So you may win the case, but be left with nothing but a judgment to show for it.

Finally, much of International Law comes in the form of customary law, law that gains its force only from its existence in "custom" over an unspecified period of time. If international privacy laws are created, it will take time for them to truly take effect. Given the rapid change that occurs in the technology realm, it is almost impossible to imagine developing international law on technology and privacy today that will still be adequate even a few years later. Thus, there is a problem: international law we develop now may not gain its teeth for several years, by which time it likely will be unable to deal with the latest technology and privacy issues.

All that being said, discussing technology and privacy issues at an international level can be beneficial, if only because it brings these issues to the forefront across the globe and spurs discussion about more effective potential solutions. So while I am of the opinion that International Law itself is inadequate to deal with the privacy and technology issues of the modern world, I do think these are important issues that need to be solved in an internationally cooperative manner.


April 3rd, 2015 at 12:00 am

Posted in Commentary

Big Data and the Fall of Personally Identifiable Information


There has been no shortage of “Big Data” based start-ups in the last decade, and that trend shows no sign of slowing down. As computing power and sophistication continues to increase, the ability to process large sets of information has led to increasingly pointed insights about the sources of this data.

Take Target, for example. When you pay for something at Target using a credit card, you don't just exchange your credit for physical goods; you also open a file. Target records your credit card number, attaches it to a virtual file, and begins to fill that file with all sorts of information. Your purchase history is recorded: what you buy, when you bought it, how much you bought. Every time you respond to a survey, call the customer help line, or send an email, Target is aware. Any time you interact with Target, the data and metadata that characterize that interaction are parsed carefully and stored as Target's institutional knowledge. But it doesn't end there. As diligent as Target may be in monitoring your interactions, there will inevitably be holes. But fear not! Instead of settling for an inadequate picture of who you are, Target can simply buy the rest of it from the other people you do business with. "Target can buy data about your ethnicity, job history, the magazines you read, if you've ever declared bankruptcy or got divorced, the year you bought (or lost) your house, where you went to college, what kinds of topics you talk about online, whether you prefer certain brands of coffee, paper towels, cereal or applesauce, your political leanings, reading habits, charitable giving and the number of cars you own."

And the results speak for themselves. By scrutinizing the mountains of data it collects from countless individuals, patterns emerge. One particularly creepy example involved Target learning that a teenage girl was pregnant before her father did.

But taking a step back, the increasing specificity and pervasiveness of the insights that can be drawn from data analytics in the age of Big (Brother) Data poses more than immediate discomfort at the individual level (the creepy factor); it poses a broader legal problem.

Much of US data privacy law centers around the idea of Personally Identifiable Information (PII) and restricting its uses in certain contexts. However, the functionality of such a definition, one that places added weight on information that may distinguish an individual identity, relies on the existence of a practical distinction between data that is labeled PII and data that is not.

As Big Data continues to grow in both reach and sophistication, our information economy will start to approach a state in which no information falls outside of the definition of PII. The Target example makes clear that even seemingly benign information, when processed in conjunction with other “harmless” data, can reveal deeply personal facts about an individual. In a world where correlative findings have valid predictive value, the definition of PII is no longer effective in pursuing its goal of protecting individual rights to privacy.
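To make the point concrete, here is a minimal, entirely hypothetical Python sketch (the names, ZIP codes, and record fields are all invented for illustration) of how two datasets that each look anonymous or benign on their own can be joined on shared quasi-identifiers to tie sensitive purchases back to a named individual:

```python
# Hypothetical "harmless" purchase log, keyed only by an anonymous loyalty ID.
purchases = [
    {"loyalty_id": "A17", "zip": "48104", "birth_year": 1996,
     "items": ["unscented lotion", "zinc supplements"]},
    {"loyalty_id": "B42", "zip": "48104", "birth_year": 1971,
     "items": ["coffee", "paper towels"]},
]

# Hypothetical marketing list bought from a data broker. No purchases here,
# just names alongside the same everyday attributes.
broker_list = [
    {"name": "Jane Doe", "zip": "48104", "birth_year": 1996},
    {"name": "John Smith", "zip": "48104", "birth_year": 1971},
]

def link(purchases, broker_list):
    """Join the two datasets on the (zip, birth_year) quasi-identifiers."""
    matches = []
    for p in purchases:
        for b in broker_list:
            if (p["zip"], p["birth_year"]) == (b["zip"], b["birth_year"]):
                matches.append({"name": b["name"], "items": p["items"]})
    return matches

for match in link(purchases, broker_list):
    print(match["name"], "->", match["items"])
```

Neither dataset contains anything that would traditionally be labeled PII when viewed in isolation, yet the trivial join above attributes a revealing purchase history to a named person, which is exactly why a definition of privacy protection that hinges on the PII label struggles once data can be freely combined.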


March 24th, 2015 at 4:59 pm