Archive for December 2011

Is the FDA Draft Guidance for Mobile Medical Applications Really Too Vague?


This summer, the FDA issued a draft guidance document addressing how the agency intends to apply its regulatory authority to Mobile Medical Applications (MMAs). The guidance document defines mobile applications as “software applications that can be executed (run) on a mobile platform,” which includes tablet computers, smartphones, and personal digital assistants (PDAs). To be considered an MMA, however, the mobile application must first meet the definition of “device” under section 201(h) of the Food, Drug, and Cosmetic Act and either:

  • Be used as an accessory to a regulated medical device; or
  • Transform a mobile platform into a regulated medical device.

Many critics of the FDA draft guidance complain that it does a poor job of drawing a line between regulated and unregulated MMAs. For example, many point out that the FDA’s intention to regulate mobile applications that “allow the user to input patient-specific information and – using a formulae or processing algorithms – output a patient-specific result,” could sweep in algorithms like the BMI calculators that patients can find through a simple online search. At first glance, it does seem absurd that the FDA would take it upon itself to regulate all such online formulas or algorithms. But the critics are overstating here – the FDA is not concerned that a patient may, out of curiosity, decide to use a BMI calculator to determine whether his or her weight falls within a healthy range. In fact, the FDA stated it has no intention to regulate “mobile apps that are solely used to log, record, track, evaluate, or make decisions or suggestions related to developing or maintaining general health and wellness.” The real concern seems to be that clinicians might use these apps when “making a diagnosis or selecting a specific treatment for a patient.”
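To see just how simple the contested category can be, here is a minimal sketch (in Python, purely for illustration; it quotes no particular app and uses the standard WHO adult BMI cutoffs) of the kind of patient-specific calculation a BMI calculator performs:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / (height_m ** 2)

def bmi_category(value: float) -> str:
    """Standard WHO adult BMI categories."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

# A patient-specific input produces a patient-specific result:
print(round(bmi(70, 1.75), 1))      # 22.9
print(bmi_category(bmi(70, 1.75)))  # normal
```

Under the draft guidance, whether software this trivial becomes a regulated “device” would turn not on the formula itself but on who uses the output and for what purpose.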

But what if it is the clinician who decides to use a BMI calculator when diagnosing his or her patients?


December 17th, 2011 at 10:31 pm

Posted in Commentary


MTTLR Publishes Volume 18 Issue 1


On behalf of the Michigan Telecommunications and Technology Law Review, I am pleased to announce the publication of our Fall 2011 issue. Inside, you will find scholarship on the creativity effect in IP transactions, copyright reform, e-discovery, energy regulation, green technology in Michigan, government trade secrets, Internet regulation, and pharmaceutical patents.

Printed copies will be delivered to subscribers during the first week of January. Thanks to our authors for their fine scholarship and to our editors for their many hours of diligent editing.


  • Spoliation of Electronic Evidence: Sanctions Versus Advocacy, by Charles W. Adams
  • The People’s Trade Secrets?, by David S. Levine
  • The Endowment Effect in IP Transactions: The Case Against Debiasing, by Ofer Tur-Sinai
  • Hatch-Waxmanizing Copyright, by Michal Shur-Ofry
  • Governments, Privatization, and “Privatization”: ICANN and the GAC, by Jonathan Weinberg
  • Toward Legitimacy Through Collaborative Governance: An Analysis of the Effect of South Carolina’s Office of Regulatory Staff on Public Utility Regulation, by William H. Ellerbe
  • TEVA v. EISAI: What’s the Real “Controversy”?, by Grace Wang

Symposium: Green Technology and Economic Revitalization in Michigan

  • Creating a Plug-In Electric Vehicle Industry Cluster in Michigan: Prospects and Policy Options, by Thomas P. Lyon and Russell A. Baruffi, Jr.
  • The Case for Clean Energy Technology Manufacturing: Ten Steps Business and Industry Must Take to Optimize Opportunities in the Emerging Clean Energy Economy, by Stanley “Skip” Pruss


December 16th, 2011 at 1:27 pm

Posted in MTTLR Journal

Louie and the Intermediaries


Louis C.K., the renowned comedian, released his latest comedy special in a most unorthodox way: charging $5 for an immediate, DRM-free download from his website. Instead of going with Comedy Central (which released his last special), HBO, or any other network, he chose to cut out the intermediary entertainment companies. Already, as C.K. discussed in his interview with Terry Gross on NPR’s Fresh Air, he’s made a profit.

By exhorting his fans to pay a relatively low price and take part in the experiment of whether he could actually make money using this distribution model, C.K. shows that the role of intermediaries like music companies, book publishers, and movie studios may no longer be absolute, or even desirable, for either creators or consumers. In our Fall 2010 issue, an Article by Leah Belsky, Byron Kahr, Max Berkelhammer, and Yochai Benkler discussed the models used by some artists that rely upon online cooperation, sharing culture, and variable pricing. The success of this particular experiment adds another entry to the instances discussed in that Article, and may herald a snowballing of creator-distributed content.


December 16th, 2011 at 11:03 am

Posted in Commentary

Netflix to Join Facebook Feed


Netflix may now be able to use Facebook to further alienate its consumers while pursuing a lucrative revenue stream. The House amended the Video Privacy Protection Act (VPPA) to relax written consent requirements for sharing information on movie rentals. This opens the door to Netflix having a Spotify-like presence on Facebook feeds and finally lets me see how many of my Facebook friends truly appreciate “The Room.”

The House’s vote, which was more contentious than anticipated and attracted a bit of money, is an important step in changing our strict consent requirement (written consent required for every disclosure) for the sharing of video rental information. The amendment allows for continuous consent obtained on the Internet, while still requiring that it be “informed, written consent.” It requires such consent to be distinct and separate from other legal or financial obligations. This means consent need only be established once on the Internet, though it can be revoked. The full text is here; it will take longer to load than to read, and it does not tell us how consent on the Internet is actually achieved.

The amendment is startlingly silent on the words “opt-in” and “opt-out,” as well as on any requirement of notification when a company changes its policy. The bill looks like an opt-in regime (it still asks for “informed, written consent”) that will require notification of significant policy changes, but it does not address the political divide around these words. Of course, given our propensity for opt-out schemes, I could be reading this reasonably but incorrectly. One representative (Hanna, R-NY) explains that the amendment “clarifies” current consent law, requiring an “opt-in” but allowing an “opt-out” at any time.

Privacy advocates are skeptical of this change, raising serious concerns over the loss of meaningful control over information privacy. Mark Rotenberg of EPIC (the main opposition to this amendment) claims it destroys the right to meaningful consent. He reads the amendment as diminishing users’ control over their own personal information. The Center for Democracy and Technology was less condemnatory in its response, suggesting the amendment is considerably less important than other privacy issues Congress should be tackling. The CDT also points out that the original VPPA is a high-water mark for privacy legislation, and any degradation of it will be taken as a general attack on privacy. Members of Congress, for their part, seemed more concerned with their own personal problems than with their constituents’.

One concern that lurks in the background is that the amendment allows “consent” to be defined as “check/uncheck this box to continue on to your normal Netflix experience, and by the way we are sharing your information with all your Facebook friends and/or anyone that asks.” The actual concern might not be that ridiculous, but there is definitely a knee-jerk reaction to the thought that this purportedly “opt-in” amendment will still allow automatic enrollment in the service until you opt out. Setting aside the history of opt-in/opt-out, the statutory language potentially precludes that result by requiring consent to be given in a context free of other legal and financial obligations. Still, it’s easy to remember Spotify’s rather unchallenged entrance into Facebook as feeling like this, considering how we were all mystified when people started asking about our seemingly endless love for Kate Bush. This is not a minor concern, but it is also not something we give companies complete freedom to do.

The FTC has sent clear signals to social media sites and advertisers that dramatic changes to privacy policies concerning personal information will not be acceptable without some sort of new consent. Facebook just got in a whole heap of trouble for this sort of thing, and is now subject to privacy audits and required to obtain privacy “opt-ins” from users for substantial changes to policy. The FTC also just closed comments on another privacy enforcement action against a behavioral advertising company, ScanScout, that used “flash cookies” in a rather deceptive way. The proposed ScanScout consent order requires strict notification and meaningful opt-out mechanisms, resembling to some degree the FTC’s proposal for Do Not Track legislation or regulation. Netflix, Hulu, Amazon Instant, iTunes, and other “rental” services should be well aware of the problems they can run into and the broad enforcement power the FTC is exercising; no one should be surprised if the agency starts investigating abuses in an area of newly reduced privacy protection.

FTC involvement in major industry problems is forcing the industry to take more accountability in hopes of avoiding run-ins with the agency and reducing the need for regulatory action. Many of these principles center on either notice before a practice starts (looking more like an opt-in) or a pervasive reminder within a service (looking like the elusive “meaningful opt-out”). While the FTC might only be able to go after the big fish, industry standards reflecting the FTC’s position seem to be taking hold.

In the end, I think this bill is “ok” and will not have the negative and destructive effects that Rotenberg implies. An individual’s consent can still be revoked, and it’s difficult to see this practice taking everyone by surprise. If it did, the FTC might have some further words with Mr. Zuckerberg about their prior agreement. While I’d like to see our politicians more directly confront what we expect from our privacy regime, I’m more comfortable letting the FTC experts, industry players, and privacy advocates come to a consensus on what “works” before Congress tells us what aspect of privacy is most important or opens the floodgates of private litigation.


December 12th, 2011 at 6:11 am

Concerns for Compensating Harms During Clinical Research


The first, and often only, financial concern of research participants is how much they are getting paid for participation.  In studies that do nothing more than measure reaction time to identify a number in a serial string of letters, there is little reason to be concerned about other financial issues.  However, for more involved research, such as clinical trials of new drugs, the possibility exists that something may go wrong.  If and when it does, how should the research participant be compensated?

The Presidential Commission for the Study of Bioethical Issues recently had a meeting in Boston, at which the commission addressed the need for compensation of human research subjects who are harmed outside the scope of the research study.  As noted by Mr. Kenneth Feinberg, it is not the current policy of federal sponsors of research to compensate for injuries caused by clinical studies.  Conversely, most private sponsors of research cover any medical costs that may fall to the participant, either by covering what private insurance does not, or by paying for all treatment before insurance.  For example, the research policy posted by Pfizer states that it “arranges for medical care for any physical injury or illness that occurs as a direct result of taking part in a Pfizer-sponsored clinical study. Pfizer reimburses this medical care at no expense to the subject.”


As simple as it may seem to say that participants should receive care for injuries that occur because of clinical research, there are always considerations that require more careful thought.  Mr. Feinberg identified seven issues for the Commission to address in its final report.  Some are easy, such as whether participants should be compensated for their injuries at all.  Considering that many upfront payments for research are designed to reimburse participants for their costs, not to compensate them for the risk they accept, it is only logical that they should be compensated if that risk materializes.

More interesting from a technological standpoint is who should make, and how to make, the factual determination that the study was responsible for the injury.  For example, consider a study comparing a standard-of-care surgically implanted device with a potentially improved device, where an injury occurs during the implantation procedure.  Obviously, this situation arises only when a patient needs the procedure in the first place.  The question then becomes: was the adverse event caused by the new device, by a mistake during the surgery, or by the natural progression of the underlying disease?  Who should make this determination is a similarly difficult question.  Under some possible theories of compensation, different causes of injury may warrant different levels of compensation.  Because research necessarily involves unknown elements, a doctor involved in the study may be the only person with detailed enough knowledge to make the determination.  However, that doctor will likely also have significant conflicts of interest.  It is therefore important that there be independent review with an emphasis on the future needs of the participants, and not just on whether the study continues to be worth the risk in light of new data.  Universities and other research centers could provide this review through already established ethics panels.  Alternatively, the primary investigators or study sponsors could be responsible for contracting outside personnel to review adverse events as they occur over the course of the study.


December 7th, 2011 at 5:52 pm

FCC Certification Process for Video Relay Service Providers Needs Review


Under Title IV of the Americans with Disabilities Act, the Federal Communications Commission (FCC) is required to provide deaf and hard of hearing citizens with Telecommunications Relay Services (TRS) in the most efficient manner possible. In long distance conversations between two deaf people, or between a hearing and a deaf person, this used to be provided through the use of a TTY (also called a TDD, or Telecommunications Device for the Deaf). In a conversation between a hearing and a deaf person, a relay operator would type the message the hearing person spoke so that it reached the deaf person through the TTY, and then voice the message the deaf person typed back to the hearing caller. The 21st Century version has seen the TTY replaced with a new industry standard: Video Relay Service (VRS). With VRS, a hearing person calls a VRS provider, which uses the internet to connect with a deaf person through live-streaming video, connecting the two parties while interpreting the conversation between English and American Sign Language. These services are offered free to users, with the costs covered by the Interstate TRS Fund (Fund), amassed by collecting a portion of revenues from telecommunications companies throughout the United States.

Oversight of this fund is provided by the FCC, with VRS providers compensated for every minute of relay services rendered. The current compensation rate is set between $3 and $6 per minute, and since some VRS companies provide over 500,000 minutes of VRS per month, the amount of money involved in this industry is significant. Despite the sums at stake, the FCC’s oversight came under scrutiny recently after reports surfaced that the reimbursement system had been corrupted.
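A back-of-the-envelope calculation (a sketch using only the figures quoted above, not actual Fund disbursement data) shows why the sums involved are significant:

```python
# Figures from the post: $3–$6 per minute, up to 500,000 minutes per month.
RATE_LOW, RATE_HIGH = 3, 6           # dollars per compensated VRS minute
MINUTES_PER_MONTH = 500_000          # volume of a large provider

low = RATE_LOW * MINUTES_PER_MONTH   # monthly reimbursement at the low rate
high = RATE_HIGH * MINUTES_PER_MONTH # monthly reimbursement at the high rate
print(f"${low:,} to ${high:,} per month for a single large provider")
# $1,500,000 to $3,000,000 per month for a single large provider
```

In other words, a single high-volume provider could draw between $1.5 million and $3 million from the Fund every month, which makes the fraud incentives discussed below easy to understand.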

In response to reports of some providers using “dummy callers” to inflate the number of VRS minutes they could report to the FCC for reimbursement, in July of this year the FCC created new certification standards for becoming a VRS provider. The FCC stated that the goal of the new requirements was to detect and prevent fraud and abuse while giving the FCC more oversight of the VRS system to ensure providers were qualified. Among the new standards for receiving reimbursement through the Fund are requirements that each provider:

1) operate its own call centers and employ its own Communication Assistants (defined by the FCC as qualified sign language interpreters);

2) answer 80% of its VRS calls within 120 seconds;

3) offer 24/7 VRS access; and

4) provide each VRS user with a unique 10-digit number, so that VRS users are able to make emergency calls.

The new certification requirements also made the FCC the sole body authorized to certify a VRS provider as eligible for reimbursement from the Fund (previously, states had been able to grant licenses to VRS providers), and required every company providing VRS prior to these adjustments to reapply for certification.

While these requirements may seem necessary in light of the recent corruption uncovered in the industry, a closer analysis indicates the certification requirements could cause more problems than they solve. This is so for three reasons: first, the new requirements have proven overly burdensome for a majority of VRS providers; second, they could facilitate the creation of a detrimental monopoly in the VRS industry; and lastly, they impose no quality assurance requirements for the VRS interpreters a provider employs.

The standards these new certification requirements establish have proven too burdensome for many prior VRS providers. Because many VRS providers previously offered their services through subcontractors, the FCC requirement that all VRS systems and employees be completely owned by a provider has cut the number of providers from over 30 to fewer than 10. Despite arguments from the preeminent deaf university in the country that this requirement will limit the ability of deaf and hard of hearing citizens to gain access to VRS providers, the FCC has refused to revise it.

Given these burdensome requirements, the amount of real competition among the limited VRS providers that remain is questionable. Reports indicate that one provider, Sorenson, controls as much as 80% of the industry’s market share. No one argues that Sorenson, should this market power exist, is able to use it to hurt consumers.  Furthermore, the normal fear of monopolization may not apply here, as the FCC’s setting of the reimbursement rate through the Fund prevents Sorenson from independently controlling the cost of providing VRS.

The new certification requirements, however, lack any quality assurance standards for the sign language interpreters employed by VRS providers, a problem for numerous reasons. Without significant competition in the VRS market, providers may have an incentive to hire less qualified (and therefore cheaper) interpreters in order to extract the largest profit possible. Currently, there are no federal certification requirements for becoming a sign language interpreter. Instead, providers are given the task of determining who is qualified to interpret during VRS. Sorenson’s current hiring standards, shared by other providers in the industry, give the following as acceptable qualifications:

“NAD level IV/V; or a RID CI, CT, CI/CT, CSC; or NIC, NIC Advanced, NIC Master; or hold a state interpreter certificate at the Intermediate or Master Certificate skill levels or have the professional interpreting experience to become a Sorenson VRS interpreter, subject to skill set verification and screening.”

A close reading of this standard indicates that companies can hire interpreters who hold only a “state interpreter certificate,” or who “have the professional interpreting experience to become a Sorenson VRS interpreter.” While the first standard may sound sufficient, many states have no specified qualification requirements for providing sign language services at all. Furthermore, the second standard indicates that companies such as Sorenson “credential” their own interpreters. This is not to say that interpreters employed by the limited number of providers acceptable under the current FCC certification requirements aren’t qualified; it simply indicates there is nothing currently in place within the FCC that allows for any sort of quality assurance.

The FCC’s step to cut down on fraud and abuse of the VRS provider system is assuredly laudable. However, the hurdles these requirements create for current VRS providers, and for potential future providers seeking to enter the industry, are so high that meaningful competition in the market may have been compromised. If the requirements have limited the number of VRS providers to the point where meaningful competition does not exist, the ability of consumer demand to push providers to employ qualified sign language interpreters may be nonexistent. Finally, if the FCC is truly committed to ensuring VRS providers are “qualified,” the agency needs to address the new certification procedure’s lack of any requirement that providers employ sign language interpreters with a certain qualification level.



December 7th, 2011 at 5:51 pm

Posted in Commentary


No Overtime for Overworked IT Workers?


On October 20th, Senator Kay Hagan (D-NC) introduced the Computer Professionals Update Act (CPU Act) for consideration in the Senate. The bill seeks to amend the Fair Labor Standards Act to expand the overtime exemption for hourly workers to cover a wide swath of IT workers, including security specialists, software programmers, and database administrators. Many of these workers are salaried employees, and thus already exempt from overtime requirements. However, there are still many IT workers paid on an hourly basis, as this admittedly unscientific survey shows.

The bill is co-sponsored by three Republican senators and one other Democratic senator, and has been assigned to the Senate Committee on Health, Education, Labor, and Pensions. While the passage of this bill is far from certain–most bills die in committee–the question of why it was introduced still looms. I suggest that Sen. Hagan is motivated by something beyond the typical IT worker: the growing video game industry in her home state of North Carolina.

North Carolina has at least fourteen game developers and publishers within its borders, including the amazingly successful Gears of War developer Epic Games. As evidence of North Carolina’s push for part of the video game pie, the state recently enacted a fifteen percent tax credit for game developers. Should the federal overtime exemption pass, the state would be able to further aid one of its major growth industries.

The question of overtime hours has been a hot-button issue in the game design industry for the last few years, starting when a game developer’s spouse spoke out about the working conditions at Electronic Arts. As recently as July of this year, game developers have been complaining of unfair wage practices during grueling production schedules. In an industry where twelve-hour workdays are common, a federal law that exempts all your key employees from overtime pay may help the bottom line. Many people may dream of working in the video game and technology industry, but should this bill pass, some entry-level workers may lose out on important legal protections.

As a final point, it is interesting that none of the co-sponsors are from the technology hotbeds of California and Washington. North Carolina’s Technology Triangle may be growing, but without the support of the giants of the technology world it is doubtful that this bill will complete its journey into law. This is definitely a bill for any budding tech worker to keep an eye on.


December 7th, 2011 at 4:43 pm

Expansion of Cyber Warfare… Possibly


In a small town outside Springfield, Illinois, a controversy emerged this past month over whether the U.S. had fallen victim to its first known industrial cyber attack.  At a public water district, a water pump malfunctioned, cycling on and off until the equipment eventually burned itself out.  Cyber-security expert and blogger Joe Weiss notified the media that the Illinois Statewide Terrorism & Intelligence Center had identified the event as a cyber attack launched from somewhere in Russia.  Subsequently, the Department of Homeland Security and FBI investigated and concluded that there was no actual evidence that the facility’s controls had been hacked.  No malicious intrusion appears to have occurred.  According to a source within DHS, the Russian IP address found in the computer log was present because a contractor with remote access to the computer system had logged in while in Russia on personal business.

As implausible as this and similar scenarios might seem—hackers gaining control of industrial equipment anywhere in America, outside action movies—the U.S. has already been implicated in committing this exact activity.  Last year, the Stuxnet worm was discovered and linked to the U.S. and Israeli governments as an attempt to derail Iran’s nuclear program.  The worm spread to hundreds of thousands of computers but was ostensibly designed so specifically that it executed its destructive process only against the network of centrifuges in Iran’s nuclear facility.  While Stuxnet originally mystified security companies and programmers, it now exists as (1) a well-studied “playbook” for those wishing to design a similar computer worm and (2) part of an acknowledgement that the U.S. is innovating beyond cyber espionage and into industrial cyber warfare.  Realizing that the cyber arms race favors the innovation of hackers, which is often unpredictable for those working in cyber defense, many are asking whether any legal regime could apply to this type of attack.

Those trying to determine international rules of law are grappling with almost boundless uncertainty.  Questions of interpretation deal with whether a cyber attack might trigger the collective self-defense provision in Article V of the NATO Charter or qualify as the use of force according to Article 2(4) of the U.N. Charter.  However, a practical issue any lawmaker faces is that it may be next to impossible to know with certainty where an attack is coming from.

The U.S. has endeavored to establish a legal framework for cyber warfare within its own government regarding policies and rules of engagement, but even there deliberations are “ongoing.”  This year, instead of waiting for answers from international bodies, the Pentagon clarified the U.S. view that these attacks may constitute acts of war.  Just recently, the U.S. joined efforts at the NATO cyber defense research center in Estonia, whose government was temporarily crippled by a cyber attack years ago that is presumed to have come from Russia.  Likewise, in the past week the U.K. announced its own Cyber Security Strategy that voiced intentions to pursue an aggressive cyber defense policy.

Still, one important consideration should emerge while we’re worrying about cyber warfare: there is still no evidence of any significant physical harm befalling anyone due to cyber warfare.  These worries can be overblown.  There are few, if any, successful cases of industrial cyber sabotage; even Stuxnet probably destroyed only a tenth of its target centrifuges.  On the other hand, many people, even experts, may have vested interests in raising cyber security fears.  As engaging and serious as this discussion sounds, we should take cyber security threats with a grain of salt.  Before considering retaliation, we especially need to make sure that the problem is not simply a glitch within our own equipment controls.



December 7th, 2011 at 4:12 pm

Opposition to SOPA Gaining Momentum


The Stop Online Piracy Act (along with the Senate version, known as the Protect IP Act), introduced last month in the House by Representative Lamar Smith, aims to level a significant blow against offshore “rogue sites” that host copyrighted material. SOPA would allow the U.S. Attorney General to obtain a court order against these sites and serve the relevant ISPs, which would then be required to block access to the sites. Furthermore, the Act would allow the DOJ and copyright owners to seek court orders blocking payments to these sites from online ad networks and payment processors.

The bill, however, has been the target of harsh criticism from lawmakers and industry titans alike. The principal arguments against the Act are that it is detrimental to the economy and impinges on free speech. Indeed, the bill’s detractors point out that it “strikes at the very core of the internet” by introducing a walled-garden ecosystem of censorship that sacrifices openness and innocent user-generated content for the whims of Hollywood. However, not even all of Hollywood is united in its effort to pass the bill — even international pop sensation Justin Bieber has offered his own two cents, calling for Senator Amy Klobuchar (a sponsor of the Senate version) to be “locked up [and] put away in cuffs…”  The bill’s aim is to crack down on intellectual property infringement, but it has been lambasted for its overaggressiveness (a disproportionate response to a relatively narrow issue of online IP infringement) and its detrimental impact on user-generated services such as YouTube. Moreover, critics point out that the ingenuity and entrepreneurship epitomized by an open Internet would be compromised as start-up costs for websites would rise dramatically to implement the compliance measures demanded by SOPA.

The growing chorus of anger, disappointment, and skepticism directed toward SOPA puts its future (at least in its current form) in grave doubt. Indeed, as an issue so inflammatory that it has united tech giants like Yahoo! and Google, prominent lawmakers, Justin Bieber, and even a group of illustrious law professors, SOPA might find itself taken offline soon enough.



December 7th, 2011 at 3:59 pm