Attend the Retail Law 2014 Conference – October 15-17, 2014, Charlotte, North Carolina

The National Law Review is pleased to bring you information about the upcoming Retail Law Conference:

Retail Law 2014: At the Intersection of Technology and Retail Law

Register Today!

When

October 15-17, 2014

Where

Charlotte, NC

The 2014 Retail Law Conference takes place October 15-17 in Charlotte, NC. This year’s program is stronger than ever with relevant, compelling and interactive sessions focused on the legal issues affecting retailers. In partnership with the Retail Litigation Center (RLC), RILA will host legal counsel from leaders in the retail industry for the fifth annual event.

This year’s Retail Law Conference will feature issues at the intersection of technology and law, how the two spaces interact and the impact that they have on retailers. Topics will likely include:

  • Anatomy of a Data Breach: Prevention & Response
  • Privacy: Understanding New Technologies & Data Collection
  • Advertising Practices: Enforcement & Social Media
  • ADA Implications for New Technologies
  • Legal Implications for Future Payment Technologies
  • Policies & Procedures of The “Omnichannel” Age
  • Patent Litigation “Heat Maps”
  • Union Organizing Campaigns
  • Wage & Hour Litigation
  • EEOC Enforcement
  • Foreign Corrupt Practices Act
  • Corporate Governance & Disclosure
  • Election 2014
  • Dueling Views of The U.S. Supreme Court
  • Legal Ethics

The Retail Law Conference is open to executives from retail and consumer goods product manufacturing companies. All others, such as law firms and service providers, must sponsor in order to attend, and can do so by contacting Tripp Taylor at tripp.taylor@rila.org.

Fix These 4 Problems on Your Blog to Maximize Search Engine Optimization

Consultsweb

1.   Make It Useful

Write about something that will provide value to the person reading it. Write with your audience in mind. Keep the writing simple but professional. Remember: Your clients do not have a law degree and if your writing confuses them, they will look for answers elsewhere.

Think about your client base. Are they middle-aged women, seniors, mostly male, or individuals with physical disabilities? Target your posts to their interests, needs and questions. Avoid general articles that could be for anyone. Have the reader in mind when you are writing content and show your expertise. Answer the reader’s unasked questions.

Targeting a specific demographic will also help with social signals, as content that speaks to a defined audience is more likely to be shared and to earn links. Fluff content may get you some rankings for staying relevant and regularly updating your website, but if an actual human visits your site and does not find value in what you have posted, chances of a return visit are slim—and your ultimate goal should be people returning to your site based on the quality of its content.

2.   Make It Local

Think about your local area and any news or hot topics that you can cover in blog updates. Can you add unique value to these stories? The more your topics and writing speak to your local audience, the more engaged they will be with your site. Write about charities or events you are involved in.

3.   Engage the Audience

How does the page look? Content is not just words. Content can be text, images, videos, charts, graphics and data. Use video and image assets to help tell your story. Visual content engages the user and instills respect for the quality of the information presented on the page.

Also, long blog posts allow you to fit a lot of good information and keywords onto the page, but you will need to divide them into short sections or into an FAQ format to enable visitors to scan the page for the information they seek.

Use your employees for feedback. Ask them to share your content. If three months have passed and no one has shared anything, it is time to start asking why.

4.   Get the Technical Details Right

Effective title structure is key to generating good organic traffic and a high-quality user experience. Utilizing headings (H1, H2, H3), alt text and description tagging is important for user experience (UX) and for search engines to understand and optimally display your content.
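
To make these checks concrete, here is a minimal Python sketch, using the requests and BeautifulSoup libraries, that flags the basics above on a single page. It is illustrative only, not a full audit tool, and the URL is a placeholder:

```python
# A minimal on-page SEO audit sketch (illustrative; a real audit checks far more).
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def audit_page(url: str) -> list[str]:
    """Flag basic heading, alt-text and meta-description problems."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    problems = []

    # One H1 per page is the usual best practice.
    h1s = soup.find_all("h1")
    if len(h1s) != 1:
        problems.append(f"expected exactly one <h1>, found {len(h1s)}")

    # Every image should carry descriptive alt text.
    for img in soup.find_all("img"):
        if not img.get("alt"):
            problems.append(f"image missing alt text: {img.get('src')}")

    # The meta description is what search engines often display under the title.
    meta = soup.find("meta", attrs={"name": "description"})
    if meta is None or not meta.get("content"):
        problems.append("missing meta description")

    return problems

for issue in audit_page("https://www.example.com/blog/post"):  # placeholder URL
    print(issue)
```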

10 Insights You Want to Gain from Your Social Media Monitoring

The Rainmaker Institute

If you are participating in social media for your law firm, you should also be monitoring whether or not your time investment is paying dividends.

You should be creating Google Alerts or searching on Social Mention for the name of your law firm and the names of your attorneys at least once a month.  Create alerts for the areas of law you practice as well.  The social media blog site Buffer recommends you keep these 10 insights in mind when reviewing your results:

  • Sentiment — Are mentions generally positive, neutral or negative?
  • Questions — Look for questions people may have that you can answer in your social media posts or blogs.
  • Feedback — If you see feedback on Avvo, Yelp or another site that directly affects your firm, you need to listen and respond appropriately.
  • Links — Keep track of who is retweeting or reposting your content, and of who is linking back to you.
  • Pain points — Absorb what people are talking about online that concerns them and use that information to inform your future posts.
  • Content — This is where your alerts for your practice areas come in handy. Use them to mine for topics of interest to your target market.
  • Trends — Recent court decisions or trending news in your practice area should be included in your posts so it is clear you are on top of the trends.
  • Media — Journalists spend a lot of time online, so pay attention to the areas they are covering that might give you an opportunity to reach out as a spokesperson on those subjects.
  • Influencers — Are there certain individuals who keep popping up in your feeds? An industry influencer may be someone it would be advantageous for you to know.
  • Advocates — Monitoring is a great way to find and recognize the people who are talking positively about you online.
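
As a practical matter, Google Alerts can deliver results as an RSS feed, which makes the monthly review easy to script. Here is a minimal Python sketch using the feedparser library, assuming you have set your alert to deliver via RSS; the feed URL below is a placeholder for your own alert’s address:

```python
# Pull the latest mentions from a Google Alerts RSS feed for monthly review.
# Requires: pip install feedparser
# The URL below is a placeholder; substitute the RSS address of your own alert.
import feedparser

ALERT_FEED = "https://www.google.com/alerts/feeds/YOUR_ALERT_ID/YOUR_FEED_ID"

feed = feedparser.parse(ALERT_FEED)
for entry in feed.entries:
    # Each entry is one new mention of your firm, attorney or practice area.
    print(entry.title)
    print(entry.link)
    print("-" * 40)
```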

Alice v. CLS Bank: Supreme Court Continues to Grope in Dark for Contours of Abstract Idea Exception

Schwegman Lundberg Woessner

In Alice Corp. v. CLS Bank Int’l (2014), the Supreme Court unanimously affirmed the one-paragraph per curiam opinion of the en banc Federal Circuit, which found all claims of U.S. Patent Nos. 5,970,479, 6,912,510, 7,149,720, and 7,725,375 invalid under 35 U.S.C. § 101 for being directed to an abstract idea.

The Court based its affirmance on an application of a two-step process outlined in Mayo Collaborative Services v. Prometheus Labs, 566 U.S. ___ (2012). The first step is the determination of whether the claims are directed to a patent-ineligible concept such as a law of nature, natural phenomenon, or abstract idea. This step implicitly includes the identification of the concept at issue. The second step is to determine if the claims recite “an element or combination of elements that is sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the ineligible concept itself.”

The Court avoided providing “the precise contours of the ‘abstract ideas’ category” by relying on the similarity between Alice’s claims for intermediated settlement and Bilski’s claims for hedging. The Court characterized the Bilski claims as “a method of organizing human activity.” Accordingly, while only three justices signed Justice Sotomayor’s concurrence, stating that “any claim that merely describes a method of doing business does not qualify as a ‘process’ under §101,” the unanimous decision does implicate business methods as likely directed to abstract ideas.

At the Federal Circuit, the splintered opinion included a four-judge dissent that argued that the system claims should be patent-eligible even though the method claims were not. The Supreme Court disagreed with this view, finding that if the system claims were treated differently under §101, “an applicant could claim any principle of the physical or social sciences by reciting a computer system configured to implement the relevant concept” which would “make the determination of patent eligibility depend simply on the draftsman’s art.” To convey patent-eligibility, the claims at issue must be “significantly more than an instruction to apply the abstract idea … using some unspecified, generic computer.”

In my previous post regarding the oral argument before the Supreme Court, I noted that the Court seemed to be looking for reasonable and clear rules regarding the limits of the abstract idea exception to patentable subject matter, but did not get such a rule from any party. Perhaps as a result, this case was decided purely on its similarity to Bilski, and without providing much guidance as to the scope of the exception.

My thanks to Domenico Ippolito for this posting.

© 2014 Schwegman, Lundberg & Woessner, P.A. All Rights Reserved.

Wisconsin’s Password Protection Law Mandates Review of Policies and Practices

Godfrey Kahn

Wisconsin has joined the ranks of other states that have limited the circumstances under which employees or applicants can be required to provide access to their personal Internet accounts. The Social Media Protection Act (2013 Wisconsin Act 208) became effective April 16, 2014. The new law makes it illegal for an employer to request or require an employee or applicant to disclose personal Internet account access information. A parallel prohibition within the Act applies to educational institutions and landlords.

A “personal Internet account” is defined as an Internet-based account that is created and used by an individual exclusively for purposes of personal communications. With the passage of the Act, employers are now prohibited from:

  • Requesting or requiring an employee or applicant, as a condition of employment, to disclose access information to the individual’s personal Internet account or to ask the individual to grant access to or allow observation of that account.
  • Discharging or otherwise discriminating against an employee for exercising his/her right to refuse to disclose personal Internet account access information.
  • Refusing to hire an applicant because the individual did not disclose personal Internet account access information.

While the law primarily protects the privacy of employees and applicants, it also offers employers a limited degree of protection. Specifically, employers can:

  • Request or require an employee to disclose access information to the employer in order for the employer to gain access to or operate an employer-provided (or employer-paid) electronic communications device provided by virtue of the employee’s employment relationship or used for the employer’s business purposes.
  • Discharge or discipline employees for transferring proprietary or confidential information or financial data to the employee’s personal Internet account without the employer’s authorization.
  • If the employer has reasonable cause, conduct an investigation or require an employee to cooperate in an investigation of any alleged unauthorized transfer of the employer’s proprietary or confidential information or financial data to the employee’s personal Internet account or to conduct an investigation of any other alleged employment-related misconduct, violation of the law or violation of the employer’s work rules. During the investigation, the employer can require the employee to grant access to or allow observation of the employee’s personal Internet account, but may not require the employee to disclose access information for that account.
  • Restrict or prohibit an employee’s access to certain Internet sites, while using an employer-provided (or paid for) electronic communications device, or while the employee is using the employer’s network or other resources.
  • View, access or use information about an employee or applicant that can be obtained without access information or that is available in the public domain.
  • Request or require an employee to disclose his or her personal electronic mail address.

A person who has been discharged, expelled, disciplined, or otherwise discriminated against for reasons provided under this law may file a complaint with Wisconsin’s Department of Workforce Development (the “DWD”).

Employers should make sure that their employment policies and practices conform to the requirements of 2013 Wisconsin Act 208. In particular, employers should make sure that employees using employer-provided or paid for electronic communication devices for business purposes do not have any expectation of privacy in such devices or the communications that flow from them.

In addition, employees should be informed that they are prohibited from disclosing proprietary or confidential information or financial data through personal Internet accounts, and that employer-provided accounts may be used to share such information only for legitimate business reasons. Lastly, employers should make sure that their employment policies are clear in reserving the right to conduct, and in expecting employees to cooperate in, investigations concerning the unauthorized transfer of proprietary, confidential or financial information.

Apple Inc. v. Rensselaer Polytechnic Institute and Dynamic Advances, LLC, Decision Denying Institution

Drinker Biddle

Takeaway: A voluntary dismissal of a litigation without prejudice will not nullify service of a complaint for purposes of 35 U.S.C. § 315(b) if that litigation is immediately continued in a consolidated case.

In its Decision, the Board denied institution of the Inter Partes Review because the Petition was not filed within the one-year statutory period of 35 U.S.C. § 315(b).  The dates of service of two different complaints were the primary focus of the Board.

In a first patent litigation, Patent Owner (Dynamic Advances) filed a complaint on October 19, 2012. Dynamic Advances, LLC v. Apple Inc., No. 1:12-cv-01579-DNH-CFH (N.D.N.Y.)(Dynamic I).  The complaint for the first litigation was served on Petitioner (Apple) on October 23, 2012.  In a second patent litigation, Rensselaer Polytechnic Institute and Dynamic Advances jointly filed a complaint on June 3, 2013. Rensselaer Polytechnic Inst. & Dynamic Advances, LLC v. Apple Inc., No. 1:13-cv-00633-DNH-DEP (N.D.N.Y.)(Dynamic II).  The complaint for the second litigation was served on Petitioner (Apple) on June 6, 2013.

The Petition in the instant proceeding was filed on January 3, 2014.  Thus, the service date of October 23, 2012 for the first litigation (Dynamic I) was more than 12 months prior to the filing of the Petition, whereas the service date of June 6, 2013 for the second litigation (Dynamic II) was less than 12 months prior to the filing date of the Petition.  The Board found that service of the first complaint on October 23, 2012, rather than service of the second complaint on June 6, 2013, controlled for purposes of determining whether the requested inter partes review was time-barred under 35 U.S.C. § 315(b).  Because the service date of October 23, 2012 for the first litigation (Dynamic I) was more than 12 months prior to the filing of the Petition, the Board found that the Petition was not filed within the statutory period of 35 U.S.C. § 315(b).

The Board’s rationale in reaching this conclusion related to the fact that on July 22, 2013, the court ordered consolidation of Dynamic I and Dynamic II under Fed. R. Civ. P. 42.  In doing so, the court ordered that pursuant to a joint stipulation of the parties, Dynamic I was “dismissed without prejudice and the parties would proceed to litigate their claims and defenses in [Dynamic II].”

Petitioner argued that under the decision in Macauto U.S.A. v. BOS GmbH & KG, IPR2012-0004 (“holding that a voluntary dismissal without prejudice nullified service of the complaint for purposes of 35 U.S.C. § 315(b)”), service of the first complaint on October 23, 2012 was not effective.  According to Petitioner, as in Macauto, the facts of the present case have the effect of leaving the parties as if the first action had never been brought.

The Board disagreed, finding that “Dynamic I cannot be treated as if that case had never been filed under the rationale of Macauto.”  Instead, the Board found that it was “persuaded that the circumstances in the instant case weigh in favor of close scrutiny of the effect of the dismissal of Dynamic I, because that cause of action, although dismissed, was continued immediately in Dynamic II.”

This proceeding was the third time that Petitioner had petitioned for inter partes review against the ‘798 patent.  In IPR2014-00077, institution was denied.  IPR2014-00320 was filed concurrently with the petition for this proceeding.

Apple Inc. v. Rensselaer Polytechnic Institute and Dynamic Advances, LLC, IPR2014-00319
Paper 12: Decision Denying Institution of Inter Partes Review
Dated: June 12, 2014
Patent 7,177,798 B2
Before: Josiah C. Cocks, Bryan F. Moore, and Miriam L. Quinn
Written by: Moore
Related proceedings: IPR2014-00077; IPR2014-00320; Dynamic Advances, LLC v. Apple Inc., No. 1:12-cv-01579-DNH-CFH (N.D.N.Y.); Rensselaer Polytechnic Inst. & Dynamic Advances, LLC v. Apple Inc., No. 1:13-cv-00633-DNH-DEP (N.D.N.Y.)

Proposed Health Information Technology Strategy Aims to Promote Innovation

Sheppard Mullin

On April 7, 2014, the Food and Drug Administration (FDA), in consultation with the Office of the National Coordinator for Health Information Technology (ONC) and the Federal Communications Commission (FCC), released a draft report addressing a proposed strategy and recommendations on an “appropriate, risk-based regulatory framework pertaining to health information technology.”

This report, entitled “FDASIA Health IT Report: Proposed Strategy and Recommendations for a Risk-Based Framework”, was mandated by Section 618 of the Food and Drug Administration Safety and Innovation Act (FDASIA), and establishes a proposed blueprint for the regulation of health IT.  The FDA, ONC and FCC (the Agencies) noted that risk and controls on such risk should focus on health IT functionality, and proposed a flexible system for categorizing health IT and evaluating the risks and need for regulation for each category.

The Agencies set out four key priority areas: (1) promote the use of quality management principles, (2) identify, develop, and adopt standards and best practices, (3) leverage conformity assessment tools, and (4) create an environment of learning and continual improvement.

The Agencies are seeking public comment on the specific principles, standards, practices, and tools that would be appropriate as part of this regulatory framework.  In addition, the Agencies propose establishing a new Health IT Safety Center that would allow reporting of health IT-related safety events that could then be disseminated to the health IT community.

The Agencies also divided health IT into three broad functionality-based groups: (1) administrative, (2) health management, and (3) medical device. The Agencies noted that health IT with administrative functionality, such as admissions, billing and claims processing, scheduling, and population health management, poses limited or no risk to the patient, and as a result no additional oversight is proposed.

Health IT with health management functionality, such as health information and data exchange, data capture and encounter documentation, provider order entry, clinical decision support, and medication management, would be subject to the regulatory framework proposed in the report.  In addition, the FDA stated that a product with health management functionality that meets the statutory definition of a medical device would not be subject to additional oversight by the FDA.

The report put a spotlight on clinical decision support (CDS), which provides health care providers and patients with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care.  The report concluded that, for the most part, CDS does not replace clinicians’ judgment, but rather assists clinicians in making timely, informed, higher quality decisions.  These functionalities are categorized as health management IT, and the Agencies believe most CDS falls into this category.

However, certain CDS software – those that are medical devices and present higher risks – warrant the FDA’s continued focus and oversight.  Medical device CDS includes computer aided detection/diagnostic software, remote display or notification of real-time alarms from bedside monitors, radiation treatment planning, robotic surgical planning and control, and electrocardiography analytical software.

The FDA intends to focus its oversight on health IT with medical device functionality, such as described above with respect to medical device CDS.  The Agencies believe that this type of functionality poses the greatest risk to patient safety, and therefore would be the subject of FDA oversight.  The report recommends that the FDA provide greater clarity related to medical device regulation involving health IT, including: (1) the distinction between wellness and disease-related claims, (2) medical device accessories, (3) medical device CDS software, (4) medical device software modules, and (5) mobile medical apps.

The comment period remains open through July 7, 2014, and therefore the report’s recommendations may change based on comments received by the Agencies. In the meantime, companies in the clinical software and mobile medical apps industry should follow the final guidance recently published by the FDA with respect to regulation of their products.

The White House Big Data Report & Apple’s iOS 8: Shining the Light on an Alternative Approach to Privacy and Biomedical Research

Drinker Biddle

Big data derives from “the growing technological ability to capture, aggregate, and process an ever-greater volume, velocity, and variety of data.”[i] Apple’s just-released iOS 8 software development kit (“iOS 8 SDK”) highlights this growth.[ii] The iOS 8 SDK touts over 4,000 application programming interface calls including “greater extensibility” and “new frameworks.”[iii] For example, HomeKit and HealthKit, two of these new frameworks, serve as hubs for data generated by other applications and provide user interfaces to manage that data and related functionality.[iv] HealthKit’s APIs “provide the ability for health and fitness apps to communicate with each other … to provide a more comprehensive way to manage your health and fitness.”[v] HomeKit integrates home automation functions in a central location within the iOS device, allowing users to lock/unlock doors, turn on/off cameras, change or view thermostat settings, turn lights on/off, open garage doors and more – all from a single app.[vi] The iOS 8 SDK will inevitably lead to the development of countless apps and other technologies that “capture, aggregate, and process an ever-greater volume, velocity, and variety of data,” contributing immense volumes of data to the already-gargantuan big data ecosystem.

In the context of our health and wellbeing, big data – which includes, but is definitely not limited to, data generated by future iOS 8-related technologies – has boundless potential and can have a momentous impact on biomedical research, leading to new therapies and improved health outcomes. The big data reports recently issued by the White House and the President’s Council of Advisors on Science and Technology (“PCAST”) echo this fact. However, these reports also emphasize the challenges posed by applying the current approach to privacy to big data, including the focus on notice and consent.

After providing some background, this article examines the impact of big data on medical research. It then explores the privacy challenges posed by focusing on notice and consent with respect to big data. Finally, this article describes an alternative approach to privacy suggested by the big data reports and its application to biomedical research.

Background

On May 1, 2014, the White House released its report on big data, “Big Data: Seizing Opportunities, Preserving Values” (“WH Report”). The WH Report was supported by a separate effort and report produced by PCAST, “Big Data and Privacy: A Technological Perspective” (“PCAST Report”).[vii] The privacy implications of the reports for biomedical research – an area where big data can arguably have the greatest impact – are significant.

Notice and consent provide the foundation upon which privacy laws are built. Accordingly, it can be difficult to envision a situation where these conceptual underpinnings, while still important, begin to yield to a new approach. However, that is exactly what the reports suggest in the context of big data. As HealthKit and iOS 8 SDK demonstrate, we live in a world where health data is generated in numerous ways, both inside and outside of the traditional patient-doctor relationship. If given access to all this data, researchers can better analyze the effectiveness of existing therapies, develop new therapies faster, and more accurately predict and suggest measures to avoid the onset of disease, all leading to improved health outcomes. However, existing privacy laws often restrict researchers’ access to such data without first soliciting and obtaining proof of appropriate notice and consent.[viii] Focusing on individual notice and consent in some instances can be unnecessarily restrictive and can stall the discovery and development of new therapies. This is exacerbated by the fact that de-identification (or pseudonymization) – a process typically relied upon to alleviate some of these obstacles – is losing its effectiveness or would require stripping data of much meaningful value. Recognizing these flaws, the WH Report suggests a new approach where the focus is taken off of the collection of data and turned to the ways in which parties, including biomedical researchers, use data – an approach that allows researchers to maximize the possibilities of big data, while protecting individual privacy and ensuring that data is processed in a reasonable way.

The Benefits of Big Data to Biomedical Research

Before discussing why a new approach to privacy in the context of big data and biomedical research may be necessary, it is first important to understand the role of big data in research. As noted, the concept of big data encompasses “the growing technological ability to capture, aggregate, and process an ever-greater volume, velocity, and variety of data.”[ix] The word “growing” is essential here, as the sources of data contributing to the big data ecosystem are extensive and will continue to expand, especially as Internet-enabled devices such as those contemplated by HomeKit continue to develop.[x] These sources include not only the traditional doctor-patient relationship, but also consumer-generated and other non-traditional sources of health data such as those contemplated by HealthKit, including wearable technologies (e.g., Fitbit), patient-support sites (e.g., PatientsLikeMe.com), wellness programs, electronic/personal health records, etc. These sources expand even further when health data is combined with non-health data, such as lifestyle and financial data.[xi]

The WH Report recognizes that these new abilities to collect and process information have the potential to bring about “unexpected … advancements in our quality of life.”[xii] The ability of researchers to analyze this vast amount of data can help “identify clinical treatments, prescription drugs, and public health interventions that may not appear to be effective in smaller samples, across broad populations, or using traditional research methods.”[xiii] In some instances, big data can in fact be the necessary component of a life-changing discovery.[xiv]

Further, the WH Report finds that big data holds the key to fully realizing the promise of predictive medicine, whereby doctors and researchers can fully analyze an individual’s health status and genetic information to better predict the onset of disease and/or how an individual might respond to specific therapies.[xv] These findings have the ability to affect not only particular patients but also family members and others with a similar genetic makeup.[xvi] It is worth noting that the WH Report highlights bio-banks and their role in “confronting important questions about personal privacy in the context of health research and treatment.”[xvii]

In summary, big data has a profound impact on biomedical research and, as a necessary result, on those that benefit from the fruits of researchers’ labor. The key to its realization is a privacy regime that can unlock for researchers vast amounts of different types of data obtained from diverse sources.

Problems With the Current Approach

Where the use of information is not directly regulated by the existing privacy framework, providing consumers with notice and choice regarding the processing of their personal information has become the de facto rule. Where the collection and use of information is specifically regulated (e.g., HIPAA, FCRA, etc.), notice and consent is required whenever information is used or shared in a way not permitted under the relevant statute. For example, under HIPAA, a doctor can disclose a patient’s personal health information for treatment purposes (permissible use) but would need to provide the patient with notice and obtain consent before disclosing the same information for marketing purposes (impermissible use). To avoid this obligation, entities seeking to share data in a way not described in the privacy notice and/or permitted under applicable law can de-identify the data, to purportedly make the data anonymous (for example, John Smith drives a white Honda and makes $55,000/year (identified) v. Person X drives a white Honda and makes $55,000/year (de-identified)).[xviii] Except under very limited circumstances (e.g., HIPAA limited data sets), the requirements regarding notice and consent apply equally to biomedical research as to more commercial uses.
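
To make the de-identification example above concrete, here is a minimal Python sketch of pseudonymization (the record, fields and function names are all hypothetical). The identifying field is swapped for a random ID held in a separate key table; as footnote [xviii] explains, destroying that key table would turn the pseudonymized data into anonymized data:

```python
# A minimal pseudonymization sketch (hypothetical record and fields).
# Identifying fields are swapped for a random ID; the key table maps the ID
# back to the identity and should be held separately, under lock and key.
# Destroy the key table and the data becomes anonymized, not just pseudonymized.
import uuid

key_table = {}  # pseudonym -> identity

def pseudonymize(record: dict) -> dict:
    pseudonym = str(uuid.uuid4())
    key_table[pseudonym] = record["name"]
    out = dict(record)
    del out["name"]
    out["person_id"] = pseudonym
    return out

record = {"name": "John Smith", "car": "white Honda", "salary": 55000}
print(pseudonymize(record))
# e.g. {'car': 'white Honda', 'salary': 55000, 'person_id': '3f2b...'}
```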

In the context of big data, the first problem with notice and consent is that it places an enormous burden on the individual to manage all of the relevant privacy notices applicable to the processing of that individual’s data. In other words, it requires individuals to analyze each and every privacy notice applicable to them (which could be hundreds, if not more), determine whether those data collectors share information and with whom, and then attempt to track that information down as necessary. As the PCAST Report not-so-delicately states, “[i]n some fantasy world, users actually read these notices, understand their legal implications (consulting their attorneys if necessary), negotiate with other providers of similar services to get better privacy treatment, and only then click to indicate their consent. Reality is different.”[xix] This is aggravated by the fact that relevant privacy terms are often buried in privacy notices using legalese and provided on a take-it-or-leave-it basis.[xx] Although notice and consent may still play an important role where there is a direct connection between data collectors and individuals, it is evident why such a model loses its meaning when information is collected from a number of varied sources and those analyzing the data have no direct relationship with individuals.

Second, even where specific privacy regulations apply to the collection and use of personal information, such rules rarely consider or routinely allow for the disclosure of that information to researchers for biomedical research purposes, thus requiring researchers to independently provide notice and obtain consent. As the WH Report points out, “[t]he privacy frameworks that currently cover information now used in health may not be well suited to … facilitate the research that drives them.”[xxi] And as previously noted, often times biomedical researchers require non-health information, including lifestyle and financial data, if they want to maximize the benefits of big data. “These types of data are subjected to different and sometimes conflicting federal and state regulation,” if any regulation at all.[xxii]

Lastly, the ability to overcome de-identification is becoming easier due to “effective techniques … to pull the pieces back together through ‘re-identification’.”[xxiii] In fact, the very techniques used to analyze big data for legitimate purposes are the same advanced algorithms and technologies that allow re-identification of otherwise anonymous data.[xxiv] Moreover, “meaningful de-identification may strip the data of both its usefulness and the ability to ensure its provenance and accountability.”[xxv] In other words, de-identification is not as useful as it once was and further stripping data in an effort to overcome this fact could well extinguish any value the data may have (using the example above, car type and salary may still provide marketers with meaningful information (e.g., individuals with a similar salary may be interested in that car type), but the information “white Honda” alone is worthless). [xxvi]
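
A toy Python illustration of that re-identification risk (all records hypothetical): the very quasi-identifiers that keep de-identified data useful can be joined against a public dataset to restore identities.

```python
# Toy re-identification by linkage (hypothetical data): joining "anonymous"
# records against a public dataset on the remaining fields restores identities.
deidentified = [
    {"person_id": "X", "car": "white Honda", "salary": 55000},
    {"person_id": "Y", "car": "red Ford", "salary": 72000},
]
public = [
    {"name": "John Smith", "car": "white Honda", "salary": 55000},
    {"name": "Jane Doe", "car": "red Ford", "salary": 72000},
]

for row in deidentified:
    for known in public:
        if (known["car"], known["salary"]) == (row["car"], row["salary"]):
            print(f"{row['person_id']} is probably {known['name']}")
```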

The consequences of all this are either 1) biomedical researchers are deprived of valuable data or provided meaningless de-identified data, or 2) individuals have no idea that their information is being processed for research purposes. Both the benefits and obstacles relating to big data and biomedical research led to the WH Report’s recognition that we may need “to look closely at the notice and consent framework” because “focusing on controlling the collection and retention of personal data, while important, may no longer be sufficient to protect personal privacy.”[xxvii] Further, as the PCAST Report points out, and as reflected in the WH Report, “notice and consent is defeated by exactly the positive benefits that big data enables: new, non-obvious, unexpectedly powerful uses of data.”[xxviii] So what does this new approach look like?

Alternative Approach to Big Data: Focus on Use, Not Collection[xxix]

The WH Report does not provide specific proposals. Rather, it suggests a framework for a new approach to big data that focuses on the type of use of such data and associated security controls, as opposed to whether notice was provided and consent obtained at the point of its collection. Re-focusing attention to the context and ways big data is used (including the ways in which results generated from big data analysis are used) could have many advantages for individuals and biomedical researchers. For example, as noted above, the notice and consent model places the burden on the individual to manage all of the relevant privacy notices applicable to the processing of that individual’s data and provides no backstop when those efforts fail or no attempt to manage notice provisions is made. Where the attention focuses on the context and uses of data, it shifts the burden of managing privacy expectations to the data collector and it holds entities that utilize big data (e.g., researchers) accountable for how data is used and any negative consequences it yields.[xxx]

The following are some specific considerations drawn from the reports regarding how a potential use framework might work:

  • Provide that all information used by researchers, regardless of the source, is subject to reasonable privacy protections similar to those prescribed under HIPAA.[xxxi] For example, any data relied upon by researchers can only be used and shared for biomedical research purposes.
  • Create special authorities or bodies to determine reasonable uses for big data utilized by researchers so as to realize the potential of big data while preserving individual privacy expectations.[xxxii] This would include recognizing and controlling harmful uses of data, including any actions that would lead to an adverse consequence to an individual.[xxxiii]
  • Develop a central research database for big data accessible to all biomedical researchers, with universal standards and architecture to facilitate controlled access to the data contained therein.[xxxiv]
  • Provide individuals with notice and choice whenever big data is used to make a decision regarding a particular individual.[xxxv]
  • Where individuals may not want certain data to enter the big data ecosystem, allow them to create standardized data use profiles that must be honored by data collectors. Such profiles could prohibit the data collector from sharing any information associated with such individuals or their devices.
  • Require reasonable security measures to protect data and any findings derived from big data, including encryption requirements.[xxxvi] 
  • Regulate inappropriate uses or disclosures of research information, and make parties liable for any adverse consequences of privacy violations.[xxxvii]

By offering these suggestions for public debate, the WH and PCAST reports have only initiated the discussion of a new approach to privacy, big data and biomedical research. Plainly, these proposals bring with them numerous questions and issues that must be answered and resolved before any transition can be contemplated (notably, what are appropriate uses and who determines this?).

Conclusion

Technologies utilizing the iOS 8 SDK, including HealthKit and HomeKit, illustrate the technological growth contributing to the big data environment. The WH and PCAST reports exemplify the endless possibilities that can be derived from this environment, as well as some of the important privacy issues affecting our ability to harness these possibilities. The reports constitute their authors’ consensus view that the existing approach to big data and biomedical research restricts the true potential big data can have on research, while providing individuals with little-to-no meaningful privacy protections. Whether the suggestions contained in the WH and PCAST reports will be – or should be – further developed is an open question that will undoubtedly lead to a healthy debate. Yet, in the case of the PCAST Report, the sheer diversity of players recognizing big data’s potential and associated privacy implications – including, but not limited to, leading representatives and academics from the Broad Institute of Harvard and MIT, UC-Berkeley, Microsoft, Google, National Academy of Engineering, University of Texas at Austin, University of Michigan, Princeton University, Zetta Venture Partners, National Quality Forum and others – provides hope that this potential will one day be realized – in a way that appropriately protects our privacy.[xxxviii]

WH Report Summary: click here.

PCAST Report Summary: click here.

[i] WH Report, p. 2.

[ii] See Apple’s June 2, 2014, press release, Apple Releases iOS 8 SDK With Over 4,000 New APIs, last found at http://www.apple.com/pr/library/2014/06/02Apple-Releases-iOS-8-SDK-With-Over-4-000-New-APIs.html.

[iii] Id.

[iv] Id.

[v] Id.

[vi] Id.

[vii] The White House and PCAST issued summaries of their respective reports, including their policy recommendations, which can be easily found at the links following this article.

[viii] WH Report, p. 7.

[ix] WH Report, p. 2.

[x] WH Report, p. 5.

[xi] WH Report, p. 23.

[xii] WH Report, p. 3.

[xiii] WH Report, p. 23.

[xiv] WH Report, p. 6 (the WH Report includes two research-related examples of the impact of big data on research, including a study whereby the large number of data sets made “the critical difference in identifying the meaningful genetic variant for a disease.”).

[xv] WH Report, p. 23.

[xvi] WH Report, p. 23.

[xvii] WH Report, p. 23.

[xviii] In privacy law, “anonymous” data is often considered a subset of “de-identified” data. “Anonymized” data means the data has been de-identified and is incapable of being re-identified by anyone. “Pseudonymized” data, the other primary subset of “de-identified” data, replaces identifying data elements with a pseudonym (e.g., random id number), but can be re-identified by anyone holding the key. If the key was destroyed, “pseudonymized” data would become “anonymized” data.

[xix] PCAST Report, p. 38.

[xx] PCAST Report, p. 38.

[xxi] WH Report, p. 23.

[xxii] WH Report, p. 23.

[xxiii] WH Report, p. 8.

[xxiv] WH Report, p. 54; PCAST Report, pp. 38-39.

[xxv] WH Report, p. 8.

[xxvi] The PCAST Report does recognize that de-identification can be “useful as an added safeguard.” See PCAST Report, p. 39. Further, other leading regulators and academics consider de-identification a key part of protecting privacy, as it “drastically reduces the risk that personal information will be used or disclosed for unauthorized or malicious purposes.” Dispelling the Myths Surrounding De-identification: Anonymization Remains a Strong Tool for Protecting Privacy, Ann Cavoukian, Ph.D. and Khaled El Emam, Ph.D. (2011), last found at http://www.ipc.on.ca/images/Resources/anonymization.pdf. Drs. Cavoukian and El Emam argue that “[w]hile it is clearly not foolproof, it remains a valuable and important mechanism in protecting personal data, and must not be abandoned.” Id.

[xxvii] WH Report, p. 54.

[xxviii] PCAST Report, p. 38; WH Report, p. 54.

[xxix] This approach is not one of the official policy recommendations contained in the WH Report. However, as discussed above, the WH Report discusses the impact of big data on biomedical research, as well as this new approach, extensively. Further, to the extent order has any meaning, the first recommendation made in the PCAST Report is that “[p]olicy attention should focus more on the actual uses of big data and less on its collection and analysis.” PCAST Report, pp. 49-50.

[xxx] WH Report, p. 56.

[xxxi] WH Report, p. 24.

[xxxii] WH Report, p. 23.

[xxxiii] PCAST Report, p. 44.

[xxxiv] WH Report, p. 24.

[xxxv] PCAST Report, pp. 48-49.

[xxxvi] PCAST Report, p. 49.

[xxxvii] PCAST Report, pp. 49-50.

[xxxviii] It must be noted that many leading regulators and academics have a different view on the importance and role of notice and consent, and argue that these principles in fact deserve more focus. See, e.g., The Unintended Consequences of Privacy Paternalism, Ann Cavoukian, Ph.D., Dr. Alexander Dix, LLM, and Khaled El Emam, Ph.D. (2014), last found at http://www.privacybydesign.ca/content/uploads/2014/03/pbd-privacy_paternalism.pdf.

Getting Lawyers Up to Speed: The Basics for Understanding ITIL®

Morgan Lewis

As more clients use ITIL®—a standard for best practices in providing IT services—IT lawyers who are unfamiliar with the standard should familiarize themselves with its basic principles. This is particularly important as clients are integrating ITIL terminology and best practices (or modified versions thereof) into their service delivery and support best practices as well as the structure and substantive provisions of their IT outsourcing and services contracts.

Most IT professionals are well versed in ITIL and its framework. They will introduce the concepts into statements of work and related documents with the expectation that their lawyers and sourcing professionals understand the basics well enough to identify issues and requirements and negotiate in a meaningful way.

With this in mind, it is time for IT lawyers and sourcing professionals to get up to speed. Below are some of the basics to get started:

  • ITIL—which stands for the “Information Technology Infrastructure Library”—is a set of best practice publications for IT service management that are designed to provide guidance on the provision of quality IT services and the processes and functions used to support them.
  • ITIL was created by the UK government almost 20 years ago and is being adopted widely as the standard for best practice in the provision of IT services. The current version of ITIL is known as the ITIL 2011 edition.
  • The ITIL framework is designed to cover the full lifecycle of IT and is organized around five lifecycle stages:
    1. Service strategy
    2. Service design
    3. Service transition
    4. Service operation
    5. Continual service improvement
  • Each lifecycle stage, in turn, has associated common processes. For example, processes under the “service design” stage include:
    1. Design coordination
    2. Service catalogue management
    3. Service level management
    4. Availability management
    5. Capacity management
    6. IT service continuity management
    7. Information security management systems
    8. Supplier management
  • The ITIL glossary defines each of the lifecycle stages and each of the covered processes.
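
For readers who think better in code, the lifecycle structure above can be pictured as a simple mapping. Here is an illustrative Python sketch; the stage and process names come from the list above, and the other stages’ processes (which ITIL also defines) are left empty for brevity:

```python
# The ITIL 2011 service lifecycle as a simple data structure.
# Only the "service design" processes listed above are filled in;
# each of the other stages has its own associated processes.
ITIL_LIFECYCLE = {
    "service strategy": [],
    "service design": [
        "design coordination",
        "service catalogue management",
        "service level management",
        "availability management",
        "capacity management",
        "IT service continuity management",
        "information security management systems",
        "supplier management",
    ],
    "service transition": [],
    "service operation": [],
    "continual service improvement": [],
}

for stage, processes in ITIL_LIFECYCLE.items():
    print(f"{stage}: {len(processes)} processes listed")
```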

ITIL® is a registered trademark of AXELOS Limited.

HEARTBLEED: A Lawyer’s Perspective on the Biggest Programming Error in History

Jackson Lewis

By now you have probably heard about Heartbleed, which is the biggest security threat to the Internet that we have ever seen. The bottom line of Heartbleed is that for the past two years most web sites claiming to be secure, shown by the HTTPS address (the S added to the end of the usual HTTP address was intended to indicate a site secured by encryption), have not been secure at all. Information on those sites could easily have been bled out by any semi-skilled hacker who discovered the defect. That includes your user names and passwords, maybe even your credit card and bank account information.

For this reason every security expert that I follow, or have talked to about this threat, advises everyone to change ALL of their online passwords. No one knows who might have acquired this information in the past two years. Unfortunately, the nature of this software defect made it possible to steal data in an untraceable manner. Although most web sites have upgraded their software by now, they were exposed for two years. The only safe thing to do is assume your personal information has been compromised.

Change All of Your Passwords

After you go out and change all of your passwords – YES – DO IT NOW – please come back and I will share some information on Heartbleed that you may not find anywhere else. I will share a quick overview of a lawyer’s perspective on a disaster like this and what I think we should do about it.

Rules of the Internet

One of the things e-discovery lawyers like me are very interested in, and concerned about, is data security. Heartbleed is the biggest threat anyone has ever seen to our collective online security, so I have made a point of trying to learn everything I could about it. My research is ongoing, but I have already published a detailed report on my personal blog. I have also been pondering policy changes, and changes in the laws governing the Internet, that should be made to avoid this kind of breach in the future.

I have been thinking about laws and the Internet since the early 1990s. As I said then, the Internet is not a no-man’s-land of irresponsibility. It has laws and is subject to laws, not only the laws of countries, but of multiple independent non-profit groups such as ICANN. I first pointed this out as a young lawyer in my 1996 book for Macmillan, Your Cyber Rights and Responsibilities: The Law of the Internet, Chapter 3 of Que’s Special Edition Using the Internet. Anyone who commits crimes on the Internet must and will be prosecuted, no matter where their bodies are located. The same goes for negligent actors, be they human, corporate, or robot. I fully expect that several lawsuits will be filed as a result of Heartbleed. Time will tell if any of them succeed. Many of the facts are still unknown.

One Small Group Is to Blame for Heartbleed

The surprising thing I learned in researching Heartbleed is that this huge data breach was caused by a small mistake in software programming by a small unincorporated association called OpenSSL. This is the group that maintains the open-source code that two-thirds of the Internet relies upon for encryption, in other words, to secure web sites from data breach. It is free software and the people who write the code are unpaid volunteers.

According to the Washington Post, OpenSSL’s headquarters — to the extent one exists at all — is the home of the group’s only employee, a part-timer at that, located on Sugarloaf Mountain, Maryland. He lives and works amid racks of servers and an industrial-grade Internet connection. Craig Timberg, Heartbleed bug puts the chaotic nature of the Internet under the magnifying glass (Washington Post, 4/9/14).

The mistake that caused Heartbleed was made by a lone math student in Münster, Germany. He submitted an add-on to the code that was supposed to correct prior mistakes he had found. His add-on contained what he later described as a trivial error. Trivial or not, this is the biggest software coding error of all time based upon impact. What makes the whole thing suspicious is that he made this submission at one minute before midnight on New Year’s Eve 2011.
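
To see why the error was so easy to make and so devastating, here is a simplified Python simulation of the class of bug involved. The real flaw was a missing bounds check in OpenSSL’s C heartbeat code; everything below is illustrative, with process memory modeled as one flat buffer. The vulnerable server echoes back as many bytes as the request claims to contain, without checking how many it actually received:

```python
# A simplified simulation of the Heartbleed class of bug (illustrative only;
# the actual flaw was a missing bounds check in OpenSSL's C code).
# Process memory is modeled as one flat buffer, with unrelated secrets
# sitting right next to the heartbeat payload.
SECRETS = b"user=admin;password=hunter2;session=77f3"
MEMORY = bytearray(5) + SECRETS  # 5-byte payload buffer, then the secrets

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    MEMORY[: len(payload)] = payload      # store the incoming payload
    return bytes(MEMORY[:claimed_len])    # BUG: trusts the claimed length

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    if claimed_len > len(payload):        # the missing, "trivial" check
        raise ValueError("claimed length exceeds actual payload")
    MEMORY[: len(payload)] = payload
    return bytes(MEMORY[:claimed_len])

# An honest client sends 5 bytes and asks for 5 back. An attacker sends
# 5 bytes but asks for more, receiving whatever lies beyond its own payload.
print(heartbeat_vulnerable(b"hello", 5))            # b'hello'
print(heartbeat_vulnerable(b"hello", len(MEMORY)))  # b'hello' + leaked secrets
```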

Once OpenSSL received the code, it was reviewed before being added to the next version of the software. Here is where we learn another surprising fact: the code was reviewed by only one person, and he too missed the simple error. Then the revised code, with its hidden defect, was released onto an unsuspecting world. No one detected it until March 2014, when paid Google security employees finally noticed the blunder. So much for the basic crowd-sourcing rationale behind the open source software movement.

Conclusion

Placing the security of the Internet in the hands of only one open source group, OpenSSL, a group with only four core members, is too high a risk in today’s world. It may have made sense back in the early nineties when an open Internet first started, but not now. Heartbleed proves this. This is why I have called upon leaders of the Internet, including open source advocates, privacy experts, academics, governments, political leaders and lawyers, to meet to consider various solutions to tighten the security of the Internet. We cannot continue business as usual when it comes to Internet data security.
