Can Artificial Intelligence Assist with Cybersecurity Management?

AI has great capacity both to harm and to protect in a cybersecurity context. As with the development of any new technology, the benefits that correct and successful use of AI provides are inevitably coupled with the need to safeguard information and to prevent misuse.

Using AI for good – key themes from the European Union Agency for Cybersecurity (ENISA) guidance

ENISA published a set of reports last year focused on AI and the mitigation of cybersecurity risks. Here we consider the main themes raised and offer our thoughts on how AI can be used advantageously*.

Using AI to bolster cybersecurity

In Womble Bond Dickinson’s 2023 global data privacy law survey, half of respondents told us they were already using AI for everyday business activities ranging from data analytics to customer service assistance and product recommendations. However, beyond day-to-day tasks, AI’s ‘ability to detect and respond to cyber threats and the need to secure AI-based application[s]’ makes it a powerful tool for defending against cyber-attacks when used correctly. In one report, ENISA recommended a multi-layered framework that guides readers through the operational processes to be followed, coupling existing knowledge with best practices to identify missing elements. This step-by-step approach to good practice aims to ensure the trustworthiness of cybersecurity systems.

Using machine-learning algorithms, AI can detect both known and unknown threats in real time, continuously learning and scanning for potential threats. Cybersecurity software that does not use AI can detect only known malicious code, making it insufficient against more sophisticated threats. By analyzing the behavior of malware, AI can pinpoint specific anomalies that standard cybersecurity programs may overlook. The deep-learning-based program NeuFuzz, for example, is considered a highly favorable platform for vulnerability searches compared with standard machine-learning approaches, demonstrating the rapidly evolving nature of AI itself and of the products on offer.
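
To make this concrete, here is a minimal sketch of the kind of behavior-based anomaly detection described above, using scikit-learn’s IsolationForest. The telemetry features, baseline values and contamination setting are illustrative assumptions only, not a reference to NeuFuzz or any particular product:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-process telemetry: [files touched/min, network bytes/min]
baseline = np.array([[12, 300], [15, 280], [10, 350], [14, 310], [11, 295]])
new_events = np.array([
    [13, 305],      # resembles normal activity
    [480, 90000],   # sudden spike: a possible exfiltration pattern
])

# Fit on known-good behavior, then flag events that do not fit that profile.
model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)
print(model.predict(new_events))  # 1 = fits the baseline, -1 = anomaly
```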

A key recommendation is that AI systems should be used as an additional element alongside existing ICT and security systems and practices. Businesses must be aware of their continuing responsibility to have effective risk management in place, with AI assisting alongside for further mitigation. The reports do not set new standards or legislative parameters but instead emphasize the need for targeted guidelines, best practices and foundations which support cybersecurity and, in turn, the trustworthiness of AI as a tool.

Amongst other factors, cybersecurity management should consider accountability, accuracy, privacy, resiliency, safety and transparency. It is not enough to rely on traditional cybersecurity software, especially where AI can be readily implemented for the prevention, detection and mitigation of threats such as spam, intrusion and malware. Traditional models do exist but, as ENISA highlights, they are usually designed to target or address ‘specific types of attack’, which ‘makes it increasingly difficult for users to determine which are most appropriate for them to adopt/implement’. The report highlights that businesses need a pre-existing foundation of cybersecurity processes which AI can work alongside to reveal additional vulnerabilities. A collaborative network of traditional methods and new AI-based recommendations allows businesses to be best prepared against the ever-developing nature of malware and technology-based threats.

In the US in October 2023, the Biden administration issued an executive order with significant data security implications. Amongst other things, the executive order requires that developers of the most powerful AI systems share safety test results with the US government, that the government will prepare guidance for content authentication and watermarking to clearly label AI-generated content and that the administration will establish an advanced cybersecurity program to develop AI tools and fix vulnerabilities in critical AI models. This order is the latest in a series of AI regulations designed to make models developed in the US more trustworthy and secure.

Implementing security by design

A security by design approach centers efforts around security protocols from the basic building blocks of IT infrastructure. Privacy-enhancing technologies, including AI, assist security by design structures and effectively allow businesses to integrate necessary safeguards for the protection of data and processing activity, but should not be considered as a ‘silver bullet’ to meet all requirements under data protection compliance.

This will be most effective for start-ups and businesses in the initial stages of developing or implementing their cybersecurity procedures, as conceiving a project built around security by design will take less effort than adding security to an existing one. However, we are seeing rapid growth in the number of businesses using AI. More than one in five of our survey respondents (22%), for instance, started to use AI in the past year alone.

However, existing structures should not be overlooked; adding AI to a current cybersecurity system should improve its functionality, processing and performance. This is evidenced by AI’s capability to analyze huge amounts of data at speed and provide a clear, granular assessment of key performance metrics. This high-level, high-speed analysis allows businesses to offer tailored products and improved accessibility, resulting in a smoother retail experience for consumers.

Risks

Despite the benefits, AI is by no means a perfect solution. A machine-learning system acts on what it has been taught through its programming and training data, leaving the potential for its results to reflect an unconscious bias in its interpretation of data. It is also important that businesses comply with applicable regulations such as the EU GDPR, the UK Data Protection Act 2018, the anticipated EU Artificial Intelligence Act and general consumer duty principles.

Cost benefits

Alongside reducing the cost of reputational damage from cybersecurity incidents, it is estimated that UK businesses using some form of AI in their cybersecurity management reduced costs related to data breaches by £1.6m on average. Using AI or automated responses within cybersecurity systems was also found to have shortened the average ‘breach lifecycle’ by 108 days, saving time, cost and significant business resources. Further development of penetration testing tools which specifically focus on AI is required to explore vulnerabilities and assess behaviors, which is particularly important where personal data is involved, as a company’s data integrity and confidentiality are at risk.

Moving forward

AI can be used to our advantage, but it should not be seen as entirely replacing existing or traditional models for managing cybersecurity. While AI is an excellent long-term assistant that can save users time and money, it cannot be relied upon alone to make decisions. In this transitional period away from more traditional systems, it is important to have a secure IT foundation. As WBD suggests in our 2023 report, having established governance frameworks and controls for the use of AI tools is critical for data protection compliance and an effective cybersecurity framework.

Despite suggestions that AI’s reputation is degrading, it is a powerful and evolving tool which could not only improve your business’s approach to cybersecurity and privacy but, through analysis of data, help to anticipate behaviors and predict trends. The use of AI should be approached with caution, but done correctly it could deliver immeasurable benefits.

___

* While a portion of ENISA’s commentary focuses on the medical and energy sectors, the principles are relevant to all sectors.

Avoid Losing Money: Achieve Full Remote Access with Speed, Security & Scalability

Are your employees fully capable of accomplishing the same work they could have done while in the office? Ideally, their in-office PC experience can be duplicated (securely) at home without any latency issues. If that’s not the case, your organization could be losing money through lost billable hours, underutilization of existing solutions, and similar inefficiencies. It’s paramount for the bottom line that your remote access capabilities allow your employees to achieve maximum efficiency when conducting business remotely.

There are three key areas of focus that need attention when planning a cost-effective and capable remote access strategy: speed, security, and scalability. “Putting effective security measures in place today along with mitigating remote access performance issues and ensuring the ability to adjust user access and scale will undoubtedly put you at a competitive advantage and positively affect your organization’s bottom line,” says Donnie W. Downs, President & CEO of Plan B Technologies, Inc.

First and foremost, the degree of reliance on your employees’ end user devices has a significant impact on what must be considered. There are two paths an organization can take to provide remote access to end users. The first is to allow end user devices to join the network as though they were plugged into a network jack in the office. The most common way to achieve this type of direct access is through a Virtual Private Network (VPN). The second approach is to present desktops and applications in a virtual session. This allows applications to run on server horsepower in the organization’s datacenter and be used remotely from an end user device. Several products provide this capability, usually referred to as VDI or Terminal Services.

These options result in significantly different architectures. The primary difference is the level of dependency on the end user’s device. The VPN-style solution relies heavily on the device’s capability and configuration: the device must provide all of the applications and computing power each end user requires. The VDI/Terminal Services-style solution requires much less from the end user’s device, which serves simply as an interface to the remote session. The tradeoff is that a much more robust infrastructure is required in the organization’s datacenter or cloud.

Regardless of which way your organization is providing remote access today (VPN or virtual session), the speed, security and scalability (or lack thereof) will directly impact your cost.

SPEED

“To remain productive while working remotely, users need the same capabilities and performance they have when in the office,” says Downs. This translates to several things. They should be able to access all of the software and data they need. They should be able to access these resources using familiar workflows that don’t require separate remote access training. However, the most commonly missed requirement is that the remote access platform needs to provide adequate performance, so the remote access experience feels just like being in the office. Any latency will no doubt cause frustration and could ultimately affect your billable hours.

For direct access platforms this is a simple, yet potentially expensive, formula. The remote access system needs to provide enough bandwidth that the client device can access application servers, file servers, and other resources without slowing down. On the datacenter side, this means designing sufficient connectivity to the on-prem or cloud environments. Connectivity on the client side, however, will always be more unpredictable. Slow residential connections, unreliable Wi-Fi, and inconsistent cellular coverage are all challenges that need to be addressed with this type of solution.

Performance within VDI/Terminal Services platforms is much more complex. Similar to direct access, we need to provide adequate bandwidth from the client to the remote access systems. However, this type of system typically has less demanding network requirements than a direct access system.  Advanced VDI/Terminal Services platforms also offer a wide variety of protocol optimizations that can accommodate high latency or low bandwidth connections. That’s only half of the puzzle though. Because the user is accessing a virtual session running in the datacenter, that session needs to provide adequate performance. At a basic level, this means that the CPU and memory must be sized correctly to accommodate the number of users. But the platform also needs to match in-office capabilities such as multiple monitors, 3D acceleration, printing, and video capability. Full-featured VDI/Terminal Services platforms provide these capabilities, but they must be properly designed and deployed to realize their full potential.
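
As a rough illustration of that sizing exercise, the sketch below estimates how many hosts a VDI deployment might need from per-user assumptions. Every figure here (vCPUs and RAM per user, overcommit ratio, host specification) is a hypothetical placeholder; real numbers come from vendor sizing guides and pilot measurements:

```python
import math

users = 200
vcpu_per_user = 2      # assumed vCPUs per virtual desktop
ram_gb_per_user = 8    # assumed RAM per virtual desktop
overcommit = 4         # assumed vCPU:pCPU ratio for task-worker workloads
host_cores = 64        # physical cores per host
host_ram_gb = 768      # RAM per host

hosts_for_cpu = math.ceil(users * vcpu_per_user / (host_cores * overcommit))
hosts_for_ram = math.ceil(users * ram_gb_per_user / host_ram_gb)
hosts_needed = max(hosts_for_cpu, hosts_for_ram) + 1  # +1 host for failover

print(f"CPU-bound: {hosts_for_cpu} hosts, RAM-bound: {hosts_for_ram} hosts")
print(f"Provision {hosts_needed} hosts (N+1)")  # RAM, not CPU, drives this example
```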

SECURITY

“Remote access can expose your business to many risks – but it doesn’t have to be this way,” says Downs. “Whether your organization is supporting 10 remote users or 1,000, you need to provide the necessary access while guarding your organization against outside threats.” For successful and secure remote access, it’s necessary to manage the risks and eliminate your blind spots to prevent data loss, phishing, or ransomware attacks.

On the surface, securing remote access environments requires many of the same basic considerations as any other public-facing infrastructure. These include mandatory multifactor authentication, application-aware firewalls, and properly configured encryption to guard your organization against security risks and protect corporate data. Remote access security is unique, however, due to the risk introduced by the devices your employees use. These can include IT-managed devices that are allowed to leave the office or employee-owned unmanaged devices. If your remote access end users are logging in with their own devices over the internet, there is room for a security breach unless these three measures are in place:

1/ Conduct Endpoint Posture Assessments

For direct access remote connectivity, security is especially relevant since the end user device is given a conduit into the organization’s network. Ideally, devices connecting to a direct access solution should be IT-managed devices. This ensures that IT has the capability to control the endpoint configuration and security. However, there are many environments where direct access is required by employee-owned devices. In either case, the remote access solution should have the capability to perform endpoint posture assessment. This allows an end user device to be scanned for compliance with security policies, which should include current operating system patches, valid and updated endpoint protection/antivirus, and enabled device encryption. The results of the scan (or assessment) can then be used to ensure that only properly secured devices are able to connect to the network.
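
The enforcement logic behind such an assessment can be sketched in a few lines. This is a simplified model under stated assumptions: in a real product the remote access client collects the device report and the gateway enforces the verdict, and the attribute names below are invented for illustration:

```python
# Required posture: each attribute must match before access is granted.
REQUIRED_POLICY = {
    "os_patched": True,       # operating system updates current
    "av_running": True,       # endpoint protection active and updated
    "disk_encrypted": True,   # device encryption enabled
}

def assess(device_report: dict) -> bool:
    """Return True only if the device satisfies every required check."""
    failures = [name for name, required in REQUIRED_POLICY.items()
                if device_report.get(name) != required]
    if failures:
        print(f"Access denied; failed checks: {failures}")
        return False
    return True

# A device with encryption disabled is refused a connection.
assess({"os_patched": True, "av_running": True, "disk_encrypted": False})
```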

2/ Protect Against Key Logging and Other Malware

VDI/Terminal Services remote access systems rely on the end user device only as an interface to the virtual session. As a result, these solutions provide the ability to insulate the organization’s network from the end user device more than a direct access connection. Administrators can and should limit the ability for end user devices to pass file, print, and clipboard data, effectively preventing a compromise of the end user device from affecting the infrastructure. However, there is a gap in this insulation that is almost always overlooked. Malware on the end user device with key logging, screen recording, or remote-control capability can still allow the VDI/Terminal Services session to be compromised. Advanced VDI/Terminal Services platforms have protection for these types of attacks built in. This should be a mandatory requirement when selecting and implementing a VDI/Terminal Services solution.

3/ Deploy Robust Endpoint Protection

Regardless of the overall remote access strategy, both IT-managed and employee-owned end user devices should have robust endpoint protection. Traditional definition-based antivirus products no longer provide sufficient protection. These should be combined with, or replaced by, solutions that perform both behavior analytics and advanced persistent threat (APT) protection.

SCALABILITY

Capacity planning for remote access can be very challenging. It is often one of the most varied or “bursty” workloads in an organization. Under normal operations it is used for dedicated remote workers or employees traveling. But when circumstances require large numbers of employees to be remote, as they do today, demand for these capabilities will spike. Proper planning can allow remote access systems to deal with this and keep the entire organization productive, regardless of where they are working.

There are three key elements that affect the scalability of direct access and VDI/Terminal Services solutions: software licensing, network bandwidth, and hardware capacity. It’s important to remember that these three pieces are interconnected. Upgrading any one of them will likely also require an upgrade to the others.

1/ Software Licensing

Licensing for remote access solutions is generally straightforward. There are variables in choosing the correct license type, such as feature set and concurrent vs. named users. But in terms of sizing, direct access and VDI/Terminal Services solutions are usually licensed based on the number of users they can service. Proper scalability relies on having a license pool large enough to support the entire user base. Purchasing licensing for an entire user base can be prohibitively expensive, so some vendors offer more flexible licensing. Two common flexible models are subscription and burst licenses. Subscription licensing can often be increased or decreased as needed. Burst licensing allows for the purchase of a break-glass pool of licenses that permits an increased user count for a short period of time. Both of these models allow remote access systems to expand rapidly to accommodate emergency remote workers. This type of flexibility should be considered when selecting a remote access platform to help save your organization from unnecessary costs.

2/ Network Bandwidth

Bandwidth and hardware flexibility are much more difficult to plan for. In direct access and VDI/Terminal Services scenarios alike, each additional user requires more WAN bandwidth and more hardware resources. WAN circuits for on-prem datacenters can require significant lead time to provision and resize. Solutions such as SD-WAN or burstable circuits can provide flexibility and agility in these circuits, but this must be carefully preplanned and not left as a to-do item for when the expanded capacity is actually needed.

3/ Hardware Capacity

Hardware scaling has similar limitations. Adding remote access capacity can require hardware resources ranging from larger firewalls to additional servers depending on the specific remote access platform. Expanding physical firewall and server platforms requires the procurement of additional hardware. During widespread emergencies, unpredictable availability of hardware can lead to significant delays in getting this done. Fortunately, most remote access platforms allow the integration of on-prem and public cloud-based deployments. A common strategy is to deploy systems into the public cloud as an extension of the normal production environment. These systems can then be spun up when needed to provide the additional capacity. This is a complex architecture that requires diligent design and planning, but it can provide a vast amount of scalability at reasonable cost.

Positioning your organization with a remote access strategy that can scale will save you time and money in the future. It’s unknown how long the effects of the coronavirus pandemic will impact the landscape of remote work for organizations. Planning and preparing to continue to conduct business with a secure and robust remote access strategy in place will put you ahead of your competition.


© 2020 Plan B Technologies, Inc. All Rights Reserved.

For more on remote working see the Labor & Employment section of the National Law Review.

You Can be Anonymised But You Can’t Hide

If you think there is safety in numbers when it comes to the privacy of your personal information, think again. A recent study in Nature Communications found that, given a large enough dataset, anonymised personal information is only an algorithm away from being re-identified.

Anonymised data refers to data that has been stripped of any identifiable information, such as a name or email address. Under many privacy laws, anonymising data allows organisations and public bodies to use and share information without infringing an individual’s privacy, or having to obtain necessary authorisations or consents to do so.

But what happens when that anonymised data is combined with other data sets?

Researchers behind the Nature Communications study found that using only 15 demographic attributes can re-identify 99.98% of Americans in any incomplete dataset. While fascinating for data analysts, individuals may be alarmed to hear that their anonymised data can be re-identified so easily and potentially then accessed or disclosed by others in a way they have not envisaged.
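
To see why a handful of attributes is so identifying, consider the toy sketch below. The records and quasi-identifiers are invented for illustration; the point is simply that the fraction of unique individuals climbs quickly as attributes are combined:

```python
from collections import Counter

# Toy "anonymised" records: direct identifiers removed, but quasi-identifiers
# (ZIP code, birth year, gender) remain.
records = [
    {"zip": "20001", "birth_year": 1985, "gender": "F"},
    {"zip": "20001", "birth_year": 1985, "gender": "M"},
    {"zip": "20002", "birth_year": 1990, "gender": "F"},
    {"zip": "20002", "birth_year": 1990, "gender": "F"},
]

def uniqueness(records, attrs):
    """Fraction of records made unique by a given attribute combination."""
    counts = Counter(tuple(r[a] for a in attrs) for r in records)
    unique = sum(1 for r in records if counts[tuple(r[a] for a in attrs)] == 1)
    return unique / len(records)

print(uniqueness(records, ["zip"]))                          # 0.0: no one unique
print(uniqueness(records, ["zip", "birth_year", "gender"]))  # 0.5: half unique
```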

Re-identification techniques were recently used by the New York Times. In March this year, it pulled together various public data sources, including an anonymised dataset from the Internal Revenue Service, in order to reveal a decade’s worth of Donald Trump’s tax return data showing negative adjusted gross income. His tax returns had been the subject of great public speculation.

What does this mean for business? Depending on the circumstances, it could mean that simply removing personal information such as names and email addresses is not enough to anonymise data, and that treating such data as anonymised may put an organisation in breach of many privacy laws.

To address these risks, companies like Google, Uber and Apple use “differential privacy” techniques, which add “noise” to datasets so that individuals cannot be re-identified, while still allowing access to the aggregate outcomes they need.
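
As a simplified illustration of the idea (not any of those companies’ actual implementations), the sketch below applies the classic Laplace mechanism to a count query. The dataset and the epsilon value are assumptions chosen for the example:

```python
import numpy as np

def dp_count(values, predicate, epsilon, rng=None):
    """Differentially private count of records matching `predicate`.

    A count query has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: a noisy answer to "how many people in the dataset are over 40?"
ages = [23, 45, 31, 52, 38, 61, 29, 47]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```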

It may come as a surprise to the many businesses using data anonymisation as a quick and cost-effective way to de-personalise data that more may be needed to protect individuals’ personal information.

If you would like to know more about other similar studies, check out our previous blog post ‘The Co-Existence of Open Data and Privacy in a Digital World’.

Copyright 2019 K & L Gates
This article is by Cameron Abbott of  K&L Gates.
For more on internet privacy, see the National Law Review Communications, Media & Internet law page.

R2-Me2? How Should Employers Respond to Job Loss Caused by Robots?

There is no question that the use of robots, along with other similar technological changes in the workplace, will continue to eliminate or downgrade jobs. Indeed, it has been estimated that on average, each workplace robot eliminates six jobs. This article will examine (1) the impact such changes will have on women and (2) whether these changes can be subject to legal challenge as prohibited gender discrimination.

The gender pay gap has become a much debated and controversial topic, but this article will stay out of the fray. However, data produced by the consultancy firm Korn Ferry concluded that women in Britain make just one percent less than men who have the same function and level at the same employer. Therefore, some have suggested that the main problem today is not necessarily unequal pay for equal work, but rather the forces and circumstances that lead women to be forced into and stuck in lower-paid jobs at lower-paying organizations. According to The Economist, this is the true gender “pay gap,” which is a much more difficult problem to solve.

Current research suggests that, unless addressed, this gender “pay gap” will increase rather than decrease. Last month, a report to the World Economic Forum in Davos, Switzerland, predicted that “artificial intelligence, robotics and other digital developments,” and the consequent job disruption, are likely to widen rather than diminish the gender pay gap. See “Towards a Reskilling Revolution” at p. 3. Citing statistics published by the federal Bureau of Labor Statistics, the report concluded that of the 1.4 million U.S. jobs that are projected to become “disrupted” because of robotic and other technological changes between now and 2026, 57 percent will be held by women.

But there could be good news for those concerned about gender wage equality. The report argued that an increased awareness of the impending effect of these changes, along with a concerted plan by governments, employers, businesses, labor unions and employees themselves to retrain or “reskill” disrupted workers, will present displaced workers with more opportunities for jobs at higher pay levels than their current wages. In a summary of the main report, the authors predicted that reskilling programs could result in higher wages for 74 percent of all currently at-risk female workers, thereby narrowing the gender wage gap.

Although job disruption from the use of robots will disproportionately impact women, the fact that it will result from “business necessity” means that employees may have difficulty mounting successful legal challenges to this practice. Instead, thoughtful employers may want to focus their energies on learning more about the scope of this looming problem and, wherever possible, create or participate in programs that will reskill impacted employees and thereby provide them with more opportunities in expanding and higher-paid occupations. Nor is this an unrealistic proposition: overall, in the decade ending in 2026, the U.S. job market is projected to create 11.5 million new jobs.

 

© 2018 Foley & Lardner LLP
This post was written by Gregory W. McClune of Foley & Lardner LLP.

Office for Civil Rights (OCR) to Begin Phase 2 of HIPAA Audit Program

McDermott Will & Emery Law Firm

The U.S. Department of Health and Human Services’ Office for Civil Rights (OCR) will soon begin a second phase of audits (Phase 2 Audits) of compliance with Health Insurance Portability and Accountability Act of 1996 (HIPAA) privacy, security and breach notification standards (HIPAA Standards) as required by the Health Information Technology for Economic and Clinical Health (HITECH) Act. Unlike the pilot audits during 2011 and 2012 (Phase 1 Audits), which focused on covered entities, OCR will conduct Phase 2 Audits of both covered entities and business associates. The Phase 2 Audit Program will focus on areas of greater risk to the security of protected health information (PHI) and pervasive noncompliance based on OCR’s Phase 1 Audit findings and observations, rather than a comprehensive review of all of the HIPAA Standards. The Phase 2 Audits are also intended to identify best practices and uncover risks and vulnerabilities that OCR has not identified through other enforcement activities. OCR will use the Phase 2 Audit findings to identify technical assistance that it should develop for covered entities and business associates. In circumstances where an audit reveals a serious compliance concern, OCR may initiate a compliance review of the audited organization that could lead to civil money penalties.

The following sections summarize OCR’s Phase 1 Audit findings, describe the Phase 2 Audit program and identify steps that covered entities and business associates should take to prepare for the Phase 2 Audits.

Phase 1 Audit Findings

OCR audited 115 covered entities under the Phase 1 Audit program, with the following aggregate results:

  • There were no findings or observations for only 11% of the covered entities audited;
  • Despite representing just more than half of the audited entities (53%), health care providers were responsible for 65% of the total findings and observations;
  • The smallest covered entities were found to struggle with compliance under all three of the HIPAA Standards;
  • Greater than 60% of the findings or observations were Security Standard violations, and 58 of 59 audited health care provider covered entities had at least one Security Standard finding or observation even though the Security Standards represented only 28% of the total audit items;
  • Greater than 39% of the findings and observations related to the Privacy Standards were attributed to a lack of awareness of the applicable Privacy Standard requirement; and
  • Only 10% of the findings and observations were attributable to a lack of compliance with the Breach Notification Standards.

The Phase 2 Audit Program

Selection of Phase 2 Audit Recipients

Unlike the Phase 1 Audit Program, which focused on covered entities, OCR will conduct Phase 2 Audits of both covered entities and business associates.  OCR has randomly selected a pool of 550–800 covered entities through the National Provider Identifier database and America’s Health Insurance Plans’ databases of health plans and health care clearinghouses.  OCR will issue a mandatory pre-audit screening survey to the pool of covered entities this summer.  The survey will address organization size measures, location, services and contact information.  Based on the responses, the agency will select approximately 350 covered entities, including 232 health care providers, 109 health plans and 9 health care clearinghouses, for Phase 2 Audits.  OCR intends to select a wide range of covered entities and will conduct the audits between October 2014 and June 2015.

OCR will notify and send data requests to the 350 selected covered entities this fall.  The data requests will ask the covered entities to identify and provide contact information for their business associates.  OCR will select the business associates that will participate in the Phase 2 Audits from this pool.

Audit Process

OCR will audit approximately 150 of the 350 selected covered entities and 50 of the selected business associates for compliance with the Security Standards, 100 covered entities for compliance with the Privacy Standards and 100 covered entities for compliance with the Breach Notification Standards.  OCR will initiate the Phase 2 Audits of covered entities by sending the data requests this fall and then initiate the Phase 2 Audits of business associates in 2015.

Covered entities and business associates will have two weeks to respond to OCR’s audit request.  The data requests will specify the content, file names and other documentation requirements, and the auditors may contact the covered entities and business associates for clarifications or additional documentation.  OCR will only consider current documentation that is submitted on time.  Failure to respond to a request could lead to a referral to the applicable OCR Regional Office for a compliance review.

Unlike the Phase 1 Audits, OCR will conduct the Phase 2 Audits as desk reviews with an updated audit protocol and not on-site at the audited organization.  OCR will make the Phase 2 Audit protocol available on its website so that entities may use it for internal compliance assessments.

The Phase 2 Audits will target HIPAA Standards that were sources of high numbers of non-compliance in the Phase 1 Audits, including:  risk analysis and risk management; content and timeliness of breach notifications; notice of privacy practices; individual access; Privacy Standards’ reasonable safeguards requirement; training to policies and procedures; device and media controls; and transmission security.  OCR also projects that Phase 2 Audits in 2016 will focus on the Security Standards’ encryption and decryption requirements, facility access control, breach reports and complaints, and other areas identified by earlier Phase 2 Audits.  Phase 2 Audits of business associates will focus on risk analysis and risk management and breach reporting to covered entities.

OCR will present the organization with a draft audit report to allow management to comment before it is finalized.  OCR will then take into account management’s response and issue a final report.

What Should You Do to Prepare for the Phase 2 Audits?

Covered entities and business associates should take the following steps to ensure that they are prepared for a potential Phase 2 Audit:

  • Confirm that the organization has recently completed a comprehensive assessment of potential security risks and vulnerabilities to the organization (the Risk Assessment);
  • Confirm that all action items identified in the Risk Assessment have been completed or are on a reasonable timeline to completion;
  • Ensure that the organization has a complete inventory of business associates for purposes of the Phase 2 Audit data requests;
  • If the organization has not implemented any of the Security Standards’ addressable implementation standards for any of its information systems, confirm that the organization has documented (i) why any such addressable implementation standard was not reasonable and appropriate and (ii) all alternative security measures that were implemented;
  • Ensure that the organization has implemented a breach notification policy that accurately reflects the content and deadline requirements for breach notification under the Breach Notification Standards;
  • Health care provider and health plan covered entities should ensure that they have a compliant Notice of Privacy Practices and not only a website privacy notice;
  • Ensure that the organization has reasonable and appropriate safeguards in place for PHI that exists in any form, including paper and verbal PHI;
  • Confirm that workforce members have received training on the HIPAA Standards that are necessary or appropriate for a workforce member to perform his/her job duties;
  • Confirm that the organization maintains an inventory of information system assets, including mobile devices (even in a bring your own device environment);
  • Confirm that all systems and software that transmit electronic PHI employ encryption technology or that the organization has documented a risk analysis supporting the decision not to employ encryption;
  • Confirm that the organization has adopted a facility security plan for each physical location that stores or otherwise has access to PHI, in addition to a security policy that requires a physical security plan; and
  • Review the organization’s HIPAA security policies to identify any actions that have not been completed as required (e.g., physical security plans, disaster recovery plan, emergency access procedures, etc.).

Proposed Health Information Technology Strategy Aims to Promote Innovation

Sheppard Mullin

On April 7, 2014, the Food and Drug Administration (FDA), in consultation with the Office of the National Coordinator for Health Information Technology (ONC) and the Federal Communications Commission (FCC), released a draft report addressing a proposed strategy and recommendations on an “appropriate, risk-based regulatory framework pertaining to health information technology.”

This report, entitled “FDASIA Health IT Report: Proposed Strategy and Recommendations for a Risk-Based Framework”, was mandated by Section 618 of the Food and Drug Administration Safety and Innovation Act, and establishes a proposed blueprint for the regulation of health IT.  The FDA, ONC and FCC (the Agencies) noted that risk and the controls on such risk should focus on health IT functionality, and proposed a flexible system for categorizing health IT and evaluating the risks and need for regulation for each category.

The Agencies set out four key priority areas: (1) promote the use of quality management principles, (2) identify, develop, and adopt standards and best practices, (3) leverage conformity assessment tools, and (4) create an environment of learning and continual improvement.

The Agencies are seeking public comment on the specific principles, standards, practices, and tools that would be appropriate as part of this regulatory framework.  In addition, the Agencies propose establishing a new Health IT Safety Center that would allow reporting of health IT-related safety events that could then be disseminated to the health IT community.

The Agencies also divided health IT into three broad functionality-based groups: (1) administrative, (2) health management, and (3) medical device. The Agencies noted that health IT with administrative functionality, such as admissions, billing and claims processing, scheduling, and population health management pose limited or no risk to the patient, and as a result no additional oversight is proposed.

Health IT with health management functionality, such as health information and data exchange, data capture and encounter documentation, provider order entry, clinical decision support, and medication management, would be subject to the regulatory framework proposed in the report.  In addition, the FDA stated that even a product with health management functionality that meets the statutory definition of a medical device would not be subject to additional oversight by the FDA.

The report put a spotlight on clinical decision support (CDS), which provides health care providers and patients with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care.  The report concluded that, for the most part, CDS does not replace clinicians’ judgment, but rather assists clinicians in making timely, informed, higher-quality decisions.  These functionalities are categorized as health management IT, and the report concludes that most CDS falls into this category.

However, certain CDS software – those that are medical devices and present higher risks – warrant the FDA’s continued focus and oversight.  Medical device CDS includes computer aided detection/diagnostic software, remote display or notification of real-time alarms from bedside monitors, radiation treatment planning, robotic surgical planning and control, and electrocardiography analytical software.

The FDA intends to focus its oversight on health IT with medical device functionality, such as described above with respect to medical device CDS.  The Agencies believe that this type of functionality poses the greatest risk to patient safety, and therefore would be the subject of FDA oversight.  The report recommends that the FDA provide greater clarity related to medical device regulation involving health IT, including: (1) the distinction between wellness and disease-related claims, (2) medical device accessories, (3) medical device CDS software, (4) medical device software modules, and (5) mobile medical apps.

The comment period remains open through July 7, 2014, and therefore the report’s recommendations may change based on comments received by the Agencies. In the meantime, companies in the clinical software and mobile medical apps industry should follow the final guidance recently published by the FDA with respect to regulation of their products.



Getting Lawyers Up to Speed: The Basics for Understanding ITIL®

Morgan Lewis

As more clients use ITIL®—a standard for best practices in providing IT services—IT lawyers who are unfamiliar with the standard should familiarize themselves with its basic principles. This is particularly important as clients are integrating ITIL terminology and best practices (or modified versions thereof) into their service delivery and support best practices as well as the structure and substantive provisions of their IT outsourcing and services contracts.

Most IT professionals are well versed in ITIL and its framework. They will introduce the concepts into statements of work and related documents with the expectation that their lawyers and sourcing professionals understand the basics well enough to identify issues and requirements and negotiate in a meaningful way.

With this in mind, it is time for IT lawyers and sourcing professionals to get up to speed. Below are some of the basics to get started:

  • ITIL—which stands for the “Information Technology Infrastructure Library”—is a set of best practice publications for IT service management that are designed to provide guidance on the provision of quality IT services and the processes and functions used to support them.
  • ITIL was created by the UK government almost 20 years ago and has been widely adopted as the standard for best practice in the provision of IT services. The current version of ITIL is known as the ITIL 2011 edition.
  • The ITIL framework is designed to cover the full lifecycle of IT and is organized around five lifecycle stages:
    1. Service strategy
    2. Service design
    3. Service transition
    4. Service operation
    5. Continual service improvement
  • Each lifecycle stage, in turn, has associated common processes. For example, processes under the “service design” stage include:
    1. Design coordination
    2. Service catalogue management
    3. Service level management
    4. Availability management
    5. Capacity management
    6. IT service continuity management
    7. Information security management systems
    8. Supplier management
  • The ITIL glossary defines each of the lifecycle stages and each of the covered processes.

ITIL® is a registered trademark of AXELOS Limited.


Tri-Agency Health Information Technology Report Issued

Mintz Levin

On Thursday, April 3rd, the three federal agencies charged with regulating components of health information technology (“Health IT”) issued their long-awaited Health IT Report: Proposed Strategy and Recommendations for a Risk-Based Framework (the “Report”).  The Report seeks to develop a strategy to address a risk-based regulatory framework for health information technology that promotes innovation, protects patient safety, and avoids regulatory duplication.

Congress mandated the development of the Report as part of the 2012 Food and Drug Administration Safety and Innovation Act, requiring the Food and Drug Administration (“FDA”), the Office of the National Coordinator for Health Information Technology (“ONC”), and the Federal Communications Commission (“FCC”) to coordinate their efforts to regulate Health IT.  Notably, the Report identifies and distinguishes between three types of Health IT: (i) health administration Health IT, (ii) health management Health IT, and (iii) medical device Health IT.

The recommendations in the Report include continued interagency cooperation and collaboration, the creation of a public-private safety entity—the Health IT Safety Center—and a risk-based approach to the regulation of Health IT.  The Report emphasizes that the functionality of Health IT, and not the platform for the technology (mobile, cloud-based, or installed software), should drive the analysis of the risk and the regulatory controls on Health IT.

In very good news for the Health IT community, the Report included a recommendation that “no new or additional areas of FDA oversight are needed.”  The Report emphasized that even if the functionality of health management Health IT meets the statutory definition of a medical device, the FDA will not focus its oversight attention in this area.  The Report gives additional guidance on clinical decision support (“CDS”) tools, clarifying that a number of CDS tools can be categorized as health management Health IT and do not require further regulation by FDA.  However, the Report noted that certain types of CDS tools that are currently regulated as medical devices by the FDA would continue to be so regulated.  These FDA-regulated CDS tools include computer-aided detection and diagnostic software and robotic surgical planning and control tools.

The agencies intend to convene a public meeting on the proposed strategy within 90 days and to finalize the Report based on public input.

This article is by Ellen L. Janos of Mintz Levin.