The Increasing Role of Cybersecurity Experts in Complex Legal Disputes

The testimonies and guidance of expert witnesses have been known to play a significant role in high-stakes legal matters, whether it be the opinion of a clinical psychiatrist in a homicide case or that of a career IP analyst in a patent infringement trial. However, in today’s highly digital world—where cybercrimes like data breaches and theft of intellectual property are increasingly commonplace—cybersecurity professionals have become some of the most sought-after experts for a broadening range of legal disputes.

Below, we will explore the growing importance of cybersecurity experts to the litigation industry in more depth, including how their insights contribute to case strategies, the challenges of presenting technical and cybersecurity-related arguments in court, the specific qualifications that make an effective expert witness in the field of cybersecurity, and the best method for securing that expertise for your case.

How Cybersecurity Experts Help Shape Legal Strategies

Disputes involving highly complex cybercrimes typically require more technical expertise than most trial teams have on hand. The contributions of a qualified cybersecurity expert can therefore be transformative, helping you better understand the case, uncover critical evidence, and ultimately shape your overall strategy.

For example, in litigation stemming from a criminal data breach, defense counsel might retain an expert witness to evaluate the plaintiff’s cybersecurity policies and protective mechanisms in place at the time of the attack and determine their effectiveness and compliance with industry regulations or best practices. Similarly, an expert with in-depth knowledge of evolving data laws, standards, and disclosure requirements will be well suited to assessing a party’s liability in virtually any matter involving unauthorized access to protected information. Cybersecurity experts are also valuable during the discovery phase, when their experience with the systems involved can help uncover evidence related to a specific attack or breach that may have been initially overlooked.

We have already seen many instances in which the testimony and involvement of cybersecurity experts have shaped the overall direction of a legal dispute. Consider the Coalition for Good Governance, which recently rested its case as a plaintiff in a six-year battle with the state of Georgia over the security of touchscreen voting machines. Throughout the process, the organization relied heavily on the testimony of multiple cybersecurity experts who claimed to have identified vulnerabilities in the state’s voting technology. If these testimonies prove persuasive, they could not only sway the ruling in the plaintiffs’ favor but also lead to entirely new policies, changing the very way Georgia voters cast their ballots as early as this year.

The Challenges of Explaining Cybersecurity in the Courtroom

While there is no denying the growing importance of cybersecurity experts in modern-day disputes, it is also important to note that many challenges still exist in presenting highly technical arguments and/or evidence in a court of law.

Perhaps most notably, there remains a significant gap between legal and technical language, and between the knowledge of cybersecurity professionals and that of the judges, lawyers, and juries tasked with parsing particularly dense information. In other words, today’s trial teams need to work closely with cybersecurity experts to develop communication strategies that adequately illustrate their arguments without creating unnecessary confusion or a misunderstanding of the evidence being presented. Visuals are a particularly useful tool for helping both litigators and experts explain complex topics while keeping decision-makers engaged.

Depending on the nature of the data breach or cybercrime in question, you may be tasked with replicating a digital event to support your specific argument. In many cases, this can be incredibly challenging due to the evolving and multifaceted nature of modern cyberattacks, and it may require extensive resources within the time constraints of a given matter. Thus, it is wise to use every tool at your disposal to boost the power of your team—including custom expert witness sourcing and visual advocacy consultants.

What You Should Look for in a Cybersecurity Expert

The qualifications to look for in a cybersecurity expert depend heavily on the details of each individual case, making it critical to identify an expert whose experience reflects your precise needs. For example, a digital forensics specialist will offer an entirely different skill set than someone with a background in data privacy regulations and compliance.

Making sure an expert has the relevant professional experience to assess your specific cybersecurity case is only one factor to consider. In addition to verifying education and professional history, you must also assess the expert’s experience in the courtroom and familiarity with relevant legal processes. Similarly, expert witnesses should be evaluated based on their individual personality and communication skills, as they will be tasked with conveying highly technical arguments to an audience that will likely have a difficult time understanding all relevant concepts in the absence of clear, simplified explanations.

Where to Find the Most Qualified Cybersecurity Experts

Safeguarding the success of your client or firm in the digital age starts with the right expertise. You need to be sure your cybersecurity expert is uniquely suited to your case and primed to share critical insights when the stakes are high.

Introducing the New SmartExpert: Self-driving Car "Drivers"

The National Highway Traffic Safety Administration has deemed the artificial intelligence that controls Google’s self-driving car a qualified “driver” under federal regulations. So, if a computer can drive, must we have a computer testify as to whether this new “driver” was negligent? It sounds laughable: “Do you, computer, swear to tell the truth?” But, with so many new potential avenues of litigation opening up as a result of “machines at the wheel,” it made us wonder: how smart will the new expert have to be?

With its heart beating in Silicon Valley and its position well established as a proponent of computer invention and progress, it was surprising that California was the first state to suggest we need a human looking over the computer’s shoulder. That is essentially what the California Department of Motor Vehicles’ draft regulations for self-driving vehicles propose – that self-driving cars have a specially licensed driver prepared to take the wheel at all times. After years spent developing and testing self-driving cars in its hometown of Mountain View, California, Google may now be looking elsewhere for testing and production. The rule proposed by the California DMV would make Google’s car impossible to operate in the state. Why? Because humans cannot drive the Google self-driving car. It has no steering wheel and no pedals. The Google car could not let a human take over the wheel. Does that thought make you pause?

It apparently didn’t give the National Highway Traffic Safety Administration any cause for concern, as it approved Google’s self-driving software, finding the artificial intelligence program could be considered a bona fide “driver” under federal regulations. In essence, Google’s driving and you are simply a passenger. If you would hesitate to get in, consider what Chris Urmson, lead engineer on Google’s self-driving car program, explains: “We need to be careful about the assumption that having a person behind the wheel will make the technology safer.” Urmson is essentially saying computers are safer drivers than humans. When you think about the number of automobile accident-related deaths in the United States alone, he may be right. And if he is right, wouldn’t artificial intelligence sophisticated enough to drive a car more safely than humans be able to learn to do other things better as well? Couldn’t it drive a forklift, perform surgery on humans, manage a billion-dollar hedge fund? If that is where things are heading, who will testify as to the applicable standards of behavior for these machines? In the hedge fund example, will it be a former hedge fund manager who has years of experience handling large, bundled securities or a software developer who has years of experience programming artificial intelligence?

Who do you think will be able to testify in cases where an artificially intelligent machine plays a role? Liability at the hands of a machine is bound to emerge. Someone will have to speak to the standard of judgment, discretion, and care applicable to machines. Maybe Google will be allowed to text while driving. Who’s to say?


Apple Gets Another Bite At $368 Million Verdict


You don’t get two bites at the apple, it is sometimes said, but Apple Inc. is getting a second bite at defending itself against a massive damages award after the Federal Circuit Court of Appeals vacated a $368 million jury verdict in a patent infringement case.

The court vacated the award because it found that the plaintiff’s damages expert improperly relied on a model known as the Nash Bargaining Solution to calculate reasonable royalty damages.

Federal district courts have split on whether to allow expert testimony using the Nash Bargaining Solution, but the Federal Circuit held that the expert failed to establish the tie between the Nash theorem and the facts of this case.

The underlying issue in the case was whether two of Apple’s products — FaceTime, which allows secure video calling between Apple devices, and VPN On Demand, which creates a virtual private network from an iOS device — infringed four patents owned by VirnetX, a Nevada software and licensing company.

A jury in federal court in Tyler, Texas, concluded that all four patents were valid and that Apple had infringed them. It awarded VirnetX damages of $368 million.

Proving Reasonable Royalties

On appeal, the Federal Circuit upheld the jury’s verdict of infringement with regard to the VPN On Demand product but reversed and remanded aspects of the infringement verdict with regard to the FaceTime product.

The court then turned its attention to Apple’s challenges of the testimony of VirnetX’s damages expert. In a patent case, when infringement is found, a court is to award damages “adequate to compensate for the infringement, but in no event less than a reasonable royalty for the use made of the invention by the infringer.”

To establish a reasonable royalty rate in this case, VirnetX’s expert offered three alternative methods of calculation. Apple challenged the admissibility of all three methods under Daubert, but over Apple’s objection, the trial court admitted the testimony.

On appeal, however, the Federal Circuit found problems with all three theories.

Smallest Salable Unit

The expert’s first theory was to apply a one percent royalty rate to the base. He derived the one percent rate from the royalty VirnetX typically sought in licensing its patents. For the base, he used what he called the “smallest salable unit” — the lowest sale price of each model of the iOS devices that contained the challenged features. With this theory, he calculated the total damages to be $708 million.
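To make the structure of that calculation concrete, here is a minimal sketch of a royalty-rate-times-base model of the kind described above. Only the one percent rate comes from the testimony summarized in this article; every device price and unit count below is a hypothetical placeholder, not a figure from the VirnetX record.

```python
# Minimal sketch of a royalty-rate-times-base damages model.
# The 1% rate is the one described in the testimony above; every device
# price and unit count is a hypothetical placeholder, NOT a figure from
# the VirnetX v. Apple record.

ROYALTY_RATE = 0.01  # one percent

# Hypothetical "smallest salable unit" base: lowest sale price of each
# accused device model multiplied by hypothetical units sold.
hypothetical_base = [
    ("Device model A", 399.00, 1_000_000),  # (model, lowest price, units)
    ("Device model B", 499.00, 2_000_000),
    ("Device model C", 649.00, 1_500_000),
]

royalty_base = sum(price * units for _, price, units in hypothetical_base)
damages = ROYALTY_RATE * royalty_base

print(f"Royalty base: ${royalty_base:,.0f}")
print(f"Claimed damages at a 1% rate: ${damages:,.0f}")
```

As the next paragraphs explain, the dispute centered less on this arithmetic than on whether the base itself was properly apportioned to the patented technology.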

Apple argued that the expert erred by using the entire market value of its products as the royalty base without demonstrating that the patented features drove the demand for those products. The Federal Circuit agreed.

“The law requires patentees to apportion the royalty down to a reasonable estimate of the value of its claimed technology, or else establish that its patented technology drove demand for the entire product,” the court explained. “VirnetX did neither.”

The court went on to say that the expert “did not even try to link demand for the accused device to the patented feature, and failed to apportion value between the patented features and the vast number of nonpatented features contained in the accused products.”

The Nash Bargaining Solution

For both the expert’s second and third theories — each of which he used only with regard to FaceTime — he relied on the Nash Bargaining Solution, a game-theoretic bargaining model developed in 1950 by John Nash, a mathematician and co-winner of the 1994 Nobel Prize in economics.

In his first use of the Nash theorem, the expert began by calculating the profits associated with the use of FaceTime. He did this based on the revenue generated by Apple’s addition of a front-facing camera on its mobile devices. He then determined that, under the Nash theory, the parties would have split these profits 50/50. However, after accounting for Apple’s stronger bargaining position, he concluded that Apple would have taken 55 percent of the profits and VirnetX, 45 percent. That amounted to $588 million in damages.
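For readers who want to see the mechanics, the sketch below reproduces the shape of that first Nash-based calculation. The profit figure is a hypothetical placeholder, not a number from the record; only the 50/50 baseline and the 55/45 adjustment come from the testimony described above.

```python
# Sketch of the first Nash-based theory described above. The profit figure
# is a hypothetical placeholder, NOT a number from the record; only the
# 50/50 baseline and the 55/45 adjustment reflect the described testimony.

facetime_profits = 1_000_000_000.0  # hypothetical incremental profits from FaceTime

nash_baseline_split = 0.50  # Nash Bargaining Solution starting point: split the surplus 50/50
virnetx_share = 0.45        # adjusted downward for Apple's stronger bargaining position (55% to Apple)

print(f"Baseline 50/50 split would give VirnetX: ${facetime_profits * nash_baseline_split:,.0f}")
print(f"Adjusted 45% split gives VirnetX:        ${facetime_profits * virnetx_share:,.0f}")
```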

For his second use of the Nash theorem, the expert assumed that FaceTime “drove sales” for Apple’s iOS products. Eighteen percent of all iOS sales would not have occurred without the addition of the FaceTime feature, he concluded. Based on that, he calculated the amount of Apple’s profits that he believed were attributable to FaceTime and apportioned 45 percent of those profits to VirnetX. That amounted to $606 million in damages for FaceTime.
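The second Nash-based theory follows the same pattern, with the “drove sales” assumption layered on top. Again, the revenue and margin figures below are hypothetical placeholders; only the 18 percent and 45 percent figures come from the testimony described above.

```python
# Sketch of the second Nash-based theory. Revenue and margin are hypothetical
# placeholders, NOT figures from the record; only the 18% "drove sales"
# assumption and the 45% split reflect the described testimony.

total_ios_revenue = 50_000_000_000.0   # hypothetical iOS device revenue
profit_margin = 0.30                   # hypothetical profit margin

share_driven_by_facetime = 0.18        # sales assumed not to occur without FaceTime
virnetx_share = 0.45                   # VirnetX's share under the adjusted Nash split

facetime_profits = total_ios_revenue * profit_margin * share_driven_by_facetime
damages = facetime_profits * virnetx_share
print(f"Hypothetical damages under the 'drove sales' theory: ${damages:,.0f}")
```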

Apple challenged both these theories, arguing that the expert’s use of the 50/50 split as a starting point was akin to the 25 percent rule of thumb for royalties that the Federal Circuit had rejected in earlier cases. Here again, the Federal Circuit agreed with Apple’s argument.

The problem, the court explained, is that the Nash theorem arrives at a result that follows from a certain set of premises. Here, the expert never tied his use of the theorem to the facts of the case.

“Anyone seeking to invoke the theorem as applicable to a particular situation must establish that fit, because the 50/50 profit-split result is proven by the theorem only on those premises,” the court said. In this case, the expert never did that.

Based on these conclusions, the court vacated the damages award and sent the case back to the district court for further proceedings.

Has an expert ever used the Nash Bargaining Solution, or other game theory, in one of your cases? If so, was the result positive?


Court-Appointed Experts: The Future of Litigation?


After roughly two years of black-market dealing in relative anonymity, the secretive Silk Road drug-dispensing site was targeted by U.S. federal authorities and subsequently shut down. Its alleged owner and operator was arrested.

However, one lawyer and technology expert is claiming that the FBI is lying about how it found the Silk Road server that allowed authorities to seize the site as well as millions of dollars in cyber coinage. It is a complicated question of computer evidence, one which the courts may not be capable of fully understanding.

As the worlds of cybercrime, criminal law, economics, and evidence continue to collide, the technological war between law enforcement and crypto-criminals is requiring prosecutors to enter a new realm of trial advocacy and courtroom tactics – one in which tech experts and computer specialists are vital for judicial clarity and jury instructions.

At a time when iron bars and jailhouse walls can do little to stop crimes and communications carried out over intangible, worldwide web connections, stopping cybercrime is one thing; explaining it to a judge or jury is a much different task.

From Drug Money to Bona Fide Bitcoins

Earlier this month, Silk Road 2.0’s alleged owner and operator, Blake “Defcon” Benthall, was arrested by the FBI. Just hours later, the defendant reportedly began tweeting from jail, requesting bitcoin donations. Many law enforcement officials didn’t even know what that meant or what the defendant was soliciting.

Bitcoin is a form of cryptocurrency that has garnered international recognition in the last couple of years after it was revealed to be the form of monetary tender used to purchase drugs from the original Silk Road website.

However, the currency has also opened the eyes of legitimate businessmen, economists, and financial experts – some of whom believe that bitcoin and other cryptocurrencies could become the money form of the future. Our BullsEye blog examined the world of bitcoins in a March 2014 article entitled “What The #!$% Is Bitcoin?”

Three months after that article’s publication, the U.S. Marshals Service held an online auction and sold nearly 30,000 of the bitcoins it had seized from Silk Road. At the time, the value was approximately $18 million. They were purchased by American venture capitalist Tim Draper, who has just brought in former SEC Chairman Arthur Levitt as an advisor for his bitcoin-investor platform, recently rebranded as “Mirror.”

The FBI, however, claims that the auctioned bitcoins that Draper purchased represent less than a quarter of those seized from Silk Road and its alleged mastermind Ross William Ulbricht. Thirty-year-old Ulbricht, of Austin, Texas, is alleged to be the original Silk Road founder, who called himself “Dread Pirate Roberts,” named after the sword-wielding character in the movie The Princess Bride.

In a September 2013 interview with Forbes magazine, the libertarian-minded Dread Pirate Roberts is quoted as saying, “We’ve won the State’s War on Drugs because of Bitcoin.”

Ulbricht was arrested in San Francisco just days after the article was published. He was charged with money laundering, computer hacking, conspiracy to traffic narcotics, and attempted murder of witnesses. His federal trial is expected to begin in January in Manhattan.

The FBI said that it is holding on to the 144,342 bitcoins seized from Ulbricht’s computer until after the resolution of the criminal trial. Presumably, if Ulbricht is convicted and the seizure is deemed valid, the bitcoins will be auctioned off to the public. That cache of bitcoins is worth approximately $56 million today.

Cybercrime Confusing Courts

Expert witness and attorney Joshua J. Horowitz, however, claims in court documents released last month that the FBI is lying about how it accessed the Silk Road back-end server. In an 18-page declaration filed with the U.S. District Court for the Southern District of New York, Horowitz writes about “Nginx access logs,” “tarball mtimes” and “phpmyadmin virtual host site configurations,” claiming that he can show that the FBI could not have infiltrated Silk Road via the manner that it claims in the indictment and other court documents.

“[B]ased on the Silk Road Server’s configuration files provided in discovery, former Special Agent [Christopher] Tarbell’s explanation of how the FBI discovered the server’s IP address is implausible,” Horowitz states.

However, much of Horowitz’s technologically sophisticated declaration is all but incomprehensible to the average attorney or jurist. Because many of these issues are evidentiary in nature, the question of whether certain physical evidence is admitted at trial will be left up to a single judge.

How will federal judges – many of whom were middle-aged well before Steve Jobs and Steve Wozniak began tinkering away inside a garage in 1976 – be capable of ruling on these evidentiary issues based on court documents and legal arguments communicated in a specialized, seemingly foreign, language?

“The critical configuration lines from the live-ssl file are: ‘allow 127.0.0.1; allow 62.75.246.20; deny all;.’ These lines tell the web server to allow access from IP addresses 127.0.0.1 and 62.75.246.20, and to deny all other IP addresses from connecting to the web server.… Based on this configuration, it would have been impossible for Special Agent Tarbell to access the portion of the .49 server containing the Silk Road market data, including a portion of the login page, simply by entering the IP address of the server in his browser,” Horowitz writes, seemingly in an attempt to “dumb down” the explanation of the process.
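For readers without a server-administration background, the point of that passage can be illustrated with a short sketch. nginx checks allow and deny directives in order and applies the first one that matches the connecting address, so a trailing “deny all” shuts out every address not explicitly allowed. The helper below is an illustrative simplification written for this article, not nginx’s actual code.

```python
# Illustrative simplification of how nginx's access module evaluates
# allow/deny directives: rules are checked in order, and the first rule
# that matches the connecting IP address decides the outcome.
# This is a sketch for explanation, not nginx's actual implementation.

def ip_allowed(client_ip: str, rules: list[tuple[str, str]]) -> bool:
    """Return True if the client IP would be permitted under the given rules."""
    for action, pattern in rules:
        if pattern == "all" or pattern == client_ip:
            return action == "allow"
    return True  # no matching rule and no "deny all": the request is permitted

# The live-ssl configuration quoted in the Horowitz declaration.
live_ssl_rules = [("allow", "127.0.0.1"), ("allow", "62.75.246.20"), ("deny", "all")]

print(ip_allowed("62.75.246.20", live_ssl_rules))  # True  - explicitly allowed
print(ip_allowed("203.0.113.7", live_ssl_rules))   # False - caught by "deny all"
```

Under that simplified model, a browser connecting from any address outside the allow list would be refused by the “deny all” line, which is the crux of Horowitz’s argument about the live-ssl configuration.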

While the Kentucky-born, Yale-educated U.S. District Judge J. Paul Oetken is quite young compared to his life-appointed colleagues, it is a stretch to assume that the 49-year-old jurist (or even his law clerk) can follow even the basics of Horowitz’s argument. For him to rule properly on these evidentiary issues, one would assume that technology experts will need to be hired by the courts to examine the specific allegations and pretrial disputes.

Unlike the decision to admit or exclude expert witnesses in federal court, where the judge must determine whether a witness is qualified to offer opinions to the jury, the decision to admit or exclude the physical evidence that was searched and seized rests solely with the judge. In the Ulbricht prosecution, one would assume that the ruling on whether the FBI’s evidence gathered from the Silk Road site is admissible at trial will be far more consequential than any issue presented to the jury once that evidence is admitted.

This will not be an easy decision for the judge.

“The active phpmyadmin configuration file contained in Item 1 of discovery contains the following lines: ‘listen 80; root /usr/share/phpmyadmin; allow 127.0.0.1;.’ These lines direct the phpmyadmin virtual host to listen on port 80, which is the standard port for web traffic, and also tells Nginx to serve files from the phpmyadmin folder. The absence of ‘deny all’ means that it would be possible for an IP address outside the Tor network to connect to the .49 server. However, an IP address outside the Tor network would have been able to access only the login page for phpmyadmin and the files contained in the phpmyadmin folder, not any part of the Silk Road market or even the login screen, as claimed in the Tarbell Declaration,” Horowitz explains further.
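Applying the same simplified model to the phpmyadmin configuration shows why the absence of a “deny all” line matters: an unmatched outside address falls through and is permitted, but only to the phpmyadmin content that the virtual host serves.

```python
# Same simplified allow/deny model applied to the phpmyadmin configuration
# quoted above: without a "deny all" line, an unmatched outside IP falls
# through and is permitted to reach the phpmyadmin virtual host.

def ip_allowed(client_ip: str, rules: list[tuple[str, str]]) -> bool:
    for action, pattern in rules:
        if pattern == "all" or pattern == client_ip:
            return action == "allow"
    return True  # no "deny all", so unmatched addresses are permitted

phpmyadmin_rules = [("allow", "127.0.0.1")]  # the quoted config has no "deny all" line

print(ip_allowed("203.0.113.7", phpmyadmin_rules))  # True - an outside IP can connect,
                                                    # but only phpmyadmin files are served
```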

If Judge Oetken thinks this is confusing, just wait until the experts start explaining what a bitcoin is.

When it comes to complicated technological issues that are procedural in nature and that are therefore not intended for the jury, will courts now need to hire experts to explain and inform judges? Or do today’s judges really have no business making these highly specialized decisions on evidence?
