Autonomous Vehicles: Job Killer?

According to 2014 Census data, more than 4.4 million Americans work as drivers. Will autonomous vehicles eliminate most of these driving jobs? As autonomous vehicle technology grows and advances, many Americans risk losing their jobs or taking significant cuts in income because a new, convenient technology is taking their place. Autonomous vehicles are expected to reduce labor costs, fuel costs, and accidents. For companies fighting for profit margins, the potential savings will outweigh the human cost. While companies plan to save money through this new innovation, some individuals will lose income and be left with limited job options in their field.

Take truck driving, for instance. According to Census Bureau occupational data, almost two percent of working Americans are truck drivers. Truck driving is one of the most common jobs in the country, and the industry has already shown signs of being affected by autonomous vehicles. Last year, the Colorado Department of Transportation allowed an autonomous truck from Otto, a company recently acquired by Uber, to deliver 51,744 cans of Budweiser with no one in the driver’s seat. (A driver remained in the truck for safety purposes.) This year, Uber plans to equip thousands of trucks with autonomous technology.


Autonomous Vehicles and All That Data

In an earlier post, we discussed the potential ownership models for autonomous vehicles, also known as driverless cars (“AVs”). Models range from traditional ownership as we understand it today, to license-based models (vehicles owned by someone else that you may use on an exclusive or non-exclusive basis), to service-based models (you do not own the vehicle, but you can call it when you want it, e.g., a cab or Uber). In this post we will explore the data-intensiveness of autonomous vehicles, the impending data “land grab,” and who will own and control all of the data generated by AVs.

An AV can be thought of as a massive, always-on computer.  Sensors in the AV interface with sensors in the environment.  Data from satellite navigation systems is also in play.  Who owns all that data?  Is it the owner of the AV, the ‘driver’ of the AV (e.g. the licensee of a leased AV), or the party collecting the data? 


Autonomous Vehicles: A Regulatory Perspective

The coming innovation of autonomous vehicles (i.e., self-driving cars) has been covered widely in the news over the past 18-24 months.  Not long ago, the reality of autonomous vehicles was unknown to most Americans, but it is now creeping into the consciousness of more and more of them.  As the arrival of this new technology approaches, it is becoming clearer that it will cause massive disruption in an area of American life that is intensely regulated at every level.  The manufacture, distribution, sale, ownership, and operation of cars are all regulated by federal, state, and local government.  When autonomous vehicles enter the commercial marketplace (as they soon will), the revolutionary transformation they bring will include significant regulatory changes.

The federal government is embracing the movement from traditional vehicles to autonomous vehicles. In 2015, the White House announced the Smart City initiative, which promoted connectivity between autonomous vehicles and their environment (e.g., roads, buildings, etc.). The White House released a statement pledging an investment of $160 million in federal research and technology collaborations to improve these technologies in cities. In addition, the government proposed about $4 billion in the federal budget for autonomous vehicle research and development over ten years. The government is taking active steps to prepare the country for the change that autonomous vehicles will bring, including regulatory change.


Cyber Security and Social Engineering: A Big Low Tech Problem

Headline-grabbing cyber hacks of email accounts belonging to celebrities, corporations, government officials and political campaigns are becoming the norm.  Cybersecurity intended to guard against these acts brings to mind high tech computer hardware and software fixes delivered by knowledgeable IT professionals, who are expected to prevent network intrusions, stolen passwords, viruses, ransomware attacks and other hacks.

But the most recent notable cyber hacks were not caused by high tech espionage.  Rather, they were the product of low tech social engineering – the use of deception to manipulate users into divulging confidential passwords and other personal information.  This kind of hack takes many forms – examples include fake security alerts, apparently from trusted websites, prompting users to update passwords, and phishing e-mails from what appear to be known, trusted contacts asking recipients to download files or click on provided links.


The Anthem Breach – A Retrospective (Part II)

We published Part I of our “Anthem Breach Retrospective” in January 2017.  Coincidentally, at around the same time, several plaintiffs in one of the earliest-filed cases arising out of the Anthem data breach voluntarily asked a judge in the Northern District of California to dismiss their lawsuits. The requests for dismissal came after Judge Cousins ordered select plaintiffs to comply with a discovery request by Anthem, requiring them to submit their computers to an independent forensic examiner to determine whether malware had caused data or credentials to be stolen from the plaintiffs’ computers even before the breach of Anthem’s systems. In other words, Anthem wanted to know whether someone else caused the plaintiffs’ alleged injuries.

Legally, it isn’t surprising that Anthem should be entitled to this kind of information through discovery, because it pertains to the issue of causation. Anthem wanted to know if the plaintiffs’ personal information had been compromised under circumstances having nothing to do with Anthem, months before the Anthem breach. In discovery, it was fair game for Anthem to seek to compel these plaintiffs to comply with its request – even if compliance required the disclosure of confidential information. But it appears that at least one of these plaintiffs dropped out of the suit because he did not wish to disclose possibly confidential information in a lawsuit in which he was suing over alleged negligence with respect to confidential information.


ISO’s Privacy Standard for Cloud Service Providers

In July 2014, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) issued a new security standard – ISO 27018 – which outlines best practices for how public cloud service providers can better protect personally identifiable information.  Although the standard expressly applies only to public cloud providers, it is instructive for any cloud provider – public or private.

As with all ISO standards, compliance with ISO 27018 is voluntary, and certification under the standard is not required by any law. Over time, however, more and more cloud service contracts are requiring compliance with, or certification to, this standard. Adhering to ISO 27018 can help build a foundation of trust between a cloud provider and its customers. During the contract negotiation stage, the standard can serve as a beneficial framework for providing assurances that most customers can understand and rely on. Customers moving to the cloud are giving up control of their sensitive data and relying on the cloud provider to maintain adequate safeguards to protect it. New cloud adopters may be nervous, and the cloud provider will likely need to provide assurances and manage its customers’ qualms in order to get them under contract.


Key HIPAA Settlement Agreements by HHS’s Office for Civil Rights in 2015 & 2016

The last time this blog presented an overview of key HIPAA settlement agreements at the Office for Civil Rights (OCR) in the U.S. Department of Health and Human Services, we reviewed 2014.  The number of complaints that year had spiked roughly 25% compared to 2013.  This post examines key cases from 2015 and 2016.  While the number of complaints in 2015 held relatively steady with 2014, preliminary numbers suggest that 2016 was the busiest year ever for the Office.

HHS currently has data through November 2016 posted on its website, but none yet for December 2016.  It notes that, from April 14, 2003 through November 2016, it received 144,662 complaints.  Elsewhere, the agency lists the number of complaints received by year from 2003 through 2015, totaling 125,641.  Thus, even without the data for December 2016, it appears that in 2016 the Office received 19,021 complaints.  The previous highest year, 2014, saw 18,015 complaints.
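The arithmetic behind that estimate is straightforward; a quick sketch, using only the figures quoted in the paragraph above:

```python
# HHS complaint totals as reported on its website (figures from the text above).
total_apr2003_through_nov2016 = 144_662  # cumulative, Apr 14, 2003 - Nov 2016
total_2003_through_2015 = 125_641        # sum of the yearly figures, 2003-2015

# Complaints received January-November 2016 (December 2016 not yet posted).
complaints_2016 = total_apr2003_through_nov2016 - total_2003_through_2015
print(complaints_2016)  # 19021

# Even without December's data, 2016 already exceeds the previous record year.
previous_record_2014 = 18_015
print(complaints_2016 > previous_record_2014)  # True
```

Note that the 19,021 figure understates the full-year 2016 total, since it omits December entirely.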

Here’s a brief summary of some key agreements from 2015 and 2016:

Cancer Care Group, P.C. is a 13-doctor radiation oncology practice in Indiana.  In September 2015, Cancer Care agreed to a $750,000 settlement with OCR.  The case initially grew out of the discovery that a laptop had been stolen from a Cancer Care employee’s car.  The laptop contained unencrypted names, dates of birth, SSNs, insurance information, and clinical information for around 55,000 current and former Cancer Care patients.  A subsequent investigation revealed that Cancer Care “was in widespread non-compliance with the HIPAA Security Rule.”  Proper encryption must be part of an organization’s approach to data management.


The Anthem Breach – A Retrospective


Many people and news outlets have weighed in on, and informed the public about, the 2015 Anthem breach. It remains a hot topic in January 2017 because it currently lines up with other prominent stories about hacking ordered by foreign governments.  But even before the Anthem breach was linked to one of the biggest issues of the 2016 election cycle, it was an important data incident, for several reasons.

  1. Why was the Anthem breach important at that time?

The Anthem breach was notable because it was the first major data breach that potentially involved protected health information (PHI). Media coverage of the breach in 2015 reported that personal information of affected individuals was apparently sitting on Anthem’s servers unencrypted.  Encryption of PHI at rest (i.e., data that is not moving) is a much more common data security practice in 2017, in part because of the lessons learned from the Anthem breach. Some laws now even require personal information to be encrypted at rest.

Another novelty at the time was a tactic the hackers employed in the Anthem breach.  When Anthem learned of the breach, it quickly notified affected individuals by e-mail and through public announcements, saying it would send follow-up information about next steps. This speedy notification was lauded by many as a best practice.  But in the wake of Anthem’s public announcements, scammers sent fake e-mails, appearing to come from the company, to untold thousands of Anthem members and former members in an attempt to trick affected individuals into providing additional sensitive personal information.  Again, this provided a valuable lesson for the future, to Anthem and to other companies impacted by hacker-caused data breaches.


Legal Considerations for Website Privacy Policies

You finally created your website.

  • Did you include eye-catching graphics? Check.
  • Did you include an attention-grabbing banner slogan? Check.
  • Did you post all of your social media handles? Check.
  • Did you include a privacy policy for the website? Maybe…

We get questions from clients about whether they are required to include a privacy policy and, if so, what it should say.  The answers may surprise you, but a privacy policy should definitely not be an afterthought for website owners.  It certainly isn’t a best practice to simply copy and paste the privacy policy from another company’s website.  The representations made in website privacy policies may subject website owners to legal risk, so thought and consideration are critical.

We generally convey one broad message when it comes to a published privacy policy – “Say what you do, and do what you say.”


Cloud-Computing Lessons using Software as a Service (SaaS)

Long before anyone referred to “the cloud” as something related to the Internet, software companies began shifting away from expensive, customized, on-site software implementations to something we used to call Software as a Service (SaaS).  Now, “the cloud” is widely recognized as a place where Internet-based computing resources are shared, but SaaS is still out there.  In fact, it is probably the most widespread type of cloud computing; you just don’t hear it called “SaaS” that much anymore.  But SaaS is, fundamentally, the same as it ever was – Internet-based computing that provides shared processing resources and data on demand. The SaaS distribution model, otherwise known as “on-demand software,” gives users access to application software and databases without having to host and manage those resources on their own.

In helping some clients who operate their business completely in the cloud, we’ve learned some things over time. The list below is not exhaustive, but four of the most important lessons are as follows: