Articles Posted in Technology

In this article, we plan to discuss the Fifth Amendment implications of requirements to digitally identify oneself, for example by facial or thumbprint recognition.

The spread of data-encryption services has made the retrieval of information more difficult for law enforcement officials.  Over half of the FBI’s attempts to unlock devices in 2017, for example, were thwarted by encryption.  If investigators had their way, the government could simply compel a suspect to hand over the password.  Their biggest obstacle, however, remains the Fifth Amendment.

Fifth Amendment jurisprudence has come to bear on this issue in the past decade, yet remains somewhat unsettled.  Back in 1976, Fisher v. United States set a foundation for the issue.  The case involved the IRS attempting to compel the defendants to turn over certain documents, which they refused to do on the grounds that they would be incriminating themselves and were protected by the Fifth Amendment.  The Supreme Court ruled that the Fifth Amendment’s words, “[n]o person … shall be compelled in any criminal case to be a witness against himself,” only protect a suspect from having to communicate incriminating testimonial evidence, and that the production of that case’s physical evidence would not compel the person to “restate, repeat or affirm the truth of it.”  The Court later fleshed out the term “testimonial” in a case regarding the subpoena of bank records, explaining that it is “[t]he contents of an individual’s mind [that] fall squarely within the protection of the Fifth Amendment.”  Generally, the courts do not protect people from having to produce physical evidence, which is considered neither “testimony” nor the “contents of an individual’s mind.”

Do the courts have the ability to subpoena user identity information from Instagram?  Can a person file a defamation lawsuit against the operators of an Instagram page?  An advertising executive was fired after being posted about on an Instagram account, Diet Madison Avenue, which is known for outing sexual harassment and discrimination in the advertising industry.  The fired executive, Ralph Watson, is now suing Diet Madison Avenue and the people who ran it for defamation.  The lawsuit names “Doe Defendants” for the people who ran the page and currently remain anonymous.

Watson claims that Diet Madison Avenue made false allegations about him that cost him his job.  Several other agencies have fired men whose names appeared on the Instagram account.  Since being fired, Watson claims that he has been unable to find work.  “Trial by social media” has been used to describe these incidents.  Watson claims that he has never sexually harassed anyone, but says that his career and reputation were ruined overnight.  Watson hopes that the trial will bring the operators of the account into court, where they must present their evidence and defend their claims.

The operators of the account have said that the allegations were independently researched and confirmed before any names were posted.  The specific post in question called Watson an “unrepentant serial predator” who “targeted and groomed women,” among other things.  Watson has also sued the advertising firm he worked for, alleging defamation, wrongful termination, and breach of contract.

Is a warrant required for law enforcement to access a suspect’s location information generated by the suspect’s cell phone?  Would obtaining such data violate a person’s Fourth Amendment rights?  In this blog, we will be discussing whether a warrant is required for law enforcement to access a user’s location information from cell phone service providers.  Because geolocation information is updated almost continually as users interact with apps and send and receive messages, such location information is nearly always available.  Just as constant, however, are Fourth Amendment rights, namely the right to be free from unreasonable searches and seizures.

In Carpenter v. United States, the Supreme Court analyzed this Fourth Amendment issue.  To obtain a search warrant, police typically must submit a warrant application to an independent judge or magistrate.  In the application, the police must outline facts leading the judicial officer to believe that the suspect is engaging in criminal behavior.  This showing of likely criminal behavior is known as “probable cause” and is required for police to conduct a search of a place or person.

There is an applicable federal law.  Section 2703(d) of the Stored Communications Act, which protects the privacy of stored electronic communications and related records, allows an exemption to the warrant typically required for a search.  Orders made under 2703(d) can compel the production of certain stored communications or non-content information if “specific and articulable facts show that there are reasonable grounds to believe that the contents of a wire or electronic communication, or the records or other information sought, are relevant and material to an ongoing criminal investigation.”  This is closer to what is known as the “reasonable suspicion” standard than to “probable cause.”  Reasonable suspicion comes into play when police pull over a vehicle, for example, or conduct a stop and frisk of a suspicious person whom they believe may be concealing a weapon.  Reasonable suspicion is a much lower bar to meet than probable cause.

For this week’s blog post, we will be discussing a recently decided copyright case, in which a foreign broadcaster was held liable for violating the Copyright Act when it allowed United States users to access copyrighted material through a video-on-demand website.  The specific case is Spanski Enterprises, Inc. v. Telewizja Polska, which was decided on appeal by the United States Court of Appeals for the District of Columbia Circuit.

In this case, Telewizja Polska, a foreign broadcaster, uploaded copyrighted television episodes to its website, and the episodes were then shown on computer screens in the United States for visitors to view.  Spanski held the exclusive North American rights to the episodes.  The court held that in transmitting the episodes to United States viewers, Telewizja Polska violated the Copyright Act.

Taking a step back, we will briefly discuss what makes a work copyrightable.  For a work of authorship to be copyrightable, the work must: (1) be fixed in a tangible medium of expression; (2) be original (i.e., independently created rather than copied from another work); and (3) display some minimal level of creativity (typically just slightly more than a trivial amount).

For this week’s blog post, we will be continuing with a discussion of another recently decided Supreme Court case.  Specifically, we will cover United States v. Microsoft Corporation, and talk about the ramifications the Court’s decision has for the world of internet technology.

This case involves user data privacy rights and the ability of US-based technology companies to refuse to comply with federal warrants when user data is stored overseas.  It concerned the extraterritorial (outside of the United States) application of the Stored Communications Act (SCA), and whether warrants issued under the SCA could reach data held in newer internet technologies such as cloud storage and overseas data centers.

In 2013, FBI agents obtained a warrant requiring Microsoft to disclose emails and information related to the account of a customer who was believed to be involved in drug trafficking.  Microsoft moved to quash the warrant, arguing that all of the customer’s emails and information were stored in Microsoft data centers in Dublin, Ireland.  The district court held Microsoft in civil contempt for refusing to give agents the emails, but this decision was reversed by the Second Circuit.  The Second Circuit held that requiring Microsoft to give federal agents emails stored overseas would be an impermissible extraterritorial application of the Stored Communications Act (18 U.S.C. § 2703).

This month on our blog we have been talking about internet-law-related legislation and developments.  This week we will continue the topic with a discussion of artificial intelligence and privacy issues.  Specifically, we will talk about Google’s new Duplex technology, which allows users to make appointments, book reservations, and call businesses via an artificial-intelligence assistant.

Last month, Google introduced Google Duplex at its annual conference.  Google’s CEO presented the technology on stage and played a recorded phone call in which the digital assistant booked a hair appointment for a client.  A second recording, of the digital assistant making a restaurant reservation, was also played.  What was immediately striking about these conversations is how realistic the assistant sounded.  Upon listening, the voice and interaction are nearly indistinguishable from a human conversation.  The digital assistant uses filler words such as “uh” and “um,” and pauses before responding as if it were “thinking” of a response.  More often than not, the party on the other end of the line likely has no idea they are talking with a robot, much less that they are being recorded.  This may pose a liability for Google.

Google has been recording the conversations the assistant has in order to improve and refine the technology.  By recording the conversations, engineers at Google can see exactly which phrases or questions make the assistant function in a less-than-optimal way.  When engineers and programmers can recognize these challenging variants in conversation, they can program an appropriate response, making the technology more user-friendly and successful.  Problems arise in the legal world, however, when Google fails to tell the party on the other end of the line that they are being recorded.

In our first June blog post, we discussed a bill passed by the State Senate that would impose net neutrality rules on ISPs in the State of California.  We continue this week with the theme of internet-regulating laws being proposed in our state.

The California Consumer Privacy Act of 2018 (CCPA) is a ballot measure that would provide unprecedented protection for user data in California.  Users would have the ability to prevent companies from selling their data to third parties, as well as to demand full disclosure of all data being collected.  Consumers would also have the ability to sue companies that violate the law.

The CCPA was started by Alastair Mactaggart, a real estate developer in the San Francisco area, along with Rick Arney, a finance executive, and Mary Stone Ross, an attorney and former CIA analyst who has worked on national security matters with the House of Representatives.  The group says they are just three people living in California who want what is best for their kids and the future of Californians.  They believe the “bargaining” that occurs between big companies and users over consumer privacy, which is essentially take-it-or-leave-it, is not bargaining at all.  Given the practical necessity of laptops and cell phones today, they want users to have more choice and power over what information is collected and how that information is used.

Mentioned in passing in our first December blog post is another potential pitfall for operators of Internet-based services such as websites or applications.  This particular pitfall comes out of the State of California.  However, given the internet’s role as a widespread source of information, this is a lesson for anyone pitching to minors online.  The law is Business & Professions Code 22580-22582 (“BPC 22580-22582”), otherwise known as “Privacy Rights for California Minors in the Digital World.”  What does this law pertain to in general?  What kinds of entities need to be concerned about California minors?  What privacy rights do these minors enjoy?

What is BPC 22580-22582?

BPC 22580-22582 is a sub-part of the California Business and Professions Code.  It applies to operators of Internet websites and services, including, but not limited to, applications that are directed towards minors, as well as operators who know their websites or services are used or visited by minors.  Here, “directed to” means the website or service was created mainly for minors and is not intended for a general audience, including, but not limited to, adults.  The law states that, for minors with registered accounts, entities must:

Last week we discussed smart toys, and we mentioned “COPPA” in that article.  As such, some of you may be asking, “What is COPPA?”  In short, COPPA is a federal law specifically tailored towards children, and stands for the “Children’s Online Privacy Protection Act.”  This law is meant to protect children from overexposure and to prohibit businesses from gathering invasive amounts of analytics on children using their products or services.  This remains a legitimate concern, and the law attempts to curtail some of the worst aspects of online life.  What exactly does COPPA prohibit?  Are there any limitations?  Does it provide guidelines for a business to follow to ensure compliance?

COPPA Prohibitions

The spirit of COPPA can be summarized as follows: it is unlawful for an operator of a website or online service directed to children, or one with knowledge that it is collecting or maintaining a child’s information, to fail to give notice on the website of what information it collects, how that information is used, and how it is disclosed; to fail to obtain parental consent; to fail to provide reasonable means for parents to review their child’s information or cancel the use of the service or website; to condition participation in a game, the offering of a prize, or another activity on the disclosure of more personal information than is necessary; or to fail to establish and maintain procedures to protect the confidentiality, security, and integrity of children’s information.

We have finally reached December, and with it comes the time for shopping.  Of course, some people will focus on the youngest members of their families, i.e., children.  However, it needs to be emphasized that even with children, there are special concerns.  The law considers juveniles and their decision-making capabilities, and in the age of the “smart toy,” this could have far-ranging impacts on businesses and the emerging market.  What is a smart toy?  How might it differ from an average toy?  What would a business need to be aware of?  What about a parent?

Smart Toys

Smart toys, alternatively known as “connected” toys, are devices that can be used for play but also connect to the internet or the cloud.  This concept may sound like the Internet of Things, and these smart toys are indeed another part of it.  A good example is the “Hello Barbie” doll from 2015.  These dolls were akin to a smart chat program, or a more personable Siri/Cortana/Alexa.  While Barbie’s operating system would not allow her to deviate significantly from a script, she would remember and adapt to a child’s thoughts, concerns, and desires.