Articles Posted in Internet Law

This week our focus shifts to a topic buzzing throughout the modern world. We have written on numerous occasions about cryptocurrency, but we have not discussed more pointedly the technological mechanism that yields it – i.e., the blockchain.  A complex, decentralized technology with the power, accuracy, and security to rival traditional financial systems, the blockchain is what gives cryptocurrencies their mechanism and value.  Its international scope can pose jurisdictional questions, its decentralized nature can puzzle tort plaintiffs, and the enforceability of “smart contracts” is an issue of first impression for most courts.  Additionally, lines must be drawn with regard to intellectual property.

To provide a brief background, the blockchain is the structure by which value is produced and conserved in cryptocurrencies.  Through a complex system of checks and balances, “miners,” who are rewarded for solving algorithms, validate transactions by mathematically verifying them against the previous transactional history of the asset in question.  A “block” is created when transactions are consolidated after nodes in a given network unanimously corroborate their veracity.  The “miners” then compete to solve a highly complex algorithm for the block; the winner receives a coin, and the block is added to the “chain.”  An innovation has thus emerged onto which legal institutions must overlay their concepts.
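The chaining mechanics described above can be sketched in miniature. The following Python sketch is purely illustrative and vastly simplified (the block format, difficulty target, and transactions are invented for the example and resemble no real cryptocurrency): each block stores the hash of its predecessor, "mining" is a brute-force search for a nonce that makes the block's own hash meet a difficulty target, and altering any earlier block breaks every later link in the chain.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(block: dict, difficulty: int = 3) -> dict:
    """Search for a nonce so the block's hash starts with `difficulty` zeros (proof of work)."""
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# A genesis block and one successor, linked by the previous block's hash.
chain = []
chain.append(mine({"index": 0, "transactions": [], "prev_hash": "0" * 64, "nonce": 0}))
chain.append(mine({
    "index": 1,
    "transactions": [{"from": "alice", "to": "bob", "amount": 5}],  # hypothetical transfer
    "prev_hash": block_hash(chain[0]),  # this reference is the "chain" link
    "nonce": 0,
}))

def chain_valid(chain: list) -> bool:
    """Tampering with any earlier block invalidates every later hash link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )
```

Because each block's hash covers the previous block's hash, rewriting history would require redoing the proof-of-work for every subsequent block, which is what makes the ledger expensive to falsify.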

Firstly, blockchain disputes run up against jurisdictional issues.  The ubiquitous and decentralized nature of the blockchain requires careful consideration of the relevant contractual doctrines.  Applying the rules of every jurisdiction in which a node transacts would pose two problems: (1) the location of the transaction in question would be incredibly difficult to pinpoint; and (2) requiring compliance with every potential location’s rules would be overwhelmingly unwieldy.  Therefore, choosing a governing law for the entire network is essential to ensure certainty.

This week’s article explores the European Union’s brewing copyright law and its possible effects on the internet.  Proponents intend for the law to modernize copyright law and suit it to the digital age.  Critics say the law will make the internet substantially less free.  Today we discuss the Directive on Copyright in the Digital Single Market, and more specifically, three of its most recently approved provisions that could pose problems for internet freedom: its right for press publishers, its filtering obligations, and its text-and-data-mining stipulations.

The law’s right for press publishers would allow news companies to collect compensation when their stories are shared on social media platforms.  Known as the “link tax,” it would require platforms to purchase a license to post current-events information coming from news institutions.  Current copyright law already protects journalistic articles as literary works; republishers must ask permission to use such content.  The proposed right, however, effectively expands this protection to data and facts that have already been published.  Whereas only creative descriptions or puns in headlines are now protected, mere non-creative facts could be too; this would effectively hold information for ransom.  The purpose of copyright law is to grant a limited monopoly over specific, original creative expression, not the ideas or facts underlying it.  To extend the law to envelop whole ideas or factual content is nonsensical, and stymies the very processes copyright is meant to assist.  Rather than foster innovation by protecting its fruit, the law would chill it by stealing its raw material.  It would obstruct citizens from running businesses and from creating original products using factual information.  In a region without the First Amendment, there is cause for concern.

The law’s filtering provision would require all website hosting providers to use filtering software that checks content against a database of copyrighted material.  As the law stands, platforms such as YouTube, Facebook, and Twitter are not liable for the copyright infringement of their users, as long as they take the infringing content down once notified of it.  The users who post it, however, remain liable to authors or authorship-rights holders.  The current law attempts a balance between honoring the investment of creative authors and promoting innovation through the spread of information.  The “notice and takedown” process allows rights holders to notify the platform, requires that the platform take action only once it is told, and reminds users that they may ultimately be held accountable for infringement; this spreads liability out somewhat evenly.  The proposed version would subject this process to automation.  It would nominally place the majority of liability on platforms by forcing them to monitor content proactively.  In practice, however, users and their speech would feel the brunt, because platforms would respond with much stricter guidelines.  The arbiter of all this would be a machine, checking content against a copyright database that would include factual material.  Moreover, the necessary software does not yet exist: permitted uses of copyrighted content, such as parody or criticism, would be at risk because artificial intelligence cannot distinguish them from infringement.  This imperils important content such as university lectures, for example.
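To illustrate why an automated filter struggles with permitted uses, consider this toy upload filter in Python. Everything here is hypothetical: the protected headline, the word-overlap (Jaccard) measure, and the 0.5 threshold are invented for illustration, and real filters use far more sophisticated audio, video, and text fingerprinting. The core problem survives the simplification: the filter scores similarity, so a one-word parody is blocked exactly like a verbatim copy, because no similarity score encodes purpose, commentary, or criticism.

```python
def tokens(text: str) -> set:
    """Lowercased word set; a crude normalization step."""
    return set(text.lower().split())

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between word sets, ranging from 0.0 to 1.0."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

# Hypothetical database of protected text (a single invented headline).
PROTECTED_WORKS = ["our exclusive report on the merger was published today"]

def filter_decision(upload: str, threshold: float = 0.5) -> str:
    """Block any upload sufficiently similar to a protected work.
    Note what is missing: intent (parody, criticism, quotation) is never consulted."""
    if any(similarity(upload, work) >= threshold for work in PROTECTED_WORKS):
        return "blocked"
    return "allowed"
```

A verbatim copy scores 1.0 and is blocked; a parody that swaps a single word still scores roughly 0.8 and is blocked too; only wholly unrelated text passes. Raising the threshold to spare parody would also wave through close copies, which is precisely the dilemma the directive would hand to software.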

In the accelerating information frenzy of the modern world, the specter of hacking has become more threatening as technology progresses.  Information is more accessible, and more vulnerable, especially when it is valuable.  Public and private institutions rely heavily on electronic communications and storage, which raises the stakes of a transgression.  Currently, there are legal barricades and consequences for accessing or exploiting another individual’s digital information without permission, but most are defensive, and some are largely ineffective.  The need for hacking countermeasures has been raised and debated, but not satisfied.  International cooperation has helped, but is ultimately undergirded by political motive rather than principle.  To a degree, the law remains irresolute as to how best to combat online hacking and similar misconduct.

The federal government has exacted large punishments for hacking computer systems without authorization.  It defines “hacking” as accessing a computer without authorization or exceeding one’s authorized access; obtaining information that the United States government determines to be classified for reasons relating to national defense or foreign relations; willfully communicating or attempting to communicate that information to any foreign nation; or willfully retaining the information and failing to deliver it to the officer or employee of the United States entitled to receive it.  It can be punished as a misdemeanor or a felony depending on the circumstances, resulting in up to one year in prison and a $100,000 fine or up to ten years and $250,000, respectively.

Hacking private companies or individuals can yield similar consequences.  Private companies are no strangers to cyberattacks.  In recent years, though, the scope of offense has broadened from companies contracted with the government or armed forces to victims as diverse as movie studios and financial institutions.  As it stands, businesses have limited avenues to justice.  They may monitor, take defensive action, and fix whatever damage they incur on their own.  A recently drafted Congressional bill aims to allow businesses to “hack back” legally.  This can mean anything from simply tracing an attack, to identifying the attacker, to actually damaging the attacker’s devices.  However, the bill in its current form is discouragingly vague, and a company’s misstep could risk violating the same laws that were meant to protect it.  As a result, companies may be unwilling to take that risk.  Another criticism of the bill is that it does little to protect innocent third parties whose systems might simply have been hijacked in a hacker’s scheme.  This concern is exacerbated by the vagueness of the bill’s language allowing retaliation against “persistent unauthorized intrusion.”

On April 10, 2018, Mark Zuckerberg, founder and chief executive of Facebook, took a chair beneath an array of Senators to answer for the uneasiness his company’s behavior had been giving the public.  The testimony comprised a broad variety of concerns – from user privacy to election meddling, to misinformation and an alleged bias in combatting it. The latter concern has fascinating legal implications we will discuss today.

More pointedly, allegations that the large social media companies’ community guidelines have been enforced selectively have sparked a public controversy.  The accounts of some particularly controversial speakers, for better or worse, have been shut down, and others report that the exposure their content receives has suddenly dwindled.  Pundits, for the most part on the right wing, have strongly condemned the companies, and the ensuing arguments tend to hit all the philosophical tenets of the classical debate over free speech.

The First Amendment does not ensure anyone’s place on a private platform; it only restricts the government from discriminating with regard to speech, including, but not limited to, hate speech.  For the most part, it is left to market pressures to correct any perceived bias or wrongdoing on the part of the social media companies.  There are other areas of the law, however, that social media companies have some potential to run afoul of.  Critics and commentators have raised both antitrust law and publishing law issues.  There is debate over the likelihood that companies like Facebook infringe either, but the potential does exist.

Do you monitor what personal information companies access and store when you visit a website?  Do you wish you had more ability to know what companies do with such data?  In 2018, user data privacy rights have become a major topic of discussion.  Starting with Europe’s enactment of the General Data Protection Regulation earlier in the year, followed by California’s passage of the Consumer Privacy Act, we have seen many changes in the online legal world.  The trend continues, with internet giants now lobbying for a federal regulatory scheme, which would reduce the number of laws they must comply with if each state follows California and enacts its own user privacy legislation.  In this blog, we will provide an overview of the recent changes.

After California passed a law this year granting consumers greater data privacy rights, there has been much backlash from technology giants.  Facebook, Google, Microsoft, and IBM are currently lobbying officials in Washington for a federal privacy law that would overrule California’s legislation.  These technology giants hope such legislation will pass through Congress, as their lobbyists would influence how the law is written, giving them discretion over their ability to use personal data and information.  Because federal law on such a matter would supersede state law, California’s user privacy law may come to naught.

According to Ernesto Falcon of the Electronic Frontier Foundation, a user rights group, the strategy of Facebook, Google, and Microsoft here is “to neuter California[‘s law] for something much weaker on the federal level.  The companies are afraid of California because it sets the bar for other states.”  As user data and information are such a key part of the social media companies’ business model – they use such information to sell advertisements – they want as much freedom as possible to collect and exploit such data.

In this article, we plan to discuss the Fifth Amendment implications of requirements to digitally identify oneself, for example by facial or thumbprint recognition.

The spread of data-encryption services has made the retrieval of information more difficult for law enforcement officials.  Over half the attempts the FBI made to unlock devices in 2017, for example, were thwarted by encryption.  As such investigatory bodies would have it, the government could simply compel a suspect to hand over the password.  Their biggest obstacle, however, remains the Fifth Amendment.

Fifth Amendment jurisprudence has come to bear on this issue in the past decade, yet remains somewhat unsettled.  Back in 1976, Fisher v. United States set a foundation for the issue.  The case involved the IRS attempting to compel the defendants to give up certain documents, which they refused on the grounds that producing them would be self-incriminating and therefore protected by the Fifth Amendment.  The Supreme Court ruled that the Fifth Amendment’s words, “[n]o person … shall be compelled in any criminal case to be a witness against himself,” only protect a suspect from having to communicate incriminating testimonial evidence, and that the production of that case’s physical evidence would not compel the person to “restate, repeat or affirm the truth of it.”  The Court later fleshed out the term “testimonial” in a case regarding the subpoena of bank records, saying that it is “[t]he contents of an individual’s mind [that] fall squarely within the protection of the Fifth Amendment.”  Generally, the courts do not protect people from having to produce physical evidence, which is not considered “testimony” or the “contents of an individual’s mind.”

Do the courts have the ability to subpoena user identity information from Instagram?  Can a person file a lawsuit against the operators of an Instagram page for defamation?  An advertising executive was fired after being posted about on an Instagram account, Diet Madison Avenue.  The account is known for outing sexual harassment and discrimination in the advertising industry.  The fired executive, Ralph Watson, is now suing Diet Madison Avenue and the people who ran it for defamation.  The lawsuit names “Doe Defendants” for the people who ran the page and currently remain anonymous.

Watson claims that Diet Madison Avenue made false allegations about him that cost him his job.  Several other agencies have fired men whose names appeared on the Instagram account.  Since being fired, Watson claims that he has been unable to find work.  “Trial by social media” has been used to describe the incidents.  Watson claims that he has never sexually harassed anyone, but says that his career and reputation were ruined overnight.  Watson hopes that the trial will bring the operators of the account into court, where they must present their evidence and defend their claims.

The operators of the account have said that the allegations were independently researched and confirmed before any names were posted on the account.  The specific post in question called Watson an “unrepentant serial predator” who “targeted and groomed women,” among other things.  Watson also filed a lawsuit against the advertising firm he worked for, alleging defamation, wrongful termination, and breach of contract.

Is a warrant required for law enforcement to access a suspect’s location information generated by the suspect’s cell phone?  Would obtaining such data violate a person’s Fourth Amendment rights?  In this blog, we will discuss whether a warrant is required for law enforcement to access a user’s location information from cell phone service providers.  Because geolocation information is updated almost continually as users interact with apps and send and receive messages, such location information is almost always available.  But just as constantly available are Fourth Amendment rights, namely the right to be free from unreasonable searches and seizures.

In Carpenter v. United States, the Supreme Court analyzed this Fourth Amendment issue.  In order to obtain a search warrant, police typically must submit a warrant application to an independent judge or magistrate.  In the application, the police must outline facts leading the judicial officer to believe that the suspect is engaging in criminal behavior.  This showing of likely criminal behavior is known as “probable cause” and is required for police to conduct a search of a place or person.

There is an applicable federal law.  Section 2703(d) of the Stored Communications Act, which protects privacy information and the stored content of electronics, allows an exemption to the typical warrant required for a search.  Orders made under 2703(d) can compel the production of certain stored communications or non-content information if “specific and articulable facts show that there are reasonable grounds to believe that the contents of a wire or electronic communication, or the records or other information sought, are relevant and material to an ongoing criminal investigation.” This is closer to what is known as the “reasonable suspicion” standard than “probable cause.”  Reasonable suspicion comes into play when police pull over a vehicle, for example, or conduct a stop and frisk of a suspicious person who they believe may be concealing a weapon.  Reasonable suspicion is a much lower bar to meet than probable cause.

For this week’s blog post, we will discuss a recently decided copyright law case in which a foreign broadcaster was held liable for violating the Copyright Act when it allowed United States users to access copyrighted material through a video-on-demand website.  The specific case is Spanski Enterprises, Inc. v. Telewizja Polska, which was decided on appeal by the United States Court of Appeals for the District of Columbia Circuit.

In this case, Telewizja Polska, the foreign broadcaster, uploaded copyrighted television episodes to its website, which then showed the episodes on computer screens in the United States for visitors to view.  The court held that in doing this, Telewizja Polska was in violation of the Copyright Act.

Taking a step back, we will briefly discuss what makes a work copyrightable.  In order for a work of authorship to be copyrightable, the work must: (1) be fixed in a tangible medium of expression; (2) be original, i.e., independently created rather than copied from another work; and (3) display some minimal level of creativity (typically just slightly more than a trivial amount).

For this week’s blog post, we will continue with a discussion of another recently decided Supreme Court case.  Specifically, we will cover United States v. Microsoft Corporation and talk about the ramifications the Court’s decision has for the world of internet technology.

This case involves user data privacy rights and the ability of US-based technology companies to refuse to comply with federal warrants when user data is stored overseas.  The case had to do with the extraterritorial (outside of the United States) application of the Stored Communications Act (SCA), and whether warrants issued under the SCA could reach new internet technology such as cloud storage and data centers.

In 2013, FBI agents obtained a warrant requiring Microsoft to disclose emails and information related to the account of a customer believed to be involved in drug trafficking.  Microsoft attempted to quash the warrant, claiming that all of the customer’s emails and information were stored in Microsoft data centers in Dublin, Ireland.  The court held Microsoft in civil contempt for refusing to give agents the emails, but this decision was reversed by the Second Circuit.  The Second Circuit held that requiring Microsoft to hand over emails stored overseas would be outside the realm of permissible extraterritorial application of the Stored Communications Act (18 U.S.C. § 2703).