Today, we continue our focus on blockchain technology and surrounding rules and regulations. It is a nascent area and the subject of much debate. Last week, we discussed the jurisdictional issues this technology poses, as well as questions of liability, contract, and intellectual property. Today, we narrow our focus to one particular area of the blockchain realm: Asset-backed ventures.

The blockchain is, by design, an open-ended and malleable tool.  One of its most useful applications is to provide liquidity and capital where market inefficiencies previously precluded them.  This makes the biggest difference for small and mid-market ventures.  Crowd-sourced funding for job-producing smaller companies could compound wealth in the developed world in the decades to come, and even more starkly in Africa and South Asia.

An emerging innovation in the blockchain space is to hinge a digital coin’s value on an underlying asset – i.e., the area of asset-backed tokens.  Variations of this idea include a coinage system based on the productivity of oil and gas ventures.  Investors purchase coins to fund the venture in its early stages and throughout.  Once the venture starts producing and oil is sold, the investor has a right to exchange his or her coin for the market price of that asset.  In the ideal case, the supply of oil rises, the price declines, communities prosper, and investors earn a healthy return.  If the venture fails to deliver, a mechanism within the coin-value determination adjusts for the investor’s poor judgment and devalues the coin.  It therefore behooves coin holders to choose fruitful projects.
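As a purely hypothetical illustration of the redemption-and-devaluation mechanics described above, the logic can be sketched in a few lines of Python.  Every name, price, and adjustment rule here is invented for the sketch; no actual token works exactly this way.

```python
from dataclasses import dataclass

@dataclass
class AssetBackedCoin:
    """Toy model of a coin redeemable at the market price of an underlying
    asset (e.g., a barrel of oil), devalued when the venture underperforms."""
    backing_price: float           # current market price of the underlying asset
    performance_factor: float = 1.0  # 1.0 = venture on target; < 1.0 = underperforming

    def redemption_value(self) -> float:
        # Redemption tracks the asset price, scaled down if the venture
        # disappoints; the cap at 1.0 means underperformance only ever
        # devalues the coin, never inflates it.
        return self.backing_price * min(self.performance_factor, 1.0)

coin = AssetBackedCoin(backing_price=100.0)
print(coin.redemption_value())   # full market price while the venture is on target

# The venture underperforms: the adjustment mechanism devalues the coin.
coin.performance_factor = 0.75
print(coin.redemption_value())   # redemption value drops proportionally
```

The point of the sketch is only that the investor’s return is mechanically tied to both the asset’s market price and the venture’s actual output, which is what creates the incentive to fund fruitful projects.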

This week our focus shifts to a topic buzzing about the modern world.  We have written on numerous occasions about cryptocurrency, but we have not discussed more pointedly the technological mechanism that yields it – i.e., the blockchain.  A complex, decentralized technology with the power, accuracy, and security to replace traditional financial systems, blockchain is the mechanism that gives cryptocurrencies their function and value.  Its international scope can pose jurisdictional questions, its decentralized nature can puzzle tort plaintiffs, and the enforceability of “smart contracts” is an issue of first impression for most courts.  Additionally, lines must be drawn with regard to intellectual property.

To provide a brief background, the blockchain is the structure by which value is produced and conserved in cryptocurrencies.  Through a system of checks and balances, “miners,” rewarded for solving cryptographic puzzles, validate transactions by mathematically verifying them against the prior transaction history of the asset in question.  Validated transactions are consolidated into a “block” once the nodes in a given network corroborate their veracity.  Miners then compete to solve a highly complex puzzle derived from the block; the winner receives a coin reward, and the block is appended to the “chain.”  An innovation has thus emerged onto which legal institutions must overlay their concepts.
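The chaining and mining steps described above can be sketched as a toy model in Python.  This is a deliberate simplification: real networks add Merkle trees, distributed consensus, and adjustable difficulty, and the difficulty target here is invented for the example.

```python
import hashlib
import json

DIFFICULTY = 4  # toy target: block hash must start with this many zeros

def hash_block(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine(block: dict) -> dict:
    """Search for a nonce that makes the block's hash meet the target."""
    block = dict(block, nonce=0)
    while not hash_block(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    return block

# Genesis block, then a block of validated transactions chained to it.
genesis = mine({"index": 0, "prev_hash": "0" * 64, "transactions": []})
block1 = mine({
    "index": 1,
    # Each block embeds the hash of its predecessor, so altering any earlier
    # transaction invalidates every later block -- this is what "conserves value."
    "prev_hash": hash_block(genesis),
    "transactions": [{"from": "alice", "to": "bob", "amount": 5}],
})
```

The key legal-facing property the sketch shows is immutability: because `block1` commits to the hash of `genesis`, no single participant can quietly rewrite history, which is precisely what makes questions of liability and jurisdiction so novel.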

First, blockchain disputes run up against jurisdictional issues.  The ubiquitous and decentralized nature of the blockchain requires careful consideration of the relevant contractual doctrines.  Applying the rules of every jurisdiction in which a node transacts would pose two problems: (1) the location of the transaction in question would be incredibly difficult to pinpoint; and (2) requiring compliance with every single potential location’s rules would be overwhelmingly unwieldy.  Therefore, choosing a governing law for the entire network is essential to ensure certainty.

This week’s article explores the European Union’s brewing copyright law and its possible effects on the internet.  Proponents intend for the law to modernize copyright law and suit it to the digital age.  Critics say the law will make the internet substantially less free.  Today we discuss the Directive on Copyright in the Digital Single Market, and more specifically, three of its recently approved provisions that could pose problems for internet freedom: its right for press publishers, its filtering obligations, and its text-and-data-mining stipulations.

The law’s right for press publishers would allow news companies to collect compensation when their stories are shared on social media platforms.  Known as the “link tax,” it would require platforms to purchase a license to post current-events information coming from news institutions.  Current copyright law already protects journalistic articles as literary works; republishers must ask permission to use such content.  The proposed right, however, effectively expands this protection to data and facts that have already been published.  Whereas only creative descriptions or puns in headlines are now protected, mere non-creative facts could be, too; this would effectively hold information for ransom.  The purpose of copyright law is to grant a limited monopoly over specific creative works and original expression.  To extend the law to envelop entire ideas or factual content is nonsensical, and stymies the very processes copyright is meant to assist.  Rather than foster innovation by protecting its fruit, the law would chill it by stealing its raw material.  It would obstruct citizens from running businesses and from creating original products using factual information.  In a region without the First Amendment, there is cause for concern.

The law’s filtering provision would require all website hosting providers to use filtering software that checks content against a database of copyrighted material.  As the law stands, platforms such as YouTube, Facebook, and Twitter are not liable for the copyright infringement of their users, as long as they take the content down when notified of it.  The users who post it, however, are still liable to authors or authorship-rights holders.  The current law attempts a balance between honoring the investment of creative authors and promoting innovation through the spread of information.  The “notice and takedown” process allows rights holders to notify the platform, requires the platform to take action, but only once it is told, and reminds users that they may ultimately be held accountable for infringement; this spreads liability out somewhat evenly.  The proposed version would subject this process to automation.  This would nominally place the majority of liability on platforms by forcing them to monitor content proactively.  In practice, however, users and their speech would feel the brunt, because platforms would respond with much stricter guidelines.  The arbiter would be a machine, checking content against a copyright database that would include factual material.  The necessary software also does not yet exist: permitted uses of copyrighted content, such as parody or criticism, would be at risk because artificial intelligence cannot distinguish them from infringement.  This imperils important content such as university lectures, for example.
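To illustrate why critics doubt automated filtering, consider a deliberately naive fingerprint filter sketched in Python.  The database and matching rule here are invented for the illustration; real systems use fuzzier perceptual matching, which only sharpens the parody-and-criticism problem, since context is still invisible to the matcher.

```python
import hashlib

# Hypothetical copyright database: fingerprints (hashes) of protected content.
copyright_db = {hashlib.sha256(b"protected song excerpt").hexdigest()}

def upload_allowed(content: bytes) -> bool:
    """Naive filter: block any upload whose fingerprint is in the database."""
    return hashlib.sha256(content).hexdigest() not in copyright_db

# A verbatim re-upload is blocked, as intended...
print(upload_allowed(b"protected song excerpt"))    # blocked

# ...but the filter has no notion of context.  A lecture or parody quoting
# the same excerpt would be flagged just the same by a fuzzier matcher,
# while a trivially altered infringing copy slips straight through here.
print(upload_allowed(b"protected song excerpt!!"))  # allowed
```

The asymmetry in the last two lines is the core of the criticism: a purely mechanical check can be simultaneously over-inclusive toward lawful uses and under-inclusive toward determined infringers.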

In the accelerating information frenzy of the modern world, the specter of hacking has become more threatening as technology progresses.  Information is more accessible, and more vulnerable, particularly when it is valuable.  Public and private institutions rely heavily on electronic communications and storage, which raises the stakes of a transgression.  Currently, there are legal barricades and consequences for accessing or exploiting another individual’s digital information without permission, but most are defensive, and some are largely ineffective.  Legislation providing hacking countermeasures has been introduced and debated, but the need remains unsatisfied.  International cooperation has largely helped, but is ultimately undergirded by political motive rather than principle.  To a degree, the law remains irresolute as to how best to combat online hacking and similar misconduct.

The federal government has imposed severe punishments for hacking computer systems without authorization.  It defines “hacking” as accessing a computer without authorization or exceeding one’s authorized access; obtaining information that the United States government determines to be classified for reasons relating to national defense or foreign relations; willfully communicating or attempting to communicate the information to any foreign nation; or willfully retaining the information and failing to deliver it to the officer or employee of the United States entitled to receive it.  Hacking can be punished as a misdemeanor or a felony depending on the circumstances, resulting in up to one year in prison and a $100,000 fine, or up to ten years and $250,000, respectively.

Hacking private companies or individuals can yield similar consequences.  Private companies are no strangers to cyberattacks.  In recent years, though, the scope of offense has broadened from companies contracted with the government or armed forces to victims as diverse as movie studios and financial institutions.  As it stands, businesses have limited avenues to justice.  They may monitor, take defensive action, and fix whatever damage they incur on their own.  A recently drafted Congressional bill aims to allow businesses to “hack back” legally.  This can mean anything from simply tracing an attack, to identifying the attacker, to actually damaging the attacker’s devices.  However, the bill in its current form is discouragingly vague, and a company’s misstep could risk violating the same laws that were meant to protect it.  Companies may therefore be unwilling to take that risk.  Another criticism of the bill is that it does little to protect innocent third parties from retaliation where their systems might simply have been hijacked in a hacker’s scheme.  This concern is exacerbated by vagueness in the bill’s language allowing retaliation against “persistent unauthorized intrusion.”

On April 10, 2018, Mark Zuckerberg, founder and chief executive of Facebook, took a chair beneath an array of Senators to answer for the uneasiness his company’s behavior had been giving the public.  The testimony comprised a broad variety of concerns – from user privacy to election meddling, to misinformation and an alleged bias in combatting it. The latter concern has fascinating legal implications we will discuss today.

More pointedly, allegations that the large social media companies’ community guidelines have been enforced selectively have sparked a public controversy.  The accounts of some particularly controversial speakers, for better or worse, have been shut down, and others report that the volume of exposure their content receives has suddenly dwindled.  Pundits, for the most part on the right wing, have strongly condemned the companies, and the ensuing arguments tend to hit all the philosophical tenets of the classical debate over free speech.

The First Amendment does not ensure anyone’s place on a private platform; it only restricts the government from discriminating with regard to speech, including, but not limited to, hate speech.  For the most part, it is left to market pressures to correct any perceived bias or wrongdoing on the part of the social media companies.  There are other areas of the law, however, that social media companies have some potential to run afoul of.  Critics and commentators have raised both antitrust law and publishing law issues.  Although there is debate over the likelihood that companies like Facebook run afoul of either, the potential does exist.

Most, if not all, of our readers are familiar with e-commerce websites and related transactions.  These range from Amazon.com’s empire to other forms of e-commerce, such as iTunes subscription services or e-book purchases.  In recent news, one of China’s largest e-commerce websites is being sued in the United States for selling counterfeit and knock-off products.  Shares of the company, Pinduoduo, plummeted after news of the lawsuit was made public.  Currently, six law firms are in the process of filing class actions on behalf of investors who purchased shares of Pinduoduo.  The company went public in the United States earlier this year, raising over $1.6 billion from investors.  Pinduoduo is traded on NASDAQ under PDD, and currently has a $25 billion market cap.

Pinduoduo is known for combining online shopping with entertainment.  It was founded in September 2015 by Colin Huang, a former Google employee.  Recently, however, the company has faced an influx of negative media coverage in China, with claims that the platform sells knock-offs of major brand names.  This selling of fake goods could give investors standing to sue.  If the investors were misled, and invested because of the false information, they will have a cause of action under the federal securities laws.  Executives of companies, and insiders who communicate information to investors about a company, have a duty not to make any misleading or materially false statements about the company.  This includes information about the financial health of the business.

Started only three years ago, Pinduoduo had 295 million active users and 4.3 billion total orders in 2017.  China is the largest online retail market, home to other e-commerce names such as Alibaba and JD.com.  Pinduoduo sells groceries, electronics, clothing, and household items, among other things.  While Amazon may be the number one e-commerce website in the United States, Pinduoduo is the second largest e-commerce website in China, behind Alibaba.

Do you monitor what personal information companies access and store when you visit a website?  Do you wish you had more ability to know what companies do with such data?  In 2018, user data privacy rights have become a major topic of discussion.  Starting with Europe’s enactment of the General Data Protection Regulation earlier in the year, and California’s passing of the Consumer Privacy Act, we have seen many changes in the online legal world.  The trend continues, with internet giants now lobbying for a federal regulatory scheme, which would reduce the number of laws they must comply with if each state follows California and enacts its own user privacy legislation.  In this blog, we will provide an overview of the recent changes.

After California passed a law this year granting consumers greater data privacy rights, there has been much backlash from technology giants.  Facebook, Google, Microsoft, and IBM are currently lobbying officials in Washington for a federal privacy law that would override California’s legislation.  These technology giants are hoping for such legislation to be passed through Congress, as their lobbyists would influence how the law is written, giving the companies discretion over their ability to use personal data and information.  Because federal law on such a matter would supersede state law, California’s user privacy law may come to naught.

According to Ernesto Falcon of Electronic Frontier Foundation, a user rights group, the strategy of Facebook, Google, and Microsoft here is “to neuter California[‘s law] for something much weaker on the federal level.  The companies are afraid of California because it sets the bar for other states.”  As user data and information is such a key part of the business model of the social media companies – who use such information to sell advertisements – they want as much freedom as possible to collect and exploit such data.

In this article, we plan to discuss the Fifth Amendment implications of requirements to digitally identify oneself, for example by facial or thumbprint recognition.

The spread of data-encryption services has made the retrieval of information more difficult for law enforcement officials.  Over half the attempts the FBI made to unlock devices in 2017, for example, were thwarted by encryption.  As such investigatory bodies would have it, the government could simply compel a suspect to hand over the password.  Their biggest obstacle, however, remains the Fifth Amendment.

Fifth Amendment jurisprudence has come to bear on this issue in the past decade, yet remains somewhat unsettled.  Back in 1976, Fisher v. United States set a foundation for the issue.  The case involved the IRS attempting to compel the defendants to give up certain documents, which they refused to do on the grounds that they would be incriminating themselves and were protected by the Fifth Amendment.  The Supreme Court ruled that the Fifth Amendment’s words, “[n]o person … shall be compelled in any criminal case to be a witness against himself,” only protect a suspect from having to communicate incriminating testimonial evidence, and that the production of that case’s physical evidence would not compel the person to “restate, repeat or affirm the truth of it.”  The Court later fleshed out the term “testimonial” in a case regarding the subpoena of bank records, saying that it is “[t]he contents of an individual’s mind [that] fall squarely within the protection of the Fifth Amendment.”  Generally, the courts do not protect people from having to produce physical evidence, which is not considered “testimony” or the “contents of an individual’s mind.”

Do the courts have the ability to subpoena user identity information from Instagram?  Can a person file a lawsuit against the operators of an Instagram page for defamation?  An advertising executive was fired after being posted about on an Instagram account, Diet Madison Avenue.  The account is known for outing sexual harassment and discrimination in the advertising industry.  The fired executive, Ralph Watson, is now suing Diet Madison Avenue and the people who ran it for defamation.  The lawsuit names “Doe Defendants” for the people who ran the page and currently remain anonymous.

Watson claims that Diet Madison Avenue made false allegations about him that cost him his job.  Several other agencies have fired men whose names appeared on the Instagram account.  Since being fired, Watson claims that he has been unable to find work.  “Trial by social media” has been used to describe the incidents.  Watson claims that he has never sexually harassed anyone, but says that his career and reputation were ruined overnight.  Watson hopes that the trial will bring the operators of the account into court, where they must present their evidence and defend their claims.

The operators of the account have said that the allegations were independently researched and confirmed before any names were posted on the account.  The specific post in question called Watson an “unrepentant serial predator” who “targeted and groomed women,” among other things.  Watson also filed a lawsuit against the advertising firm he worked for, alleging defamation, wrongful termination, and breach of contract.

Is a warrant required for law enforcement to access a suspect’s location information generated by the suspect’s cell phone?  Would obtaining such data violate a person’s Fourth Amendment rights?  In this blog, we will discuss whether a warrant is required for law enforcement to obtain a user’s location information from cell phone service providers.  As geolocation information is updated almost continually when users interact with apps and send and receive messages, such location information is almost always available.  But just as constantly present are Fourth Amendment rights, namely the right to be free from unreasonable searches and seizures.

In Carpenter v. United States, the Supreme Court analyzed this Fourth Amendment issue.  In order to obtain a search warrant, police typically must submit a warrant application to an independent judge or magistrate.  In the application, the police must outline facts leading the judicial officer to believe that the suspect is engaging in criminal behavior.  This showing of likely criminal behavior is known as “probable cause” and is required for police to conduct a search of a place or person.

There is an applicable federal law.  Section 2703(d) of the Stored Communications Act, which protects privacy information and the stored content of electronics, allows an exemption to the typical warrant required for a search.  Orders made under 2703(d) can compel the production of certain stored communications or non-content information if “specific and articulable facts show that there are reasonable grounds to believe that the contents of a wire or electronic communication, or the records or other information sought, are relevant and material to an ongoing criminal investigation.” This is closer to what is known as the “reasonable suspicion” standard than “probable cause.”  Reasonable suspicion comes into play when police pull over a vehicle, for example, or conduct a stop and frisk of a suspicious person who they believe may be concealing a weapon.  Reasonable suspicion is a much lower bar to meet than probable cause.