Articles Posted in Technology

Electronic data has been growing in volume for several decades. The sheer number of electronic files (e.g., emails, pictures, videos) has overwhelmed local and remote databases, and cloud storage facilities have been built to hold this information for us. Cloud storage providers have certain obligations toward their customers, including the secure storage of electronic files using industry-approved protocols. The rules for proper storage should not change based on the particular industry. In fact, cloud storage providers are expected to apply similar protection measures – e.g., encryption – to all electronic files to ensure safety.

Encryption is a technique that scrambles files and hides them from plain sight. The encrypted data, called “ciphertext,” can only be decrypted with the right key. There are two types of encryption: symmetric and asymmetric. Symmetric encryption uses a single key for both encryption and decryption. Asymmetric encryption uses two different keys – i.e., a public key and a private key. The public key can be shared with the general public, but the private key remains a secret and is accessible only to the right individual. There are various encryption algorithms, such as AES, Triple DES, RSA, and Blowfish.
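The asymmetric model described above can be illustrated with a toy RSA calculation. This is only a sketch using deliberately tiny primes and nothing beyond the Python standard library; real systems rely on vetted cryptographic libraries and keys of 2048 bits or more.

```python
# Toy RSA sketch (illustrative only -- never use small primes like
# these in practice; production systems use vetted libraries).
p, q = 61, 53            # two secret primes
n = p * q                # modulus, shared by both keys (3233)
phi = (p - 1) * (q - 1)  # Euler's totient (3120)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse of e)

def encrypt(m, pub):     # anyone holding the public key can encrypt
    exp, mod = pub
    return pow(m, exp, mod)

def decrypt(c, priv):    # only the private-key holder can decrypt
    exp, mod = priv
    return pow(c, exp, mod)

message = 65
ciphertext = encrypt(message, (e, n))
print(decrypt(ciphertext, (d, n)))  # 65 -- the original message
```

Note that the public pair `(e, n)` can be published freely; recovering `d` requires factoring `n`, which is what makes the scheme secure at realistic key sizes.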

Electronic data retention includes collecting, storing, and managing information. Private and public organizations should have rules and procedures that define how electronic information is located, identified, and stored. These obligations come from government regulations, international standards, industry regulations, and internal policies. Government regulations are set by state or federal agencies such as the Federal Trade Commission and the Internal Revenue Service. International standards are set by the International Organization for Standardization – e.g., ISO/IEC 27040, ISO 9001, and ISO 17068:2017. Industry and privacy frameworks include PCI-DSS, the GDPR, and the CCPA. Finally, internal policies include data version controls and employee record retention schedules.
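An internal retention policy of the kind mentioned above is often expressed as a simple schedule mapping record categories to retention periods. The categories and periods below are hypothetical illustrations, not legal advice:

```python
# Hypothetical internal retention schedule: record categories mapped
# to retention periods in years. Values are illustrative only.
from datetime import date

RETENTION_YEARS = {
    "tax_records": 7,        # example period, not a legal requirement
    "employee_records": 4,
    "email": 3,
}

def eligible_for_disposal(category: str, created: date, today: date) -> bool:
    """Return True once a record has outlived its retention period."""
    years = RETENTION_YEARS[category]
    return (today - created).days > years * 365

print(eligible_for_disposal("email", date(2018, 1, 1), date(2023, 1, 1)))  # True
```

A schedule like this makes disposal decisions auditable: each deletion can be traced back to a written policy rather than an ad hoc judgment.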

Data disposal is a key process in a legal entity’s policies and procedures for managing personal and confidential information. In general, private and public entities store data on their servers. This information may include financial and health records that should not fall into the wrong hands, so there must be a proper procedure for destroying and disposing of that information using industry-approved methods.

The Federal Trade Commission has implemented a data disposal rule in relation to consumer reports and records to prevent unauthorized access to or use of that information. In California, several statutes have been promulgated to address this issue. For example, California Civil Code Sections 1798.81, 1798.81.5, and 1798.84 are applicable. In fact, Civil Code 1798.81 states as follows: “A business shall take all reasonable steps to dispose, or arrange for the disposal, of customer records within its custody or control containing personal information when the records are no longer to be retained by the business by (a) shredding, (b) erasing, or (c) otherwise modifying the personal information in those records to make it unreadable or undecipherable through any means.” Therefore, there are standards to follow and implement to avoid unnecessary complications. The state legislature has encouraged the implementation of “reasonable security” for personal information under Civil Code 1798.81.5. Also, Civil Code 1798.84 outlines the legal remedies which include initiating a civil action.
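The “erasing” option in Civil Code 1798.81 can be sketched in code as overwriting a record file before deleting it. This is only an illustration: on SSDs and journaling filesystems a single overwrite does not guarantee destruction, and real disposal programs follow standards such as NIST SP 800-88 or use certified vendors.

```python
# Illustrative sketch of erasing a customer-record file by overwriting
# it before deletion. NOT a guarantee of destruction on modern storage
# (SSD wear-leveling and filesystem journaling can retain copies).
import os

def overwrite_and_delete(path: str) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # replace contents with random bytes
        f.flush()
        os.fsync(f.fileno())        # force the write to disk
    os.remove(path)                 # then unlink the file
```

The point of the sketch is the two-step structure the statute contemplates: render the personal information unreadable, then dispose of the record.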

The proper retention of emails is paramount, especially if the electronic messages include private, confidential, or proprietary information. For example, “email archiving” is one method of retaining electronic messages, especially when litigation is possible. The emails should be backed up in a searchable format for practical reasons. Electronic discovery allows the parties to request and obtain electronic documents during litigation. In most cases, the electronic discovery process is time-consuming and complicated, especially when a large volume of data is involved in the lawsuit. More importantly, the failure to comply with electronic discovery requests may result in sanctions.
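A “searchable format” can be as simple as copying messages into a local database that can be queried during discovery. The sketch below is hypothetical – the table and field names are illustrative, and production archiving systems add deduplication, legal-hold flags, and full-text indexing:

```python
# Hypothetical sketch of a searchable email archive using SQLite.
# Table and column names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE archive
                (sender TEXT, subject TEXT, body TEXT, sent_date TEXT)""")

def archive_email(sender, subject, body, sent_date):
    conn.execute("INSERT INTO archive VALUES (?, ?, ?, ?)",
                 (sender, subject, body, sent_date))

def search(term):
    # case-insensitive substring match over subject and body
    like = f"%{term}%"
    return conn.execute(
        "SELECT sender, subject FROM archive "
        "WHERE subject LIKE ? OR body LIKE ?", (like, like)).fetchall()

archive_email("a@example.com", "Q3 contract", "draft attached", "2023-01-05")
archive_email("b@example.com", "Lunch", "noon?", "2023-01-06")
print(search("contract"))  # [('a@example.com', 'Q3 contract')]
```

Even this minimal structure shows why archiving matters in litigation: a keyword request can be answered in seconds instead of by manually reviewing mailboxes.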

There are no mandatory data retention laws in the United States of America. See https://www.eff.org/issues/mandatory-data-retention; Cf. Anne Cheung & Rolf H. Weber, Internet Governance and the Responsibility of Internet Service Providers, 26 Wis. Int’l L.J. 403 (2008); Christopher Soghoian, An End to Privacy Theater: Exposing and Discouraging Corporate Disclosure of User Data to the Government, 12 Minn. J.L. Sci. & Tech. 191, 209-214 (noting that some ISPs in Sweden have enacted zero data retention policies in response to customer demands, but none of the major American ISPs or telecommunications carriers have made such enactments). There is a real possibility that service providers will delete the relevant data from their servers in the near future. So, if the plaintiff or petitioner fails to take timely action, those servers may no longer yield the requested basic subscriber information.

In addition, from an international perspective, organizations that are subject to the General Data Protection Regulation (“GDPR”) should know its requirements, which include that personal data be “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.” It’s important to note that some states, such as California and Virginia, have promulgated similar statutes on this topic. The California Privacy Rights Act (“CPRA”) and Virginia’s Consumer Data Protection Act (“CDPA”) contain the same or similar provisions in this respect.

The courts have recognized that, absent a court-ordered subpoena, many ISPs that qualify as “cable operators” for purposes of state or federal law (e.g., 47 U.S.C. § 522) are effectively prohibited from disclosing the identities of putative defendants to a plaintiff. Digital Sin, Inc. v. Does 1-176 (S.D.N.Y. 2012) 279 F.R.D. 229. Thus, internet service providers should comply with a subpoena pursuant to the rules. Plaintiffs can issue subpoenas to request basic subscriber information from the service provider, which yields the identifiable information. Plaintiffs should utilize any and all options to resolve a discovery dispute without judicial intervention. However, if the service provider fails or refuses to comply with the subpoena, then the plaintiff must seek a court order to obtain the necessary information (i.e., basic subscriber information) to identify the anonymous defendants. Our law firm regularly conducts investigations to prove a specific account was used to access our client’s electronic devices, email accounts, or online storage services.

Neurolaws and privacy rights are still in their developmental stages. Neurological advances have allowed scientists to connect electrodes to the brain for analytical procedures. These electrodes can be attached in a non-invasive manner to record brain data. This brain data can then be analyzed to help patients with brain disorders such as epilepsy, depression, Parkinson’s disease, or Alzheimer’s disease. Moreover, a person’s brain data may be analyzed to assess truthfulness and the existence of intent.

Neuroscientists have been able to use advanced non-invasive techniques to observe and analyze cerebral neurochemical changes in the human brain. They have access to several technologies, including PET, SPECT, MRI, fMRI, and EEG. In fact, functional MRI (fMRI) is able to measure the brain’s activity under resting and activated conditions. It can be used to identify, investigate, and monitor brain tumors, congenital anatomical abnormalities, trauma, strokes, and chronic nervous system disorders (e.g., multiple sclerosis).

Therefore, this new technology carries a potential for abuse, which is why legal scholars are concerned about privacy rights. The right of privacy should be protected under state and federal rules, including, but not limited to, the Health Insurance Portability and Accountability Act (“HIPAA”), which was passed to address medical privacy concerns. Scholars have argued that voluntary informed consent must be granted by the individual before brain information is used. In other words, this type of confidential medical information cannot be used without the person’s knowledge and permission. The Bill of Rights grants citizens a right of privacy under certain terms and conditions. In fact, the Fourth Amendment protects privacy rights against unreasonable searches and seizures by the government. Also, every state has promulgated similar privacy laws, which can be stricter than their federal counterparts. However, the question remains whether our thoughts belong to us.

According to BioMed Central, the four proposed laws of neurotechnology are as follows: (1) the right to cognitive liberty; (2) the right to mental privacy; (3) the right to mental integrity; and (4) the right to psychological continuity. We’ve discussed some of the legal and ethical issues related to neurotechnology laws in previous articles. Today, our plan is to discuss neurolaws and evaluate the legal and ethical issues.

What are neuroscientists doing at this time?

Neurotechnology is on the verge of expansion, especially since there is growing interest in the topic from the medical and technology sectors. Neuroscientists have long considered the possibility of connecting electronic devices such as electrodes to the brain and analyzing the information. Now, this has become tangible due to advances in science and information technology that allow measurement of the brain’s structure, neural activity and connectivity, molecular composition, and genomic variation. These abilities have been made easier by exponential advances in computational power, artificial intelligence, machine learning, and the development of sophisticated databases. Neurotechnology may be used to predict a person’s dangerousness and probability of recidivism, evaluate intent, evaluate competence to stand trial, reveal biological mitigating factors that could explain criminal behavior, distinguish chronic pain from malingering, help regain lost memory, and differentiate between true and false memories.

Neurotechnology device manufacturers should take legal and ethical issues into consideration when implanting microchips into a patient’s body. There are two methods being used at this time. First is the “non-invasive” method, where electrodes are placed on the scalp’s surface (e.g., as electrode caps) to pick up electrical fields from the brain; the electrodes do not penetrate the patient’s body. Second is the “invasive” method, where electrodes are placed inside the brain’s tissue, which can be used to diagnose neurological diseases such as epilepsy.

Neurotechnology raises important legal issues because it connects the human brain to an external device such as a computer. For instance, one question is whether the external device changes the brain’s activity, and if so, what the potential consequences could be. Artificial intelligence will inevitably change the legal framework in the near future, especially as it is used within the human body. So, issues of privacy and data security will always arise.

What are the major concerns?

We can all agree that the brain is one of the most important organs in our bodies. The human brain is in charge of biological and neurological processes such as memory, speech, perception, sleep, and emotion.

What is neurotechnology?

Neurotechnology is a scientific field that connects electronic devices with the nervous system. Neurotechnology poses interesting and complex ethical and legal issues because it can be used to create a so-called “interface” between the brain and computers. Neuralink is an example of this technology, which works by inserting a microchip into the brain. Brain-computer interface technology is arguably a positive step toward merging humans and artificial intelligence. Proponents argue that this technology could allow humans to overcome diseases and disorders such as Alzheimer’s disease, Parkinson’s disease, blindness, anxiety, depression, and insomnia. Opponents argue that this technology is overly invasive and could create unanticipated complications.

The European Union has developed an artificial intelligence strategy to streamline research and the applicable rules and regulations, focusing on building a trustworthy environment. The European Union’s approach to this new technology is to implement a legal framework that addresses fundamental rights and safety risks. It plans to implement rules that address liability issues and to revise the sectoral safety legislation. The new framework aims to give developers, deployers, and users clarity by intervening only where existing legislation does not cover the issues.

Artificial intelligence can be used in critical infrastructures such as manufacturing and transportation. This technology can be used in education and vocational training such as preparing, taking, and scoring exams. Robotic technologies are already being used in medical products that would allow robot-assisted medical procedures. Law enforcement agencies can use this technology but they should not interfere with the general public’s fundamental rights (e.g., free speech, religious beliefs, privacy). The state and federal courts can use it for assistance in evidence comparison and evaluation. Biometric technology can also be used in conjunction with artificial intelligence.

On April 21, 2021, the European Commission published a proposed set of laws to regulate the use of artificial intelligence within its jurisdiction. The Artificial Intelligence Act adopts a risk-based approach, using a “pyramid of criticality” to assess risks. There are four risk levels: minimal risk, limited risk, high risk, and unacceptable risk. The proposed legislation establishes a new enforcement body, the European Artificial Intelligence Board (“EAIB”), which consists of national supervisors with oversight power. The Artificial Intelligence Act will have an extraterritorial effect on all providers, marketers, or distributors whose products or services reach the European Union’s market. The regulation defines artificial intelligence as a collection of software development frameworks that include machine learning, expert and logic systems, and statistical methods.
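The four-tier pyramid can be pictured as a simple lookup from use case to risk level. The tier assignments below are simplified illustrations drawn from commonly cited examples, not a statement of how the final regulation classifies any particular system:

```python
# Illustrative mapping of the proposed Act's four risk tiers.
# Example use cases are simplified and hypothetical, not legal advice.
RISK_TIERS = {
    "unacceptable": ["social scoring by governments"],
    "high":         ["remote biometric identification", "exam scoring"],
    "limited":      ["chatbots"],          # transparency duties apply
    "minimal":      ["spam filters", "video games"],
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a use case, or 'unclassified'."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"

print(risk_tier("chatbots"))  # limited
```

The design point the pyramid captures is proportionality: obligations scale with the tier, from none at the bottom to an outright ban at the top.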

Artificial intelligence (“AI”) refers to systems that imitate human intelligence to perform similar tasks, improving themselves based on the submitted or collected information. Artificial intelligence can be used in various industries such as manufacturing, automobiles, education, medicine, and financial services. It can be used to detect and defend against cybersecurity intrusions, solve technical problems, streamline production management tasks, and assess internal compliance for approved vendors. Artificial intelligence technology is affordable and can produce faster results than human workers.

The terms artificial intelligence, machine learning, neural networks, and deep learning are not interchangeable. Machine learning is a subset of artificial intelligence, and deep learning is a subset of machine learning. Neural networks form the backbone of deep learning algorithms and imitate the human brain by using specialized algorithms. There are three main types of artificial intelligence: (1) Artificial Narrow Intelligence; (2) Artificial General Intelligence; and (3) Artificial Super Intelligence. For example, chatbots and virtual assistants (e.g., Alexa, Siri) are considered artificial narrow intelligence because they are unable to incorporate human behaviors or interpret emotions, reactions, or tones.
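The building block behind the neural networks mentioned above is a single artificial neuron. The sketch below trains one neuron (a perceptron) to compute the logical AND function, using only the Python standard library:

```python
# Minimal perceptron sketch: one artificial neuron learning logical AND.
# This is the simplest building block of the neural networks that
# underlie deep learning.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]            # AND truth table
w = [0.0, 0.0]                    # weights, one per input
b = 0.0                           # bias term
lr = 0.1                          # learning rate

for _ in range(20):               # training epochs
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0  # step activation
        err = t - y
        w[0] += lr * err * x1     # nudge weights toward the target
        w[1] += lr * err * x2
        b    += lr * err

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for x1, x2 in inputs])  # [0, 0, 0, 1]
```

A deep network stacks many such neurons in layers and replaces the step function with smooth activations so the whole stack can be trained by gradient descent, but the weight-adjustment idea is the same.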

What are the potential cybersecurity issues?

Experts describe the “metaverse” as an alternate digital real-time existence offering a persistent, live, synchronous, and interoperable experience. It is where the real world meets the virtual world. It may sound like science fiction, but it is quickly becoming reality. The question is what legal issues it raises and what remedies are available now.

Intellectual Property Issues

First, can copyright licenses protect the use of a work? Copyright law protects original works as described in the applicable statutes, such as the Copyright Act. In general, it protects original works of authorship – including literary, dramatic, musical, and artistic works such as poetry, novels, movies, songs, computer software, and architecture.