This article provides an overview of recent legislation in the United States and California focused on social media regulation and protections for children, including state statutes, federal proposals, court cases, and policy debates:

  1. California’s Landmark SB 976: Protecting Our Kids from Social Media Addiction Act
  • Signed into law by Governor Newsom on September 20, 2024, California’s SB 976 seeks to curb addictive design features targeted at minors by requiring:

Artificial intelligence (AI) is transforming everything from product recommendations to customer service, search engine optimization, fraud detection, and beyond. However, with great power comes a rising wave of regulatory scrutiny. As lawmakers in the United States and abroad grapple with the risks of AI, from bias to privacy violations and misinformation, businesses using or deploying AI must understand the legal landscape. Whether you’re a tech startup building AI tools, an e-commerce platform using AI for personalization, or a search engine deploying machine learning for ranking and indexing, the regulatory ground is shifting fast, and compliance is no longer optional.

1. U.S. Federal AI Policy: A Patchwork in Progress

While the United States has not yet passed a comprehensive federal AI law, several regulatory efforts are underway:

As artificial intelligence (AI) rapidly transforms industries, from healthcare and finance to law enforcement and education, questions of risk, responsibility, and trust loom large. To address these concerns, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (AI RMF) in January 2023 — a voluntary but powerful tool designed to help organizations develop and deploy trustworthy AI systems. While the AI RMF is not a regulatory mandate, its adoption signals a growing consensus around best practices in AI governance. It provides a flexible and principle-based structure that can be used by companies, government agencies, and developers to identify and mitigate the unique risks associated with AI technologies.

What Is the NIST AI RMF?

The AI RMF is a voluntary guidance framework developed by NIST to help organizations identify, assess, manage, and minimize risks associated with artificial intelligence (AI) systems. It takes a risk-based, socio-technical approach, addressing not just technical errors or security issues, but also fairness, transparency, privacy, and societal impact. The AI RMF helps organizations (1) understand and manage AI-related risks across the lifecycle; (2) build transparency, accountability, fairness, and security into AI systems; and (3) align with global AI governance trends (e.g., the EU AI Act and the OECD AI Principles). It is sector-agnostic and technology-neutral, meaning it can be applied by any organization building or using AI, whether in healthcare, finance, education, defense, or consumer technologies.

As digital technologies continue to permeate every facet of modern life, cybersecurity and data privacy have emerged as defining legal challenges of the 21st century. From state-sponsored cyberattacks to private-sector data breaches and government surveillance, these issues demand a coherent and constitutionally grounded response. In the United States, however, the legal architecture addressing cybersecurity and data privacy remains fragmented. While various federal and state statutes address specific concerns, the constitutional foundations—particularly the Fourth Amendment—continue to serve as both a shield and a battleground in the digital era.

I. The Fourth Amendment and the Evolution of Privacy Rights

The Fourth Amendment provides that:

“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

Artificial Intelligence (AI) has rapidly transformed from a niche area of computer science into a foundational technology influencing nearly every sector of society. From predictive algorithms in healthcare and finance to autonomous vehicles and generative AI tools like ChatGPT, AI systems are reshaping how we live, work, and interact with technology. Yet with this explosive growth comes a critical challenge: how do we govern AI technologies in a way that fosters innovation while protecting human rights, privacy, and safety?

This question has sparked global efforts to create legal frameworks for AI. However, the pace of AI development often outstrips the speed of regulation, leaving governments scrambling to catch up. As AI systems become more powerful and pervasive, robust and thoughtful legal frameworks are essential to ensure that these technologies serve the public interest.

Understanding AI Technologies

California’s anti-doxing statute, codified under California Civil Code § 53.8, is designed to protect individuals from the intentional, malicious publication of their personal identifying information, a practice commonly known as “doxing” when done with the intent to cause harm, harass, or incite violence.

Assembly Bill 1979

Assembly Bill 1979, titled the “Doxing Victims Recourse Act,” sets out the relevant rules and regulations. Civil Code Section 1708.89(c) outlines the victim’s rights and states, in part, that a prevailing plaintiff who suffers harm as a result of being doxed in violation of subdivision (b) may recover any of the following:

  • Economic and noneconomic damages proximately caused by being doxed, including, but not limited to, damages for physical harm, emotional distress, or property damage;
  • Statutory damages of not less than one thousand five hundred dollars ($1,500) and not more than thirty thousand dollars ($30,000);
  • Punitive damages; and
  • Upon the court holding a properly noticed hearing, reasonable attorney’s fees and costs to the prevailing plaintiff.

Non-Fungible Tokens (NFTs) have redefined the concept of ownership in the digital world. Built on blockchain technology, NFTs represent unique digital assets that can be traded, sold, and verified through decentralized systems. They are commonly associated with digital art, collectibles, music, virtual real estate, and more.

As NFTs have gained mainstream adoption, governments and regulatory bodies around the world have begun addressing the complex legal questions they raise. Issues include intellectual property rights, taxation, securities regulation, consumer protection, and data privacy. This article provides a comprehensive overview of NFT technology and its evolving legal context across state, federal, and international jurisdictions.

What Are NFTs?

Introduction

In the digital age, the way we perceive, transfer, and assign value to assets is undergoing a dramatic transformation. One of the most significant innovations driving this shift is the Non-Fungible Token (NFT) — a type of cryptographic asset that represents ownership of a unique item or piece of content on a blockchain. Unlike cryptocurrencies such as Bitcoin or Ethereum, which are fungible (interchangeable and uniform in value), NFTs are non-fungible, meaning each token is unique and cannot be exchanged on a one-to-one basis with another NFT. While NFTs initially gained attention for digital art and collectibles, their potential is far more expansive. This article explores the underlying technology behind NFTs and how they can enhance various types of transactions.
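The fungible/non-fungible distinction described above can be illustrated with a short Python sketch. This is a toy model only, loosely mirroring the per-token ownership semantics of standards such as ERC-721; the owner names and token IDs are hypothetical.

```python
# Fungible balances: any unit is interchangeable with any other,
# so ownership is just a number per account.
fungible_balances = {"alice": 10, "bob": 5}
fungible_balances["alice"] -= 1
fungible_balances["bob"] += 1  # which particular "coin" moved is meaningless

# Non-fungible tokens: each token ID is unique and owned individually,
# so the ledger tracks an owner per token, not a numeric balance.
nft_owners = {
    1: "alice",  # e.g., a one-of-a-kind digital artwork
    2: "bob",    # e.g., a collectible with different traits
}

def transfer_nft(token_id, sender, recipient):
    """Transfer a specific token; only its current owner may do so."""
    if nft_owners.get(token_id) != sender:
        raise PermissionError("sender does not own this token")
    nft_owners[token_id] = recipient

transfer_nft(1, "alice", "bob")
```

Because each token ID maps to exactly one owner, two NFTs are never interchangeable the way two units of a cryptocurrency balance are.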

What is an NFT?

The convergence of blockchain technology and real estate is reshaping how properties are bought, sold, and managed. Traditionally, real estate transactions are lengthy, paperwork-intensive, and costly, involving multiple intermediaries like brokers, escrow agents, title companies, and banks. Blockchain offers a way to streamline and secure these transactions, while Non-Fungible Tokens (NFTs) introduce a novel method of representing property ownership. This article delves into the evolving role of blockchain and NFTs in real estate, the legal and regulatory framework, potential use cases, benefits, challenges, and what the future holds.


What Is Blockchain and How Does It Apply to Real Estate?

Blockchain is a decentralized digital ledger that records transactions across a network of computers in a tamper-proof and transparent manner. Each “block” contains a time-stamped batch of transactions, cryptographically linked to the previous one, forming a “chain.”
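The hash-linked structure described above can be sketched in a few lines of Python. This is a deliberately minimal illustration: real blockchains add consensus, digital signatures, and peer-to-peer networking, and the parcel number and party names below are hypothetical.

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Create a block: a time-stamped batch of transactions linked
    to the previous block by that block's cryptographic hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    # The block's hash covers its entire contents, so tampering with
    # the data (or with any earlier block) changes the hash and is
    # detectable by anyone re-verifying the chain.
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A tiny chain: a genesis block followed by one property transfer.
genesis = make_block([], prev_hash="0" * 64)
transfer = make_block(
    [{"parcel": "APN 123-456-789", "from": "Alice", "to": "Bob"}],
    prev_hash=genesis["hash"],
)
```

The second block commits to the first block’s hash, which is what makes retroactive edits to earlier records evident to every participant holding a copy of the ledger.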

Business Email Compromise (BEC) is a sophisticated cybercrime that targets businesses and individuals performing legitimate transfer-of-funds requests. Attackers employ tactics such as email spoofing, phishing, and social engineering to impersonate trusted entities—like executives, vendors, or legal representatives—to deceive victims into transferring money or sensitive information.

Common BEC Techniques

  • Email Spoofing: Crafting emails that appear to originate from trusted sources
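One simple spoofing red flag can be checked programmatically: a mismatch between the visible From domain and the Reply-To domain, which diverts replies to the attacker. The sketch below uses Python’s standard email parsing; the addresses are hypothetical, and a heuristic like this is no substitute for proper SPF, DKIM, and DMARC validation.

```python
from email import message_from_string
from email.utils import parseaddr

def domain_of(addr):
    """Return the domain part of an email address, lowercased."""
    _, email_addr = parseaddr(addr or "")
    return email_addr.rsplit("@", 1)[-1].lower() if "@" in email_addr else ""

def looks_spoofed(raw_message):
    """Flag a common BEC red flag: the visible From domain differs
    from the Reply-To domain, so replies go to the attacker."""
    msg = message_from_string(raw_message)
    from_dom = domain_of(msg.get("From"))
    reply_dom = domain_of(msg.get("Reply-To"))
    return bool(reply_dom) and reply_dom != from_dom

sample = (
    "From: CEO <ceo@example-corp.com>\n"
    "Reply-To: ceo@examp1e-corp.net\n"
    "Subject: Urgent wire transfer\n\n"
    "Please process the attached invoice today.\n"
)
print(looks_spoofed(sample))  # True: replies are diverted to a lookalike domain
```

Note how the Reply-To domain substitutes a digit “1” for the letter “l”, a typical lookalike-domain trick in BEC campaigns.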