
What Else Was Trending in State Technology and Innovation Proposals in 2022–2023?

Jennifer Huddleston and Gent Salihu

As discussed in prior posts, the 2022–2023 legislative session saw a surge in state legislation on kids’ online safety and data privacy. But what else have states been working on during this recent legislative session?

An overwhelming majority of states have considered actions related to TikTok, particularly on government devices and networks. With the rise in popularity of generative artificial intelligence (AI) applications, states have also rushed to sponsor laws to regulate the technology for problems that have yet to be defined. These topics are likely to remain part of state debates in upcoming legislative sessions, as well as the subject of continued debate at the federal level.

State-Level TikTok Bans

As of August, 34 states have passed legislation banning TikTok on government devices or networks. These restrictions are largely based on concerns about the national security risks posed by TikTok’s parent company ByteDance’s position in China and the Chinese National Intelligence Law, under which China can require its corporations to provide data.

In some instances, however, states have enacted bans that extend beyond government devices. Oklahoma, for example, has extended its TikTok prohibition not only to government devices but also to any contractor transacting with the state. Some extensions seem to push the boundaries of what national security concerns can justify. In Texas, TikTok restrictions were also applied to public universities and their networks, a move now being challenged on academic freedom grounds. In a similar vein, Florida outlawed TikTok on both public school devices and networks.

Perhaps most concerning from a free speech perspective, Montana enacted a TikTok ban for all citizens. Departing from the national security rationale, Montana expanded the reasoning for banning TikTok to include the conduct and mental well-being of children. A concern reflected in the Montana law’s preamble is that TikTok “directs minors to engage in dangerous activities.”

Such a proposal raises significant concerns about its impact on the First Amendment rights of TikTok’s American users, and, unsurprisingly, the law was almost immediately challenged in court. TikTok bans like the Montana law deprive Americans of a unique forum of speech that they have chosen to use for expression. TikTok fosters a unique community of creators and followers, often sparking trends and challenges, and provides specialized features for content creation that its creators prefer to those available on other platforms like Instagram Reels or YouTube Shorts.

TikTok is a forum of speech that stands apart in its function and impact. A TikTok ban at any level faces a significant hurdle: proving that it serves a compelling government interest and is implemented using the least restrictive means to achieve that interest. Ongoing processes may lead to a better understanding of whether any concerns about TikTok require government policy action, but even if they do, there are many options short of a full ban for achieving such an interest.

State-level bans raise even more concerns about realistic enforcement, even if they pass such a test. Many proposals would require the government to take troubling steps to enforce them or would be nearly impossible to enforce at a practical level. Americans in states that ban certain apps or websites may turn to virtual private networks (VPNs) to circumvent the restrictions, as demonstrated by the surge in searches for and downloads of VPNs after PornHub exited Utah over its government ID requirements. Such bans would also extend control over app stores, dictating what apps they could carry within a specific state, a decision that is typically made only at the federal level. And because many Montanans may already have the app, a ban would only prevent users who do not currently have it from obtaining it, and it could even create new risks by blocking security updates to the existing app.

TikTok has been policymakers’ focus due to the unique intersection of concerns about China’s technological progress and youth social media use. But many legislative proposals at both the state and federal levels would impact much more than just the app. Policymakers should tread carefully when considering the precedent such actions could set and ensure that any concerns are based on sound evidence, not just the targeting of a specific company.

AI Regulation Attempts by States

As the federal government grapples with what—if anything—should be done to regulate AI and Large Language Models (LLMs), a small set of states have already taken charge and pushed forward their own bills with the rationale of protecting their citizens. New York and California have proposed centralized frameworks, echoing the EU AI Act, while others like Louisiana, Montana, and Texas have targeted more specific concerns. A confusing patchwork of rules for developers, deployers, and end users of AI could be on the horizon if states take the lead.

New York, through A7501, seeks a centralized approach to regulating AI usage, with plans to establish an Office of Algorithmic Innovation that would have the power to set standards for the usage and auditing of algorithms. The New York bill resembles the centralized structure laid down by the EU AI Act and departs from the sectoral solutions that dominate debates at the federal level. Under the New York approach, creating a new institution might only add red tape and confusion rather than building on existing institutions through a sectoral approach.

While New York’s legislative proposal is still under consideration, California’s AB 331 has recently been suspended. Still, its features deserve close attention, as similar bills are likely to be sponsored in upcoming legislative sessions. California’s bill sought to expand the existing responsibilities of the Civil Rights Department to include regulating automated decision tools. Utilizing an existing body is a departure from New York’s goal of creating a new agency; however, both California and New York aimed to entrust a single agency with overseeing all deployers and users of AI.

Even with the suspension of AB 331, California may consider AI regulatory action through its existing California Privacy Protection Agency (CPPA). The CPPA has the power to draft regulations on automated tools and has become a de facto AI regulator in California.

Unlike New York and California, which have taken a broad and centralized regulatory approach to AI, Louisiana has focused on more specific and tangible use cases. Louisiana’s SB 1775, which has already been signed into law, criminalizes deepfakes involving minors and defines rights to digital image and likeness. This represents a tailored response to a particular concern about AI usage for which there is solid evidence and insight.

Montana and Texas have also adopted similarly targeted approaches. Montana’s SB 397, also signed into law, prohibits law enforcement from using existing facial recognition technology, aiming to safeguard individual liberty and prevent the perpetuation of racial bias by state authorities.

Rather than rushing to enact broad regulations for a technology that keeps transforming every day, Texas established an AI Advisory Council to study and monitor AI systems developed or used by state agencies. This approach could provide opportunities for deregulation as well as regulation by identifying current barriers to deployment or development. It also focuses on the state’s own use of the technology rather than on private sector applications.

As with data privacy or youth online safety, many state legislatures may be asking what they can or should do about their constituents’ concerns about AI. It is important to remember that AI is a general-use, data-intensive product, and concerns typically relate to a specific application, not the technology more generally. Over-regulation could limit many existing and beneficial applications. Like the internet, AI crosses borders in ways that make a federal framework preferable for any potentially necessary regulations.

Conclusion

A wide range of tech policy issues have seen activity at the state level during the latest legislative session. In some cases, this activity may be a reaction to the perceived ability to “do something” in the absence of federal action, as evidenced by recent measures surrounding a broad array of tech debates, including new topics like AI and TikTok. Many state technology proposals are an attempt to respond quickly to perceived concerns without strong evidence of the alleged harm or thorough consideration of the consequences of government action on key values like speech. While state governments are often seen as the laboratories of democracy or more closely tied to the population they represent, the situation becomes more complex with tech policy when many proposals can have an impact beyond state borders or could create a disruptive patchwork.
