Chevron Deference and Artificial Intelligence

By CHHS Extern Dallin Richardson

Those who are paying attention to Supreme Court current events know that following the oral argument in Loper Bright Enterprises v. Raimondo and its companion case, Relentless, Inc. v. Department of Commerce, the doctrine of Chevron deference is likely not long for this world. The Chevron doctrine requires courts to defer to an executive agency's reasonable interpretation of ambiguous statutory language, provided Congress has not weighed in on the precise issue in question. This doctrine is at the core of current administrative law and allows agency experts, specialized in their fields, to aid courts by virtue of their advanced technical acumen. An end to that "doctrine of humility" would place that interpretive power exclusively back in the hands of courts, although – as Justice Kagan has said – "we know in our heart of hearts that . . . agencies know things that courts do not." And as artificial intelligence continues to radically change the fabric of virtually every sector, the expertise that agencies bring to the table will be of ever-increasing importance as an aid in interpreting legal questions involving AI. Such agencies certainly have an advantage over the members of Congress: "Congress knows there are going to be gaps [in any future artificial intelligence legislation] because Congress can hardly see a week into the future with respect to [AI]."

Looking toward the future and the unpredictable predicaments that AI is sure to cast upon the country, the core question may rather be posed this way, with some further help from Justice Kagan: Where should the balance of official interpretive weight lie? ". . . what Congress is thinking is, 'Do we want courts to fill that gap? Or do we want an agency to fill that gap?'" When the normal techniques of legal interpretation have run out, on the matter of artificial intelligence, what does Congress want?

It is a valid concern that, despite the woes of vacillating policy interpretation that some fear in handing the final word back to the courts, letting an executive agency dictate the final interpretation of ambiguous statutory language may have a similar vacillating effect, possibly changing every four to eight years. And of course, interpretive posture and access to expertise are sure to be far from uniform across the U.S. Circuit Courts. Further, although Justice Thomas now seems ready to sweep away the Chevron doctrine, in 2005 he wrote the majority opinion in National Cable v. Brand X, holding that "agency inconsistency" is no reason to eliminate the Chevron framework.

Some would argue that it doesn't matter what Congress wants; what matters is what Article III of the Constitution says, which is that courts hold the judicial power and thus they alone handle interpretation of law. But proponents of Chevron deference would argue that the doctrine does not undermine judicial authority; rather, it guides a court in resolving legal disputes by deferring to an appointed agency that has both the needed expertise and democratic accountability to the public, making it a much more reasonable decision maker in matters of technicality and scientific choice when compared to the hundreds of unelected, relatively inexperienced judges who would otherwise bear the burden.

It is important to remember that the true authority in these matters is Congress; Chevron deference arises only when congressional statutory intent is ambiguous. The guiding principle should always be congressional intent. In oral arguments for Relentless, Inc. v. Department of Commerce, Justice Kagan opined: "Congress knows that this court and lower courts are not competent with respect to deciding all the questions about AI that are going to come up in the future. And what Congress wants, we presume, is for people who actually know about AI to decide those questions. And also those same people who know about AI are people who . . . are accountable to the political process."

Regardless of what Congress may or may not want for the future of AI and other legislation, the Court may be poised to speak for hundreds of lower courts across the country and make that decision for them. But how did it come to this? It bears repeating that Congress is the primary statutory authority and the first word on statutory interpretation. Congress could speak for itself and mandate specific interpretive construction. Congress could assign interpretive authority either to agencies or to the courts. Though Congress cannot be expected to foresee the problems to which AI will give rise, is it unreasonable to expect Congress to tell us who gets to decide in a tiebreaker? Is it too much to ask Congress to indicate when it wants a court to have the final word, and when, instead, the relevant agency should provide needed clarity?

The Supreme Court’s Vacation of the Injunction and What it Means for Border Security

By CHHS Extern Andrew Conn

There are 29 entry points along the roughly 1,200-mile-long Texas-Mexico border. Recently, the number of illegal border crossings has grown exponentially. In 2020, Customs and Border Protection (CBP) agents encountered 458,000 crossings. In 2021, this number rose to 1.7 million, and in 2022 it rose again to 2.4 million.

On October 24, 2023, the state of Texas filed suit against the Department of Homeland Security for "unlawfully" removing concertina wire (c-wire) which had been placed along the Texas-Mexico border by agents of the Texas Military Department (TMD). Texas claims in its brief that the placement of c-wire on private and government property along the border was a joint effort between federal CBP agents and TMD agents as part of Texas' 2021 project "Operation Lone Star." Texas claims that CBP agents were "grateful" for the assistance of TMD officials and that the parties worked cooperatively across the state. However, Texas goes on to state that this relationship was upended when, on "more than 20 occasions" between September 20 and October 10, 2023, CBP agents were recorded removing the c-wire fencing along the border with bolt cutters. CBP and DHS removed the fencing because it impeded their access to the border; CBP agents later began removing the fencing with forklifts.

During the removal process, TMD agents observed hundreds of migrants pour across the border from the Mexican side. TMD agents claimed these migrants were not in distress or in need of medical attention. Because of the subsequent flood of migrants into the state, Texas sought a preliminary injunction in district court against the removal of c-wire and fencing by CBP agents. The United States District Court for the Western District of Texas granted a temporary restraining order (TRO) preventing CBP agents from further removing fencing in the vicinity of Eagle Pass, TX, with an exception for "provid[ing] or obtain[ing] emergency medical aid." The TRO was later extended by the district court; however, at trial, the court found it was unable to convert the TRO into a preliminary injunction because CBP's sovereign immunity had not been waived under 5 U.S.C. § 702.

Texas subsequently appealed this decision to the Fifth Circuit to seek an emergency injunction. The Fifth Circuit granted the injunction, finding that the district court had erred in its ruling with respect to sovereign immunity. The defense moved for expedited argument in the circuit court, which was granted, with oral arguments set for February 7, 2024. In the interim, DHS sought expedited relief from the Supreme Court to vacate the injunction.
In its application to the Supreme Court, DHS argued that “under the Supremacy Clause, state law cannot be applied to restrain those federal agents from carrying out their federally authorized activities.” DHS stated that if the circuit court’s ruling was sustained, states would be able to override federal agencies and decisions on how to execute their operations.

In response, Texas claimed that CBP already had access to the border via access points along the fencing and that, since the Fifth Circuit had already expedited the case, the Supreme Court should hold off on any ruling against the injunction. Additionally, Texas cited a three-part test laid out in Merrill v. Milligan for determining whether an injunction should be vacated by a higher court, arguing that an injunction should be "entitled to great deference like a decision to stay a district court's ruling." Under the Merrill test, an injunction can be vacated only when the applicant demonstrates (1) a reasonable probability that the Court would eventually grant review, (2) a fair prospect that the Court would reverse, and (3) that the applicant would likely suffer irreparable harm absent the stay.
Ultimately, on January 22, 2024, the Supreme Court ruled in favor of the federal government in a surprising 5-4 split. The limited ruling struck down the Fifth Circuit's injunction ahead of oral arguments in the circuit court. Justices Jackson, Kagan, Sotomayor, and Barrett and Chief Justice Roberts voted in favor of overturning the injunction, while Justices Thomas, Alito, Gorsuch, and Kavanaugh voted to keep the injunction in place.

What Could This Mean?
By overturning the injunction, it appears the Court may have an appetite to rule in favor of upholding the federal government's sovereign immunity claim should the case reach the Court. The ruling is concerning, however, in that four justices voted to keep the injunction in place, which could signal a willingness to deal a major blow to the Supremacy Clause. Allowing Texas to counter the acts of the federal government would upend the Supremacy Clause, as it would essentially allow state governments to override the lawful acts of federal agents. As DHS states in its application to the Supreme Court, "if accepted, the court's rationale would leave the United States at the mercy of States that could seek to force the federal government to conform the implementation of federal immigration law to varying state-law regimes." Such a ruling would deal a blow to other federal agencies as well, since the new precedent would allow state governments to override the federal government on environmental, commerce, and transportation regulations. Oral arguments at the circuit court level will commence on February 7, 2024.

Is It Too Late To Convince Europeans They Can Trust The U.S. With Their Data?

By CHHS Extern Mercedes Subhani

On January 4th, Microsoft announced that it was upgrading its cloud computing service to let European customers store all their personal data only within the European Union. Microsoft claims this move, which will affect Azure, Microsoft 365, Power Platform, and Dynamics 365, is aimed directly at easing customers' privacy fears about having their information flow into the U.S., where a federal privacy law still doesn't exist.

This fear of having their personal data stored in the wild west of the U.S. stems from the Edward Snowden revelations that the American government eavesdropped on people's online data and communications. Since then, the U.S. has been trying to convince the European Commission that EU citizens' data will be kept safe. The U.S. was finally successful on July 10, 2023, when the EU adopted its adequacy decision for the EU-U.S. Data Privacy Framework ("Framework"). The EU's decision "has the effect that personal data transfers from controllers and processors in the Union to certified organizations in the United States may take place without the need to obtain any further authorisation." Despite this transatlantic agreement, Europeans are still not convinced that their data will be kept safe in the U.S., as demonstrated by Austrian privacy activist Max Schrems' confirmation that his group NOYB will be pursuing a legal challenge.

First, although the EU adopted the decision, there is no certainty that the Framework will survive a challenge before the Court of Justice of the European Union. The Framework is predicted to be invalidated like its two predecessors, which were struck down after challenges brought by Max Schrems in Schrems I and Schrems II. Thus, it is very likely that this new EU-U.S. agreement will be invalidated as well. Second, the U.S. still does not have a federal data privacy law. The level of data privacy rights an American citizen has, if any, depends entirely on which state they live in. The strongest state privacy law in the U.S. is the California Consumer Privacy Act, which is still not as protective as the EU's General Data Protection Regulation. Therefore, not even in the most protective U.S. state can Europeans enjoy the same privacy safeguards as they do in the European Union. Lastly, when the U.S. did try to pass a federal data privacy law, the American Data Privacy and Protection Act ("ADPPA"), it still did not fix the root problem Europeans are concerned with, which is the U.S. government eavesdropping on people's online data and communications. The ADPPA targeted only the private sector and exempted the public sector from any privacy constraints.

In the current court of public opinion, Europeans have ruled that they cannot trust the U.S. with their personal data. For right now, they are correct in deciding so. However, as Europe presses forward with data rights and the U.S. public grows more concerned about data privacy, more politicians will be pressured to respond adequately. We have seen this already with the White House mimicking Europe in its Blueprint for an AI Bill of Rights, and with privacy activism organizations pushing 12 states to pass state data privacy laws, with several more states expected to pass their own in 2024. Eventually, the U.S. will fully regain Europeans' trust with their data.

Dawn of a Historic Election Year

By CHHS Extern Dallin Richardson

On August 10th, 2023, President Biden, through the Stafford Act, issued a major disaster declaration in response to Hawaii Governor Joshua Green's petition for aid after the wildfires that devastated Lahaina. By August 11th, various social media posts asserted that a space-based directed energy weapon started the fire. The video "evidence" behind these claims was debunked as footage shot in Russia in 2019. However, one wild claim was joined by others, each with an audience willing to believe fanciful anti-government stories. Regrettably, this cyber campaign trespassed beyond the cyber realm to have real-world effects: in a Department of Energy hearing, Hawaii Senator Mazie Hirono shared her concerns over victims who had been duped by online claims that signing FEMA disaster relief papers would also sign over the rights to one's home or land (timestamp 1:15:40 at the link to the hearing).

From this, we see what may be accomplished when a hostile nation-state (Senator Hirono attributed the lies about FEMA to Russia or China) employs insidious cyber efforts to exploit an unplanned emergency. But what about the intentional, planned disruption of future events, foreknowledge of which gives a hostile party months, or even years, to prepare? We will have our answer in 2024, as the world goes through an unprecedented period of democratic transition. China has already started us off on the wrong foot.

However, we are not helpless. Though beset by disinformation campaigns, the global population may guard against such insidious efforts by using media literacy education as an information tool. When disinformation clouded public opinion about SARS-CoV-2, the World Health Organization (WHO) called the problem an "infodemic." The WHO's recommended cure for the infodemic is building resilience to disinformation, which lines up conceptually with the medically sound aim of inoculating against actual viruses. Content regulation and government surveillance, while they can help fight disinformation, do not meaningfully serve to inoculate the public; they are akin to treating a patient in a sterile environment. Though ideal for caring for the ill and ailing, a sterile environment allows for only short-term intervention, with no hope of long-term prevention once the patient inevitably returns to a more typical, non-sterile setting. Likewise, national election-safeguarding efforts that neglect media literacy education offer no long-term prevention for our information-ridden society and guarantee no measurable resilience against propagated online falsehoods. Such efforts also ignore public mistrust in government "treatments."

Unlike other problems facing this country, media literacy education does not appear to be a partisan issue; states guided by staunchly disparate political philosophies, such as Florida and California, have both enacted bills aimed at providing critical education in this regard. This is fortunate, because education policy in the United States is largely a matter left to the states. Various attempts have been made to legislate a federal approach to media and digital literacy, but the closest we have come is the Digital Equity Act, which (as the name suggests) leans heavily into digital equity, concerned chiefly with information access, rather than digital literacy, which is chiefly aimed at information acuity. Beyond congressional gridlock, the Department of Education does not dictate curricula or standards to state education departments, and this administrative deference to the states makes the United States' chances of developing a unified national K-12 literacy curriculum slim.

It is likely up to state policymakers, then, to legislate innovatively and set educational goals as examples for other states to follow. Several states have started stepping up, and we can hope that such efforts will be sufficient to instill proper critical thinking and media consumption skills in children in those states. The work being done in these states is vital to help us face election disinformation. The United States cannot put all hope in the algorithmic excision of online content; rather, our country must lean on good information, media, and digital literacy educational policy, which offers the best chance to "inoculate" people, teaching them how to learn, helping them to develop resilience to disinformation, and encouraging the development of robust information immune systems.

CHHS Assists Talbot County in Revising Its Emergency Operations Plan (EOP)

CHHS was proud to assist Talbot County, Maryland in revising its Emergency Operations Plan (EOP). From the Conduit Street blog from the Maryland Association of Counties:

The County contracted with the University of Maryland Center for Health and Homeland Security (CHHS) to assist with rewriting the Plan, coordinate and facilitate the tabletop exercise, and to conduct a functional exercise in the spring.

In addition to looking at national best practices, federal guidance, State and other local EOPs, the updated Talbot County EOP also integrates what was learned during the COVID-19 pandemic.

If you are interested in any of our emergency management consulting services, please visit our Consulting Services page here.

New CHHS Fall 2023 Newsletter Available with Important Update

CHHS is proud to release the Fall 2023 edition of our newsletter:

CHHS Newsletter Fall 2023 NEW

In this newsletter’s Director’s Message, CHHS Founder and Director Michael Greenberger announces that he will be stepping down at the end of June 2024, after more than 20 years in his current position. CHHS will be celebrating Prof. Greenberger’s leadership and the success of the Center he founded over the coming months.

For all recent editions of our newsletter, check out our newsletter page: https://www.mdchhs.com/media/newsletters/


Big Tech vs. Digital Privacy: Introduction of the Banning Surveillance Advertising Act

By CHHS Extern Alexandra Barczak

U.S. lawmakers have introduced legislation called the Banning Surveillance Advertising Act, designed to prohibit advertisers and advertising facilitators, such as Google and Facebook, from using personal data to create targeted advertisements, with the exception of broad location targeting. It further prohibits advertisers from targeting ads based on protected class information, such as race, gender, and religion, or on personal data purchased from data brokers. Enforcement would fall to state attorneys general, private lawsuits, and the Federal Trade Commission. However, in the age of Big Tech, is this actually feasible?

Big Tech, a term coined to describe the major players in the technology industry, often refers to the "Big Five," which hold the most influence in the market: Amazon, Apple, Facebook/Meta, Google/Alphabet, and Microsoft. While each of the Big Five has a sphere it dominates, such as Facebook with social media, Google with search, and Apple with communication devices like mobile phones and laptops, there is a common thread among them all – they are constantly using our data, whether by asking for it, tracking it on their own, or buying it from another company or data broker. Our online movements are continuously monitored under the guise of better serving users, with typical collection including your name, email, phone number, IP address, the device you are using, the times you are using it, what you are doing while on it, your location, and more. This data allows these companies to better predict user behavior: they build a profile from past movements to anticipate future ones, giving you the content you want to see, showing you relevant ads, personalizing your experience, and so on. Such pervasive collection and tracking is why the practice has been dubbed "surveillance."

To many, this may not be a threatening prospect, but to others, online tracking is highly concerning. As the reasoning behind the Banning Surveillance Advertising Act points out, “Personal data is abused to target ads with major societal harms, including voter suppression, racist housing discrimination, sexist employment exclusions, political manipulation, and threats to national security. Surveillance advertising also invades privacy and threaten civil liberties, such as by tracking which place of worship individuals attend and whether they participated in protests and then selling this information to advertisers.” It is even more troubling that this sacrifice in personal privacy and security is done simply for the financial gain of these already profitable giants.

The Banning Surveillance Advertising Act was notably introduced exclusively by Democrats (Representatives Anna G. Eshoo (D-CA) and Jan Schakowsky (D-IL) and Senators Ron Wyden (D-OR) and Cory Booker (D-NJ)) and is said to be supported by leading public interest organizations, academics, and companies with privacy-preserving business models. Those cited in support include the Center for Digital Democracy, Accountable Tech, Fight for the Future, the Anti-Defamation League, and Ekō. While there seems to be strength in support, there is likely equal, if not greater, strength in opposition. The Big Tech companies have created monopolies in their respective fields, with use of their products and systems becoming a necessity in everyday life. This power has created concern among the general population and the government about what exactly Big Tech can accomplish. Such dominating digital infrastructures have the capability to influence societies, economies, national security, and politics, just as Big Oil, Big Banks, and Big Pharma did in the past and arguably still do. Thus, it is entirely plausible that the resources of Big Tech will be used against this bill. It would not be the first time: in 2022, lobbyists on behalf of Amazon, Apple, Meta, and Google parent company Alphabet spent millions opposing two bipartisan antitrust bills targeting Big Tech, the Open App Markets Act and the American Innovation and Choice Online Act. Though the response to a bill about advertising may not be as extreme as the response to antitrust regulation, Big Tech would still likely throw its resources into advocating against such legislation. Money talks, and Big Tech has money to spare – money that will be directed at the individuals and organizations lobbying to block anything that interferes with their business models, all of which include targeted advertising as a source of revenue.

While the introduction of this bill could be considered a step in the right direction for preserving our online privacy, it also serves as a reminder that digital privacy, though a hot topic, is becoming increasingly politicized with little concrete movement at the federal level. Just note how long it took for a bipartisan federal privacy bill to be introduced – and even that bill, the American Data Privacy and Protection Act, did not pass. This is already the second attempt at the Banning Surveillance Advertising Act. In January 2022, Congresswoman Eshoo (D-CA), Congresswoman Schakowsky (D-IL), and Senator Booker (D-NJ) introduced a similar bill with the same title, which was unsuccessful. In both the House and Senate, the bill never got past the introduction stage: the House referred it to the Subcommittee on Consumer Protection and Commerce with no further movement, and the Senate read it twice and referred it to the Committee on Commerce, Science, and Transportation with no further movement.

With the power Big Tech holds across society and politics, the bill, which threatens a revenue stream for these organizations, will likely face strong resistance backed by deep pockets. For the bill to have a realistic chance at gaining traction, a bipartisan push would have to be made, with representatives and organizations from across the political spectrum making this an issue to care about. It therefore seems there is a long road ahead for the Banning Surveillance Advertising Act.

The Benefits and Risks of AI in 911 Call Centers

By CHHS Extern Katie Mandarano

Across the United States, 911 call centers are facing a workforce crisis. There are more than 6,000 911 call centers in the U.S., with over 35,543 911 operators currently employed. According to a survey by the International Academies of Emergency Dispatch (IAED), more than 100 call centers reported that at least half of their positions were unfilled in 2022, and almost 4,000 people left their jobs across the call centers surveyed. Overall, about 1 in 4 jobs at these call centers remain vacant.

The reasons for such a labor shortage can likely be attributed to a combination of factors:

  • These jobs are hard to fill: applicants have to undergo rigorous background checks and screenings, and once hired, dispatchers face a lengthy training process, ranging anywhere from three to eighteen months, before they are allowed to take calls without supervision.
  • Dispatchers have to work long hours and are often forced to work overtime hours because these centers are so short staffed.
  • Despite the long hours and high stress, the U.S. Bureau of Labor Statistics reported the median annual pay for public safety telecommunicators as only $46,900 in 2022.
  • These jobs are incredibly high stress, with studies showing that the repeated exposure to 911 calls can lead to the development of Post-Traumatic Stress Disorder.
  • There has been an overall increase in 911 calls. On average, an estimated 240 million 911 calls are made in the U.S. each year. Moreover, with developments in technology, a variety of emergency features have led to an increase in 911 misdials, such as the Apple Watch feature that automatically places a 911 call if it detects a vehicle crash.

Due to the severe labor shortage facing 911 call centers, some state and local governments have turned to artificial intelligence (AI) as a potential solution to assist 911 dispatchers, or in some cases, replace the presence of a human dispatcher. AI is essentially a machine’s ability to perform tasks that typically require human intelligence. Below are some examples of how AI could be used in 911 dispatching:

  • AI could be used to enhance the audio quality of 911 calls, allowing dispatchers to better understand callers and respond more quickly to their needs, in turn allowing dispatchers to field more 911 calls.
  • AI can triage incoming 911 calls based on each call's urgency, reducing the number of non-emergency calls and ensuring calls are routed to the appropriate dispatchers and first responders. This would free up human dispatchers for the most pressing 911 calls. Moreover, some states use AI not just to triage incoming calls but to answer and gather information from non-emergency calls, replacing the need for a human dispatcher. (A minimal sketch of such a triage step appears after this list.)
  • AI can create real-time maps of emergencies, which can be shared with other emergency services responding to the scene. Because dispatchers typically stay on the phone with 911 callers until first responders arrive, improving the speed at which first responders reach an emergency will, in turn, allow human dispatchers to assist more callers.
  • AI can provide real-time language translation for non-English speakers, quickening a dispatcher's response time and reducing the need for translators and non-English-speaking dispatchers.
  • AI can integrate 911 dispatching with other technology, such as Internet of Things devices and smart city infrastructure, to provide real-time information about the conditions surrounding an emergency. This would similarly result in quicker response times, freeing up dispatchers to field more calls.
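To make the triage idea concrete, below is a minimal, hypothetical sketch in Python of how an automated intake step might scan a call transcript for urgency and pick a routing queue. The keyword lists, queue names, and threshold behavior are illustrative assumptions, not any vendor's actual system; production triage would rely on trained speech and language models rather than keyword matching.

```python
# Hypothetical sketch of automated 911 call triage (illustrative only).
# Real systems would use trained speech/language models; the keyword
# lists and queue names here are assumptions for demonstration.

EMERGENCY_TERMS = {"fire", "gun", "bleeding", "unconscious", "not breathing", "crash"}
NON_EMERGENCY_TERMS = {"noise complaint", "parking", "power outage", "found property"}

def triage_call(transcript: str) -> str:
    """Return the queue a call should be routed to, based on its transcript."""
    text = transcript.lower()
    if any(term in text for term in EMERGENCY_TERMS):
        return "human_dispatcher"   # life-safety language goes straight to a person
    if any(term in text for term in NON_EMERGENCY_TERMS):
        return "automated_intake"   # AI gathers details for non-emergency matters
    return "human_dispatcher"       # when in doubt, default to a human

if __name__ == "__main__":
    print(triage_call("There was a crash and someone is bleeding"))  # human_dispatcher
    print(triage_call("I'd like to file a noise complaint"))         # automated_intake
```

The design choice worth noting is the fail-safe default: any call the system cannot confidently classify falls through to a human dispatcher, so automation absorbs only the unambiguous non-emergency traffic.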

It is clear from the few examples listed above that the potential benefits of AI in 911 dispatching are significant. But using AI in 911 dispatching also poses several unique challenges. One of these challenges is maintaining 911 callers' data privacy and security.

Congress has recognized the need for the country to transition to a Next Generation 911 (NG911) system, which is expected to enable transmission of photos, videos, health records, cell-site location information, and other data to support first responders and emergency personnel during a 911 call. Accordingly, an unprecedented amount of data could be used to train AI systems in 911 call centers, and this practice would be encouraged, since the more data an AI system has access to, the more accurate its output generally is. Additionally, 911 calls typically contain sensitive personal information, and AI would likely be used to de-anonymize this personal data where necessary. Call centers that use AI systems thus become an increasingly attractive target for cyberattacks and data leaks.

In addition to data privacy and security concerns, implementing AI in 911 call centers raises data accuracy concerns. Underrepresentation of certain groups in the data sets used to train AI can result in inaccurate outcomes and harmful decisions. For example, researchers have found that smart speakers often fail to understand female or minority voices because the algorithms are built from databases containing primarily white male voices. In an emergency setting, this barrier could have serious implications, such as critical delays in emergency services or inadequate assistance for 911 callers who are not white men.

The unique risks discussed above require government protection and safeguards. Accordingly, state governments using this technology should take care to implement privacy and cybersecurity standards to ensure this information is not subject to misuse, and that the AI is built using accurate, fair, and representative data sets. Some potential safeguards include:

  • Adopting comprehensive data minimization rules, such as a deletion requirement ensuring that call centers do not store precise location data for longer than necessary (a minimal sketch of such a retention rule follows this list).
  • Requiring cybersecurity maturity assessments, ensuring that these call centers have procedures in place to strengthen security program efforts.
  • Implementing quality standards for data sets used to train AI to ensure datasets are broad and inclusive.
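As one illustration of that deletion requirement, a retention job could periodically purge precise location records older than a fixed window. The sketch below is hypothetical: the table name, column names, and 30-day window are assumptions for illustration, not a prescribed standard.

```python
# Hypothetical data-minimization job for a 911 call center database.
# Table/column names and the retention window are illustrative assumptions.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # a real window would be set by policy or regulation

def purge_stale_locations(db_path: str) -> int:
    """Delete precise caller-location records older than the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "DELETE FROM call_locations WHERE recorded_at < ?",
            (cutoff.isoformat(),),
        )
        conn.commit()
        return cur.rowcount  # number of records purged
```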

While AI has the potential to revolutionize 911 dispatching, it is important to consider the risks to data privacy, accuracy, and security when implementing these technologies. With a thoughtful and regulated approach, AI in 911 call centers can provide much needed relief to the 911 dispatcher workforce in this time of need.

Is the Threat of Lawsuits Against Powerful AI Tools Like ChatGPT a Good Thing?

By CHHS Extern Daniela Eppler

Since the launch of ChatGPT in November 2022, people's interest in artificial intelligence (AI) has been heightened. There has been excitement surrounding the capabilities of powerful AI tools and their ability to contribute to innovations across industries, such as enhancing telemedicine, improving customer service, and predicting passenger demand to optimize public transportation schedules. This excitement has also been coupled with concerns about potential threats to cybersecurity and national security, particularly relating to reliance on AI-generated information and data privacy risks. The undeniable potential of ChatGPT and similar AI tools in fields like scientific research, business, and intelligence analysis has sparked interest around the world. Although the power and novelty of these tools may be overwhelming and alarming for some, they are the future and hold the key to unlocking previously unobtainable innovations.

Despite this excitement, a California law firm recently filed a class-action lawsuit against OpenAI over its use of people's data to train the chatbot ChatGPT. The lawsuit claims that OpenAI violated the rights of millions of internet users by using their publicly available internet data without consent to create the bot and to generate large profits. Specifically, while OpenAI is projected to reach $1 billion in revenue by the end of 2023, the lawsuit claims that ChatGPT relies on the consumption of "billions of words never signed off on" by their owners. Companies like Google, Facebook, and Microsoft have taken similar approaches to train their own AI models, but OpenAI is the only company currently facing legal action. Larger technology companies, like Facebook, have faced recent lawsuits over deceiving users about their ability to control the privacy of personal information shared with the company. However, the question remains whether people should be concerned about AI companies using publicly available data to develop powerful tools and generate large profits. AI developers have argued that their use of data from the internet should fall under the "fair use" exception in copyright law, and the class-action lawsuit will largely center on whether the use of the data meets the requirements for "fair use."

As we await the outcome of this lawsuit, it is important to consider the implications of restricting AI developers' access to publicly available internet data. Despite the disruptive nature of tools like ChatGPT, it is difficult to deny the pathways for advancement they have unlocked across industries: the acceleration of drug discovery and early disease detection in healthcare, improved fraud detection and faster, more accurate trade execution in finance, and supply chain optimization and product defect detection in manufacturing, to name a few. AI tools directly rely on ingesting huge volumes of complex data, and without access to the volume and diversity of publicly available internet data, their capabilities would likely be curtailed. Although copyright and privacy issues are important, it is essential to consider the implications of stifling the development of AI tools like ChatGPT and impeding the development of similar tools in the future.

Extreme Heat Should be Included as a Major Disaster Under the Stafford Act

By CHHS Extern Brittany Hunsaker 

We hear about extreme heat, particularly in Arizona, every summer. In Phoenix, the average July temperature is 106 degrees. The term "extreme heat" refers to a period in which temperatures exceed 90 degrees for at least two days. Such temperatures can lead to heat disorders and can especially harm older adults, young children, and those with underlying health conditions. The heat can be far worse in urban areas, where cities face the "urban heat island effect": heat is stored in asphalt and concrete and continues to raise temperatures throughout the night.

Arizona officials want to add extreme heat to the Federal Emergency Management Agency's (FEMA) declared disasters list. The list currently includes sixteen types of declared disasters, such as hurricanes, typhoons, tropical storms, and fires. By adding extreme heat to the list, a national emergency could be declared, which would allow for federal assistance. The funding could provide resources such as pop-up shelters, cooling centers, and additional outreach to vulnerable residents, thus preventing avoidable serious harm and death. The addition of extreme heat to the list of declared disasters was supported unanimously at the U.S. Conference of Mayors earlier this month. As Phoenix Mayor Kate Gallego stated, "heat causes more deaths each year than most other natural hazards combined." In addition to an estimated 702 heat deaths each year, extreme heat is estimated to cause 67,512 emergency visits and 9,235 hospitalizations annually. Mayor Gallego addressed this issue during her annual state of the city address on April 12, 2023.

Two months later, on June 5, 2023, Representative Ruben Gallego (AZ-03) introduced legislation to amend FEMA's list of eligible disasters to declare extreme heat a major disaster. The bill, known as the Extreme Heat Emergency Act, would take effect as early as January 2024, if passed. FEMA spokesperson David Passey stated that the assistance would become available once the need exceeded what state and local resources could handle. The bill, which is still in the early stages of the legislative process, has been referred to the House Transportation and Infrastructure Subcommittee on Economic Development, Public Buildings, and Emergency Management.

This is not the first time Arizona officials have introduced legislation to combat extreme heat. On April 28, 2023, Representative Gallego, along with Senator Sherrod Brown (D-OH) and Representative Bonnie Watson Coleman (NJ-12), introduced the Excess Urban Heat Mitigation Act of 2023, which would create a grant program through the U.S. Department of Housing and Urban Development to fund efforts addressing excess urban heat and heat islands. It would provide $30 million per year between 2023 and 2030 to curb the effects of excess heat through cool pavements, cool roofs, bus stop covers, cooling centers, and local heat mitigation education efforts. The bill has been referred to the Committee on Banking, Housing, and Urban Affairs. Rep. Gallego stated, "In urban areas, the effects of these rising temperatures is compounded by a lack of shade and miles of heat-absorbing concrete. And too often, it is our lower-income communities that are disproportionately impacted by this extreme urban heat. That is why I am proud to introduce this bill to address this deadly issue, keep Phoenix cooler, and ensure the hardest hit communities are prioritized."

With temperatures continuing to increase, federal support for areas like Phoenix is vital to protect individuals from the catastrophic effects of extreme heat. If governments are able to allocate more funding toward mitigating or preventing the dangers of extreme heat, less would be needed to fund relief and recovery efforts. To learn more about the dangers of extreme heat, click here.