Spring 2026
January 27
“Rabbit holes, rumors, & online propaganda”
On January 27, the Foley Institute hosted Kate Starbird, University of Washington, who discussed the ways that online rumors and misinformation are spread during crisis events.
Professor Starbird opened her discussion with a pervasive internet rumor from the wake of Hurricane Sandy: a manipulated image of a shark swimming through flooded freeway waters. She then turned to rumors with potentially more serious consequences, such as when internet sleuths incorrectly identified the assailants of the Boston Marathon bombing in 2013, marking a turn from internet volunteerism to online vigilantism. This shift, she said, means that “collective sensemaking could go awry” in the context of understanding crisis events.
Using data from social media posts following crisis events, Starbird and her team found that the online crowd is rarely “self-correcting”, and at the same time, professional journalism cannot keep up with the pace of online news. As a result, the public’s understanding of crises can be intentionally manipulated, with bad actors able to exploit attention dynamics and profit from them.
Starbird concluded by suggesting that disinformation in the modern era is participatory, advanced by social media influencers to further political goals. Ending with a call to action, she emphasized that the solution requires everyday individuals acting as “influencers in their own circles”. By engaging in discourse, speaking out, and not leaving the discussion to online actors who know how to hijack the system, individuals can leverage the participatory nature of the digital era to advocate for truth and democracy.
Kate Starbird is a professor in the Department of Human Centered Design & Engineering at the University of Washington.
January 28
“Being human in the age of AI”
On January 28, the Foley Institute hosted Chirag Shah from the University of Washington, who spoke on the evolving boundaries between human identity and artificial intelligence. He began with historical examples, such as the “Mechanical Turk,” to highlight how humanity has long been intrigued by, and easily deceived by, machines that appear to possess human-like intelligence. Shah argued that today’s AI, specifically large language models and generative tools, is closing in on uniquely human territories, such as creativity, emotional intelligence, meaning-making, and ethical reasoning. He emphasized that the challenge is not only the technology’s capabilities but also its ability to mimic authentic human signals.
Shah’s presentation examined several moral and practical issues arising from the integration of AI. Using the famous trolley problem and the development of self-driving cars, he highlighted the challenge of programming ethics into a machine when humans themselves cannot agree on a universal ethical code. Shah also addressed the use of AI in therapy and education, questioning whether systems that lack true understanding can provide meaningful support or help students learn. He pointed out that while AI can mimic human abilities, it still lacks the continuity of experience and the vulnerability that define the human condition.
Shah concluded by advocating a conscious co-evolution in which humans proactively decide which tasks to retain and which to delegate. He suggested, though, that continuing to engage in tasks that involve minor struggles may be part of being human, warning that outsourcing all effort to AI risks losing something essential. Shah encouraged the audience to move beyond the narrative of “Human vs. AI” and focus on what kind of humans they want to be in this new age.
Chirag Shah is a professor in the Information School at the University of Washington.
February 12
“Authoritarian populism from Hungary to the U.S.”
On February 12, the Foley Institute hosted Andrew Ryder of Eötvös Loránd University, Budapest, who spoke about how Viktor Orbán’s Hungary is influencing policies of the current U.S. administration under Donald Trump.
Ryder began his presentation by reflecting on his experience as an activist and academic, noting that the 2015 turn towards authoritarianism in Hungary, the Brexit vote, and the 2016 U.S. election compelled him to speak out. Throughout the discussion, Ryder explored the parallels between authoritarian populism in Hungary and recent political shifts in the U.S.
He focused on the “bromance” between Viktor Orbán and Donald Trump, which Ryder argued is rooted in shared ideological and stylistic traits. He observed that both leaders rely on a strongman persona to frame politics as a state of emergency, and that this style of leadership resonates with voters in rural areas and “rust belt” regions who feel economically insecure and culturally displaced by globalization.
Ryder then explained how both movements leverage nostalgia and a sense of national victimhood. While the U.S. movement looks to “Make America great again,” Hungarian populism draws on the trauma of the 1920 Treaty of Trianon to foster a rigid, nativist identity. This identity is reinforced through “post-truth” politics, where scientific facts and rational debate are replaced by moral panics and conspiracy theories designed to keep the public in a state of high emotion.
Ryder also highlighted the international implications of this alliance. He noted that the Trump administration’s posture toward Europe has led to a reorientation away from traditional allies and toward nationalist regimes like Hungary’s. However, he offered a moment of hope for opponents of authoritarian populism by citing the 2025 Budapest Pride march, where over 100,000 citizens defied government threats of prosecution, signaling that the strongman narrative loses its power when the public refuses to be intimidated.
Ultimately, Ryder warned that the “Putin-ization” of Hungary, marked by cronyism and corruption, should serve as a cautionary tale for the United States’ future. He concluded that the best defense against this trajectory is the restoration of a rational public sphere and the implementation of stronger social protections to address the underlying inequality that fuels populist anger.
Andrew Ryder is the Director of the Institute for Political and International Studies, Eötvös Loránd University, Budapest.
February 25
“AI, democracy, and the problem of control”
On February 25, the institute welcomed Mark Fagiano from WSU. Professor Fagiano began by asking how we can ensure that powerful systems continue to act in accordance with our intentions and objectives.
He noted that there are different types of control failures, including technical, ethical, political, and cultural failures. To prevent technical failures, he said, the public needs to be aware of how AI actually works and of the risks associated with AI decision-making. Such awareness, he stressed, is essential if we are to maintain power over machines that will become more powerful than humans.
Fagiano then turned to the problem of value alignment: ensuring that AI systems act in accordance with shared value systems. This is difficult, he explained, because values vary widely and can even conflict with one another. Other ethical dilemmas of AI control include determining who is responsible for harm caused by autonomous systems and how the biased internet data on which AI is trained may negatively influence its outputs.
Lastly, he spoke on the political dimension of the control problem, suggesting that it is important to ask how AI will be governed. This includes questions about what democratic oversight exists for AI, complicated by the fact that different nations have different norms of governance. Further, he noted that AI seems to be the “perfect tool for authoritarian rule”, enabling mass surveillance, predictive policing, and the monitoring and silencing of activists. Some of these functions may at first appear innocuous, he pointed out, but they still reflect contested values: mass surveillance, for example, pits the value of safety against the value of privacy.
To conclude, Professor Fagiano acknowledged that AI development is not inherently detrimental and that there is great benefit to be gained from these systems. However, he urged the audience to consider what might go wrong, such as the erosion of the rule of law or of social trust, and not to passively accept the changes AI will make to the world.
Mark Fagiano is an assistant professor of philosophy at Washington State University.
February 27
“Disinformation and elections”
On February 27, the Foley Institute hosted a panel on disinformation and elections in Olympia in partnership with the Washington Secretary of State’s office. The event, moderated by Washington’s 16th Secretary of State, Steve Hobbs, was the latest in a series that has been running since 2010. The panel featured Stephen Prochaska, University of Washington; Paul Gronke, director of the Elections and Voting Information Center at Reed College; and Kylee Zabel, director of the Information Security and Response Division at the Washington Office of the Secretary of State.
Steve Hobbs addressed the packed room by framing disinformation as a national security threat, often orchestrated by nation-state actors like Russia and China to disrupt democratic processes through soft-power tactics. Hobbs said that countering this threat carries significant budgetary burdens, warning that cutting election office funding would hinder the state’s ability to defend against these complex information operations.
Stephen Prochaska and Paul Gronke clarified the different ways inaccurate information spreads, noting that some people share falsehoods by accident while others create fake news deliberately to cause harm. This flood of disinformation has made the work of local election officials much more difficult, as they face increased public pressure. And just as the issue has become more salient, federal support for election security has decreased, requiring states to take more responsibility for protecting their own local voting systems and supporting the people who run them.
To address these challenges, Kylee Zabel noted that Washington State is taking a proactive approach to helping local counties prepare for fake news, including providing millions of dollars in funding to improve security and to train staff to correct false information and claims. The event concluded by emphasizing that the goal is to build a stronger and more transparent system in which voters can feel confident that their voices are heard and their ballots are secure.
March 11
“Artificial intelligence and civil rights”
On March 11, the institute welcomed Shankar Narayan, who discussed how artificial intelligence and Big Tech have reshaped rights, equity, and justice today.
Narayan opened his lecture by examining the narrative that often surrounds technological advancement: the pervasive view that technology is inevitable and crucial to solving global issues. He countered that technology instead embodies a series of value choices that are inherently unequal because of structural inequality in our systems. The discussion of technology and AI, he said, is about power, not progress, and one must examine what power interests Big Tech has in the advancement of AI.
Narayan highlighted some of the historical harms of technology, such as surveillance systems that have often been applied unevenly, producing disproportionate surveillance of minority groups. Today, both the lack of safeguards Big Tech companies implement with new technologies and the biased data sources that train AI further undermine the prospects for equitable advancement.
The speaker then turned to possible solutions to these problems. He noted that self-governance among AI companies does not work, that federal legislation is lacking, and that Big Tech’s influence in politics is growing. Narayan therefore argued for a rights-based approach to technology that centers human beings and the human experience rather than AI. To conclude, he urged the audience to ask what problems a technology is built to solve, how it impacts human beings, and under whose rules it is built.
Shankar Narayan is the former director of the Technology and Liberty Project at the ACLU of Washington.

March 26
“Standing up for truth: How we build healthy communities”
On March 26, the Foley Institute hosted Emily Vraga, who discussed health communication and misinformation.
Professor Vraga explained that misinformation is one of the greatest risks facing the world today. While many people blame social media platforms for the problem, she said, those platforms also offer a powerful tool to fight it: “observed correction”. This occurs when someone publicly replies to a false post with accurate facts. The goal is not necessarily to change the mind of the person who posted the false information but to inform others who may see the post.
Vraga shared an example of observed correction and stated that the method is highly effective: in dozens of studies on topics including COVID-19 and nutrition, people who saw a false claim followed by a correction held more accurate beliefs than those who saw nothing at all. The correction thus serves as a safety net, catching people before they come to believe a myth. For the method to work, however, the correction must be visible; if the truth is buried or ignored, people will remember only the misinformation.
Further, Vraga revealed that despite the effectiveness of observed correction, many individuals are afraid to correct others, saying that speaking up is useless or that they fear being attacked online. There is also a major normative gap: most individuals appreciate seeing corrections, but they assume everyone else finds them annoying. Because we think society dislikes know-it-alls, she said, we often stay silent, allowing misinformation to spread unchallenged.
To solve this, Vraga suggested that instead of trying to eliminate all misinformation, we should focus on managing it. “There will always be misinformation, and we will always need correction for that reason,” she stated, adding that by encouraging a social norm that it is okay to be wrong and helpful to tell the truth, we can build healthier online communities. She concluded by noting that we do not need a perfect solution; we just need enough people willing to stand up for the facts.
Emily Vraga is Professor of Health Communication at the University of Minnesota’s Hubbard School of Journalism and Mass Communication.
