Critical Tech Talks

Produced by the Critical Media Lab at the University of Waterloo, Critical Tech Talks is a series of honest dialogues about technological innovation. On issues ranging from data harvesting to the conflict minerals in our smartphones, critical thinking is shifting the momentum towards positive change – towards Tech for Good®. Each of the university’s six faculties will co-host a techno-critical speaker and invite Waterloo students and members of the local tech sector to participate in an on-stage dialogue and lead a post-event discussion online. The series is sponsored by Communitech, the Office of Research at the University of Waterloo, and the faculties of Arts, Environment, Engineering, Health, Math, and Science.

Critical Tech Talk 4: Batya Friedman

October 28, 2022 | 4pm EDT | Hybrid In-person and Virtual

Register Here

Shaping Technology with Moral Imagination: Leveraging the Machinery of Value Sensitive Design

On October 28, we invite you to join us in-person or virtually via livestream for Critical Tech Talk 4: Batya Friedman.

Tools and technologies are fundamental to the human condition. They do no less than create and structure the conditions in which we live, express ourselves, enact society, and experience what it means to be human. They are also the result of our moral and technical imaginations. Yet, with our limited view, it is not at all obvious how to design and engineer tools and technology so that they are more likely to support the actions, relationships, institutions, and experiences that human beings care deeply about – a life and society of human flourishing.

Value Sensitive Design (VSD) was developed as an approach to address this challenge from within technical design processes. Drawing on over three decades of work, in this talk I will provide an introduction to value sensitive design, which foregrounds human values in the technical design process. My remarks will present some of value sensitive design’s core theoretical constructs. Along the way, I’ll provide examples of applying value sensitive design to robots for healthcare and to bias in computing systems, and demonstrate one toolkit, the Envisioning Cards, in the context of a design activity.

I will unpack these observations and their implications for artificial intelligence, machine learning technologies, and the environment. Thinking longer-term and systemically, I will bring forward a range of potential challenges and offer some constructive ways forward.

My comments will engage individual lives, society writ large, what it means to be human, the planet, and beyond.

For those online, please have scratch paper and a pencil handy for the design activity.

If attending in-person, please see the link below for parking options in Uptown Waterloo: https://uptownwaterloobia.com/uptown-parking/

Batya Friedman is a Professor in the Information School and holds adjunct appointments in the Paul G. Allen School of Computer Science & Engineering, the School of Law, and the Department of Human Centered Design and Engineering at the University of Washington, where she co-founded the Value Sensitive Design Lab and the UW Tech Policy Lab.

Dr. Friedman pioneered value sensitive design (VSD), an established approach to account for human values in the design of technical systems. Her work in value sensitive design has resulted in robust theoretical constructs, dozens of innovative methods, and practical toolkits such as the Envisioning Cards. Value sensitive design has been widely adopted nationally and internationally, with applications in architecture, biomedical health informatics, civil engineering, computer security, energy, global health, human-computer interaction, human-robot interaction, information management, legal theory, moral philosophy, tech policy, transportation, and urban planning, among other fields.

Additionally, value sensitive design is emerging in higher education, government, and industry as a key approach to address computing ethics and responsible innovation. Today, Dr. Friedman is working on open questions in value sensitive design including multi-lifespan design, and designing for and with non-human stakeholders – questions critical for the wellbeing of human societies and the planet.

Dr. Friedman’s 2019 MIT Press book co-authored with David Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination, provides a comprehensive account of value sensitive design. In 2012 Dr. Friedman received the ACM SIGCHI Social Impact Award and the University Faculty Lecturer award at the University of Washington; in 2019 she was inducted into the CHI Academy; in 2020 she received an honorary doctorate from Delft University of Technology; and in 2021 she was recognized as an ACM Fellow. She is also a stone sculptor and mixed media artist. Dr. Friedman received both her B.A. and Ph.D. from the University of California at Berkeley.

Student Respondents

Carl Tutton is undertaking a PhD in Sustainability Management. His background in end-of-life electronic waste policy and management systems and material flow analysis, along with long-standing interests and hobbies in consumer electronics, led him to the beginning of the product lifecycle: the design phase. His work seeks to analyze successful implementations of, and barriers to, sustainable design changes and more efficient product lifecycles.

Sid Heeg is a PhD student in Sustainability Management. Their research focuses on mis/disinformation surrounding farming and farm practices and how to bridge the knowledge gap between urban and rural populations. They are interested in learning how social media algorithms play a role in the continued spread of mis/disinformation and how it impacts sustainable farming practices.

Critical Tech Talk 3: AI Five Ways

May 16, 2022 | 7pm | Hybrid In-person and Virtual

Couldn’t join us live? Rewatch here: https://youtu.be/ADf8CUWiyww

As artificial intelligence grows more prevalent every day, even to the point of making life-and-death decisions for humans, principles of Responsible AI must be implemented to ensure safety, dignity, privacy, and autonomy for all. In this roundtable discussion, hear from five experts across different professional and disciplinary backgrounds on their approaches to the field and perspectives on the future of responsible AI.

Panelists

Hessie Jones is a Privacy Technologist, Venture Partner, Strategist, Tech Journalist, and Author. She is currently a Venture Partner at MATR Ventures and COO of Beacon, a social enterprise start-up focused on privacy solutions. She has 20 years of experience in start-up tech spanning data targeting, profile and behavioural analytics, AI, and, more recently, data privacy and security. Hessie advocates for AI readiness, education, the ethical distribution of AI, and the right to self-determination and control of personal information in this era of transformation. She is also a contributor at Forbes and GritDaily, a co-founding member of MyData Canada, a member of the Women in AI Ethics Collective, a board member with Technology for Good Canada, a co-founding member of the Education Reform Collective (combating anti-Black and anti-Indigenous racism in Canadian education), and a technology mentor and start-up advisor.

Kem-Laurin Lubin is a Ph.D. Candidate in English Language and Literature at the University of Waterloo, where she focuses on AI models used in apps deployed in digital citizen management, specifically judiciary, healthcare, and education-based apps. She explores how AI models are rhetorical in nature, emergent textual forms whose inherent discursivity and built-in bias negatively shape material outcomes for users. She is also the founder of the AI-HCI Working Group and the Executive Director of the not-for-profit organization Canadian Tech for Social Good, which focuses on AI and tech literacy for all Canadians.

Patricia Thaine is a Computer Science PhD Candidate at the University of Toronto and a Postgraduate Affiliate at the Vector Institute doing research on privacy-preserving natural language processing, with a focus on applied cryptography. Her research interests also include computational methods for lost language decipherment. She is the Co-Founder and CEO of Private AI, a Toronto- and Berlin-based startup creating a suite of privacy tools that make it easy to comply with data protection regulations, mitigate cybersecurity threats, and maintain customer trust. Patricia is a recipient of the NSERC Postgraduate Scholarship, the RBC Graduate Fellowship, the Beatrice “Trixie” Worsley Graduate Scholarship in Computer Science, and the Ontario Graduate Scholarship.

Reza Bosagh Zadeh is founder and CEO at Matroid and an Adjunct Professor at Stanford. His work focuses on Machine Learning, Distributed Computing, and Discrete Applied Mathematics. He has served on the Technical Advisory Board of Databricks and has been working on Artificial Intelligence since 2005, when he worked on Google’s AI research team. As part of his research, Reza built the Machine Learning algorithms behind Twitter’s who-to-follow system, the first product at Twitter to use Machine Learning. Reza is co-creator of the Machine Learning Library and the Linear Algebra Package in Apache Spark. Through Apache Spark, Reza’s work has been incorporated into industrial and academic cluster computing environments. In addition to research, Reza designed and teaches two PhD-level classes at Stanford: Distributed Algorithms and Optimization (CME 323) and Discrete Mathematics and Algorithms (CME 305).

Ben Armstrong is a Ph.D. Candidate in Computer Science at the University of Waterloo, where he is affiliated with the Artificial Intelligence Group. His research combines machine learning and social choice, with a particular focus on using machine learning techniques to develop and evaluate novel methods of voting, such as liquid democracy or sortition. He has also helped run several graduate and undergraduate courses on the social implications of computer science and the intersection of artificial intelligence, ethics, and law. Ben is a recipient of an NSERC Postgraduate Scholarship and multiple Ontario Graduate Scholarships.



Critical Tech Talk 2: Wendy Chun

February 10, 2022 | 7pm | Virtual

In her most recent work, Wendy Chun reveals how polarization is a goal—not an error—within big data and machine learning. Correlation, which grounds big data’s predictive potential, stems from twentieth-century eugenic attempts to “breed” a better future. Recommender systems foster angry clusters of sameness through homophily. Users are “trained” to become authentically predictable via a politics and technology of recognition. Machine learning and data analytics thus seek to disrupt the future by making disruption impossible. Chun will address these issues directly and engage in a disruptive conversation with two featured respondents and a live-streamed audience.

Couldn’t join us live? Rewatch here: https://www.youtube.com/watch?v=fJOntvvZ2wQ

About the speaker

WENDY HUI KYONG CHUN is Simon Fraser University’s Canada 150 Research Chair in New Media in the School of Communication and Director of the Digital Democracies Institute. She studied both Systems Design Engineering and English Literature at the University of Waterloo, disciplines that combine and mutate in her work on digital media. Her recent books include Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (2021), Updating to Remain the Same: Habitual New Media (2016), and Programmed Visions: Software and Memory (2011).

Moderator

Marcel O’Gorman is University Research Chair, Professor of English, and founding director of the Critical Media Lab (CML) at the University of Waterloo. Professor O’Gorman leads collaborative design projects and teaches courses and workshops in the philosophy of technology at the CML, which is located at the Communitech Hub. The role of the CML is to disseminate a philosophy of “tech for good.”

Student Respondents

Brianna I. Wiens (she/her) is a Postdoctoral Researcher in Communication Arts at the University of Waterloo and co-director of the qcollaborative, an intersectional feminist design lab. Her interdisciplinary work draws on her mixed-race queer activist-scholar experience to explore the digitally and culturally mediated phenomena of networked social movements and the politics of their design.

Queenie Wu (she/her) is a fourth-year undergraduate student studying Systems Design Engineering at the University of Waterloo. Her experience in digital product design influences her curiosity regarding the impacts of data and research processes on social systems through various lenses – including data journalism and urban planning.


Critical Tech Talk 1: Nicole Aschoff

November 8, 2021 | 5pm | Theatre of the Arts, University of Waterloo

Silicon Valley companies have brought digital technology into every sphere of modern life. But while Big Tech garners unprecedented power and profits, everyday existence becomes ever more deeply enmeshed in the circuits of capital. To what end? What are the limits of the digital frontier?

Couldn’t join us live? Rewatch here: https://www.youtube.com/watch?v=3UE7mgcYi5A

About the speaker

Nicole Aschoff is an editor, writer, and public sociologist focused on technology, labour, politics, feminism, the economy, and the environment. Her most recent book is The Smartphone Society: Technology, Power, and Resistance in the New Gilded Age. She examines the complex ways that people, institutions, and big systems intersect to forge the society we live in. Aschoff holds a PhD in sociology from Johns Hopkins University and currently works as a senior editor with Verso Books. Read more: nicoleaschoff.com

Moderator

Marcel O’Gorman is University Research Chair, Professor of English, and founding director of the Critical Media Lab (CML) at the University of Waterloo. Professor O’Gorman leads collaborative design projects and teaches courses and workshops in the philosophy of technology at the CML, which is located at the Communitech Hub.

Student Respondents

Neha Revella (she/they), MA Experimental Digital Media, Department of English, University of Waterloo. Neha is currently working as a research and project manager at Mozilla.

Nolan Dey (he/him), BASc Systems Design Engineering, University of Waterloo. Nolan is currently working as an AI research scientist for Cerebras Systems.