For the second article of his PhD research, Marlon Kruizinga is investigating the ethical reasoning processes of stakeholders in Lombardijen (Rotterdam) concerning AI-powered ‘smart’ cameras in the neighborhood.
This qualitative research is being carried out through participatory observation, for instance participation in neighborhood watch walks, as well as long-form, semi-structured interviews. In these interviews, stakeholders are prompted to reflect on topics such as the presence of security cameras, potential and actual AI applications associated with such cameras, various forms of camera data-sharing, and different types of (‘smart’) camera users (e.g. police, municipality, residents, private companies). Stakeholders are asked what they think about these topics, how certain scenarios do or would make them feel, and why. Based on this research, the eventual aim of the article is to construct a framework of the ethical reasoning processes present among Lombardijen stakeholders on the topic of ‘smart’ cameras, which should help clarify and inform future ethical discourse on this topic, potentially both inside and outside the localized context of Lombardijen, Rotterdam.
So far, qualitative interviews have only been conducted with residents of Lombardijen, and the article is still largely in the data-gathering stage. However, ahead of further data gathering and analysis, there are already two salient, if more meta-level, insights worth sharing from the proceedings.
The first insight is a methodological one, concerning participatory research. Almost all of the interview participants recruited so far were reached through connections made at earlier participatory research events. These include participatory observation during neighborhood watch walks, participatory observation while picking up trash with local residents, and participatory workshops where residents were asked for their input on safety in the neighborhood while also being given guidance in using ChatGPT for text summarization and image generation (more can be read about this workshop in an earlier blogpost). Overall, PhD researchers have been active and engaged with residents and other stakeholders in Lombardijen for over a year. Lombardijen has already had its fair share of researchers coming through over the years, and in such cases research fatigue among residents is a real possibility. The strong participatory engagement was, in part, a conscious decision made with that fact in mind.
A key methodological take-away from Marlon Kruizinga’s research, then, seems to be that even in heavily researched areas, a clear commitment to participatory approaches and events can still generate significant willingness among residents to engage with the research. This engagement even seems to persist beyond the point at which residents are directly being given something back (such as help on their neighborhood watch walks, picking up trash with them, or teaching them something about an LLM). This may be due especially to demonstrating to residents a degree of trustworthiness and personal engagement with the neighborhood and its interests as they themselves see them. While this has, in the Lombardijen case, demanded a significant time investment on the part of the researchers, the positive results merit serious consideration of such a long-term commitment to participatory research that seeks to ‘give back’ right from the start.
The second insight from the research thus far involves the answers residents give to questions concerning ethical opinions or objections around ‘smart’ cameras. While residents do give insightful answers about the values and principles on which they base their reasoning about cameras and AI, and about where concrete limits or conditions lie for them, they also consistently indicate that they do not deeply develop or voice certain ethical opinions or objections because of a perceived ‘futility’. This futility lies in their perception that, even if they did hold a stronger opinion, or voiced that opinion, it would not affect actual policy or technological development around cameras and AI. Put plainly, residents seem at times to believe that tech companies, their consumers, and/or policymakers are so committed to technological innovations like AI (and their implementation) that there is no point in formulating a coherent ethical objection to them. Better, in their eyes, to just ‘go with the flow’, and at most to ask for clear explanations of what is being done.
In the eyes of an empirical-ethical researcher, this demonstrates a clear bottleneck for future ethical deliberations on public safety AI generally. If certain stakeholders do not ultimately believe that their opinions can make a difference in the process of technological advancement and implementation, then the role of that group of stakeholders in helping shape an ethically responsible approach to technology is curbed from the outset. This insight may imply that, beyond merely allowing all stakeholders to voice their ethical perspectives and concerns, it is also necessary going forward for policymakers and private sector players to demonstrate to other stakeholders (such as residents) that they may change their policies or technologies in response to expressed concerns. Building this kind of system-trust would seem essential to ensuring healthy, deliberative, and inclusive development of responsible AI in the future.