Abstract

As public display networks become open, novel types of interactive applications emerge. In particular, we expect applications that support user-generated content to rapidly gain importance, since they provide a tangible benefit for the user in the form of digital bulletin boards, discussion platforms that foster public engagement, and applications that allow for self-expression. At the same time, such applications introduce several challenges: first, they need to provide suitable means for passersby to contribute content to the application; second, mechanisms need to be employed that provide the display owner with sufficient control over content moderation; and third, users’ expectations with regard to the posting procedure need to be well understood. In this paper we present UniDisplay, a research prototype that enables users to post text and images to a public display. We report on the design and development of the application and provide early insights from its deployment in a university setting.


Introduction

Urban spaces are getting crowded with public displays [5], from small screens showing menus in bars to large ones covering entire building facades. Although these are mainly singular installations showing PowerPoint slides and still images, it is not hard to imagine that they will soon be connected over the Internet to form a novel communication medium open to a variety of content and applications – so-called open display networks [3]. Previous research has shown content creation to be one of the crucial problems for public displays, and such content is often expensive, both in terms of human resources and money [9]. On the other hand, creating content for social networking services is considered “dirt cheap”; e.g., Twitter reports a staggering 340 million tweets posted daily. Integrating user-generated content, e.g., tweets, is a possible solution for content creation in open display networks and has been explored in prior work [7]. It would also allow public displays to be integrated more deeply into users’ “communicative ecology” [6]. Yet, posting user-generated content on public displays comes with the problem of content moderation, as explicit and inappropriate content could appear. In addition, posted content might be problematic in other ways for a display’s particular physical location. For example, previous work reported on inadvertently posted corporate information [2]. Prior research suggests different ways of moderating public display content, including pre-moderation [10], post-moderation [2, 4], and moderation based on audience feedback [1]. However, these works only applied a particular strategy without closely investigating its effects. In our work we aim to gain an in-depth understanding of pre-moderation, as we believe this to be a central prerequisite in open display networks to encourage display owners to allow user-generated content.
Prior work looked at the impact of labelling content [8], whereas we focus on the delay times caused by the review process, which we believe to be a major challenge with pre-moderation. The fact that posts do not appear instantly raises the following questions: (1) What do users expect about when their content will appear? (2) What effects does the delay cause? (3) How can users be notified that content is under review and when it will appear? If a system fails to address these challenges, users will wonder where the problem occurred and either resend the content or stop using the display. To investigate pre-moderation of content on public displays, we conducted two studies. First, we distributed a survey that investigated users’ expectations of the optimal timing for pre-moderated content. We found 10 minutes to be an acceptable delay for more than 70% of the users. Within this time frame, different forms of content moderation are possible, including (a) automatic moderation, (b) moderation by the owner, and (c) crowdsourcing-based approaches. We then developed an application allowing people to send tweets to a public display network. We deployed the application in the wild on five connected displays and introduced an artificial delay to investigate the effect of different pre-moderation mechanisms. The contribution of this work is twofold. First, we report on user preferences for content upload waiting times on public displays (a) with and (b) without moderation. Our results show that if users are aware of moderation, more than half of them are willing to accept delays of one hour or more. For applications that do not communicate the moderation process, a delay of up to 10 minutes is still acceptable for the majority of potential users. Second, we provide insights into the effect of a moderation delay on users’ behavior. Through an empirical study we found that even short waiting times of 90 seconds can confuse users. Furthermore, we show that the longer the delay, the fewer posts appear on a display. Finally, the delay time does not seem to influence a user’s decision to continue posting to a display.

Prototype

Based on the findings from the online survey, we designed and implemented UniDisplay, a web-based public display application that enables users to post short text messages and images. In this way we wanted to provide an easy-to-use, casual application that would (a) attract as large a user group as possible, (b) enable us to incorporate different authentication mechanisms, and (c) allow us to employ different moderation strategies.

Application

We implemented a simple application that shows the 12 most recent posts made to the display. Posts can consist of text messages (max. 140 characters) or of a square image. As new posts arrive, old posts vanish from the screen. To keep the application simple and concentrate on users’ expectations, we do not provide any interaction techniques beyond posting messages. In the future we may incorporate more sophisticated interaction techniques, such as retrieving content or liking posts. The display client runs in a full-screen browser (Figure 1). The screen layout adapts to the browser window, which allows the client to be used on displays with different resolutions, aspect ratios, and orientations.
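The client-side behavior described above can be sketched as two small functions. This is a minimal illustration, not the actual UniDisplay code; the function and field names (`addPost`, `isValidPost`, `post.type`) are hypothetical:

```javascript
// Maximum number of posts shown on the display at once.
const MAX_POSTS = 12;

// Prepend a new post and drop anything beyond the 12 most recent,
// so old posts vanish from the screen as new ones arrive.
function addPost(posts, newPost) {
  return [newPost, ...posts].slice(0, MAX_POSTS);
}

// A post is either a square image or a text message of at most
// 140 characters.
function isValidPost(post) {
  if (post.type === 'image') return true;
  return post.type === 'text' && post.text.length <= 140;
}
```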

Authentication and Posting

To reflect the different authentication mechanisms, we implemented several ways of posting: through a simple web form, by sending an email, and by posting to a social network. A text message at the bottom of the screen explains how to post a message.

  1. Web Form: A simple web form allows for posting content. The form reflects cases without authentication.
  2. Email: Furthermore, we implemented a way to send an email to the display. The system parses the content of the email and posts it onto the display. To be able to post via email, users need to verify their address when posting for the first time. During testing, we realized that this method may introduce a significant delay into the posting procedure, due to the available bandwidth and the mechanisms that check for spam and viruses.
  3. Social Network: Finally, to reflect cases where users authenticate via a social network, we allow users to authenticate and post via Twitter. We created a Twitter account for the display and use the streaming API, which allows listening to a Twitter user stream. To post to the display, users simply need to mention the display account name in their tweet.
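The three channels above deliver posts in different shapes, so the server has to bring them into a common form before storage. The following sketch illustrates one way to do this; the function name `normalizePost` and the record fields are hypothetical, not taken from the actual implementation:

```javascript
// Normalize a post from any of the three channels into a single
// record shape: { text, userId, channel }. Web form posts carry no
// authentication, email posts are keyed by the verified sender
// address, and tweets by the Twitter screen name.
function normalizePost(channel, payload) {
  switch (channel) {
    case 'web':
      return { text: payload.text, userId: null, channel: 'web' };
    case 'email':
      return { text: payload.body, userId: payload.from, channel: 'email' };
    case 'twitter':
      return {
        text: payload.text,
        userId: '@' + payload.user.screen_name,
        channel: 'twitter',
      };
    default:
      throw new Error('unknown channel: ' + channel);
  }
}
```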

Moderation

Each post that is sent to UniDisplay is stored on the UniDisplay server together with a timestamp and, if available, a user ID (email address or Twitter ID). Hence, we can easily exclude explicit content from being shown on the screen through post-moderation. The user ID also allows us to contact the poster later, e.g., to send them the URL of an online survey. The display client polls new posts from the database at regular intervals. This interval can be configured on the server, introducing an artificial delay. In this way we can simulate a moderation process. To simulate no moderation or post-moderation, we show the content on the display immediately. For pre-moderation we can set delays reflecting the time usually required to moderate the content, for example 0 seconds for automated moderation based on a blacklist, 30 seconds to simulate manual moderation, or 90 seconds for community or crowdsourcing-based moderation.
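The simulated pre-moderation described above boils down to a simple visibility rule: a post is only released to the display once the configured delay has elapsed since its timestamp. A minimal sketch (function name `visiblePosts` is hypothetical):

```javascript
// Return only the posts whose artificial pre-moderation delay has
// elapsed. `posts` carry millisecond timestamps, `delaySeconds` is
// the configured delay (e.g. 0, 30, or 90), `now` is the current
// time in milliseconds.
function visiblePosts(posts, delaySeconds, now) {
  return posts.filter(p => now - p.timestamp >= delaySeconds * 1000);
}
```

With a delay of 0 seconds this behaves like unmoderated or post-moderated content; larger delays hold posts back as a manual or crowdsourced reviewer would.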

Deployment

Implementation

UniDisplay was implemented as a client-server application. The Node.js-based server stores the content posted to UniDisplay via the different channels in a MySQL database. The display client was developed with HTML, JavaScript, and the template language EJS (Embedded JavaScript). Communication between server and display client is realized by means of a REST API.

To enable posting via Twitter we use the streaming API, which allows listening to a Twitter user stream. The user stream receives all tweets, retweets, and mentions of the specified Twitter user. Additionally, the user stream provides deletion notices, disconnect messages, friends lists, and events such as new followers or favorited tweets. Since user streams do not provide messages with specific hashtags, we created a Twitter account for the displays. By mentioning the account name in a tweet (“@unidisplay”), the message can be detected by the server and posted to the display. On restart, the application uses the REST API to reload earlier messages.
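The mention-based detection can be sketched as follows. The account name “@unidisplay” is from the paper; the function names and the exact filtering logic are hypothetical, as a stand-in for whatever the server does with incoming stream events:

```javascript
// The display's Twitter handle, mentioned by users in their tweets.
const DISPLAY_ACCOUNT = '@unidisplay';

// Does the tweet mention the display account (case-insensitively)?
function mentionsDisplay(tweetText) {
  return tweetText.toLowerCase().includes(DISPLAY_ACCOUNT);
}

// Strip the mention itself and collapse whitespace, leaving only
// the message to show on the display.
function extractMessage(tweetText) {
  return tweetText
    .replace(new RegExp(DISPLAY_ACCOUNT, 'ig'), '')
    .replace(/\s+/g, ' ')
    .trim();
}
```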

To enable easy administration of the display, we implemented an admin interface that shows all posts in a table. Individual posts can be selected and deleted from the database. An “emergency” button allows the content of the screen to be instantly replaced with new content in case inappropriate content is posted. This feature was implemented so that we could quickly respond to requests by the owners of the places where the displays are deployed.

We deployed the web client on five displays across the campus (see Figure 5), where it ran 24/7 for eight weeks. Two displays were installed in the entrance areas of faculty buildings in close proximity to lecture theaters. A third display was deployed in the vicinity of a coffee kitchen shared by two research groups in one of the university buildings. The fourth display was deployed in a university cafeteria, mounted on the wall close to tables but visible from almost any location inside the cafeteria. The last display was located in the university’s main canteen building, which has a throughput of several thousand people per day; it stood at the intersection of two aisles with tables in the vicinity. Passersby at all displays included both university employees and students attending lectures and courses.

To simulate different moderation strategies, we added an artificial delay of 0-90 seconds that was changed every two hours. To minimize conflicts with other stakeholders due to inappropriate content, we decided to enable posting via Twitter only. Additionally, we gave several employees located in close proximity to the displays access to the administration interface, and hence the opportunity to delete particular posts or override the content of a display if it was spammed with offensive or inappropriate content. At the same time, we asked the administrators to use this mechanism carefully and to double-check with us when in doubt. During the eight-week deployment there was only one occasion where we decided to override the content of a display due to inappropriate content.
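One simple way to rotate the delay condition every two hours is to derive it from the clock. This is a hypothetical sketch, not the paper's actual scheduling scheme; it assumes the three delay conditions (0, 30, and 90 seconds) mentioned earlier cycle in a fixed order:

```javascript
// Delay conditions in seconds, cycled through in order.
const DELAYS = [0, 30, 90];

// Pick the active delay for a given time: the condition switches
// every two hours, cycling through DELAYS.
function delayForTime(date) {
  const twoHourSlot = Math.floor(date.getUTCHours() / 2);
  return DELAYS[twoHourSlot % DELAYS.length];
}
```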

Content

During the deployment, 519 messages were posted by 95 different users. To analyze the content we conducted a data walkthrough: we extracted all posts from our database and printed them as they appeared on the display, including the ID of the poster as well as the timestamp. We then reviewed, categorized, and analyzed the data to find interesting patterns and relationships (see Figure 6). 82% of the posts were pictures. We categorized the posts into the following categories: statements, communication, advertising, self-expression, persons, display, test messages, information, offensive content, and others. An analysis of the timestamps shows that most posts were made during lunch hours (12pm-2pm) and around 5pm, when people usually leave the premises. We detected a number of practices and patterns in the data that we summarize in the following:

  1. Taking Over the Display Space: We observed a number of cases where people tried to take over the entire display space through subsequent posts. One strategy is to separate an image into 12 tiles and post them in a way such that they would be assembled into a large image filling the entire screen. This suggests that exclusive use of a display is of value to the users and could be exploited in the future to foster interaction with the display or to incorporate new business models.
  2. Digital Honeypot Effect: Another interesting observation is that sometimes a post seems to trigger what we refer to as a digital honeypot effect. After the first post appears, other display users start to post content themselves. While the trigger is usually a controversial post (e.g., about a local soccer team), we believe that replies were often fostered by the fact that other people standing or sitting in the display vicinity noticed the arrival of a new post and thus observed it more closely than they usually would if simply passing by the display.

Observations

During the deployment we were also able to observe some users in front of the displays and to overhear their discussions about the display and the messages. Some were unsure whether the display really worked and discussed whether a moderation process was going on in the background, since 90 seconds struck them as a very long time for a message to appear. Users often stood in front of the displays in groups, chatting with their friends while posting content via their mobile phones, but some people also sat on the floor in front of the displays with their laptops so they could post content while watching the display.

Interestingly, people who were not in front of the displays, or who had never even seen them, also started to post content because they had heard or read about our displays. This effect seems to occur when there is no need to be in front of the display while posting, and it could also lead to spamming or offensive content. We had not anticipated people posting without being in front of the display, assuming this would not be attractive for them: such users do not know what the display looks like, and they do not know if or when their messages are shown, yet they post anyway. This effect has to be considered when developing applications with user-generated content for public displays.

Finally, requiring a Twitter account for posting seems to minimize the amount of inappropriate content. Some people created a Twitter account just to be able to post a message, and they all seemed highly motivated to be part of the community and did not post any offensive content. In total we deleted 2 messages. We nevertheless observed critical posts. For example, some were created out of frustration because the display in the coffee kitchen had lost its Internet connection and did not pull new content. This is an indicator that if displays provide a benefit, they may indeed become an important artefact in people’s everyday life.

Related Publications

M. Greis, F. Alt, N. Henze, and M. Memarovic, “I Can Wait a Minute: Uncovering the Optimal Delay Time for Pre-Moderated User-Generated Content on Public Displays,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2014.

F. Alt, N. Memarovic, M. Greis, and N. Henze, “UniDisplay – A Research Prototype to Investigate Expectations Towards Public Display Applications,” in Proceedings of the 1st Workshop on Developing Applications for Pervasive Display Networks, 2014.