Abstract

The proliferation of digital signage systems has prompted a wealth of research that attempts to use public displays for more than just advertisement or transport schedules, such as their use for supporting communities. However, deploying and maintaining display systems “in the wild” that can support communities is challenging. Based on the authors’ experiences in designing and fielding a diverse range of community-supporting public display deployments, we identify a large set of challenges and issues that researchers working in this area are likely to encounter. Grouping them into five distinct layers – (1) hardware, (2) system architecture, (3) content, (4) system interaction, and (5) community interaction design – we draw up the P-LAYERS framework to enable a more systematic appreciation of the diverse range of issues associated with the development, the deployment, and the maintenance of such systems. Using three of our own deployments as illustrative examples, we describe both our experiences within each individual layer, as well as point out interactions between the layers. We believe our framework provides a valuable aid for researchers looking to work in this space, alerting them to the issues they are likely to encounter during their deployments and helping them plan accordingly.

 

N. Memarovic, M. Langheinrich, K. Cheverst, N. Taylor, and F. Alt, “P-LAYERS — A Layered Framework Addressing the Multi-Faceted Issues Facing Community-Supporting Public Display Deployments,” ACM Transactions on Computer-Human Interaction (TOCHI), 2013.

Introduction

Today’s public spaces see an increasing deployment of digital displays: they list interesting facts and events at universities, display schedules and news in metro stations, present special offers in shopping malls, or advertise a product on a building facade. Yet their predominant use as simple slide presenters and video players has seen dwindling “eyeballs” and led to display blindness [Huang et al. 2008; Müller et al. 2009] – an effect where viewers ignore much, if not most, of such animated advertisements. Researchers have started to suggest a range of alternative use cases for public displays: they can allow locals to share historical photos of a place [Taylor and Cheverst 2009] (discussed in detail in section 2), display the logos of football clubs that coffee-shop patrons are supporting [José et al. 2012], or summarize the interests of people in the vicinity [McCarthy et al. 2001; McDonald et al. 2008; McCarthy et al. 2009]. In all of these examples, public display technology is used to convey a sense of community to the display’s viewers by stimulating interaction with, and awareness of, other community members. In this way, public displays help to enrich the social functions of public spaces, which provide a place where people can socialize, relax, and learn something new – ultimately creating emotional connections with others [Carr et al. 1992].

The design [Memarovic et al. 2012a], deployment [Ojala et al. 2011] and evaluation [Cheverst et al. 2008] of public display systems to support community interaction is challenging. Ultimately, the goal is to stimulate some form of community interaction. This can be as simple as encouraging people in the display’s vicinity to talk to each other [Memarovic et al. 2012b] or, more indirectly, by allowing community membership to be expressed in some form, e.g., through badges [José et al. 2012]. Displays can be used to explicitly exchange information among community members [Churchill et al. 2003; Redhead and Brereton 2009; Taylor and Cheverst 2009; Alt et al. 2011b] (Alt et al. 2011b will be discussed more in section 4) or to prompt passersby to play for their community in a competitive game running on the display [Memarovic et al. 2011b]. These different types of interventions typically require different system interaction capabilities. Some need active touch-screen input; others work with short-range wireless communication devices, such as Bluetooth-enabled phones. Displays might be located outdoors in busy town centers or inside quiet village cafes. These interaction choices, in turn, have a strong impact on the type of content that is needed and/or supported. In some cases, content can be contributed by community members (e.g., classified advertisements on a bulletin board); in other cases, editorial content can be more suitable (e.g., questions for a trivia quiz game). Depending on both the source of content and the envisioned interaction with it, different system architectures are needed. Some interventions might require cross-device access (e.g., accessing classifieds from a website or mobile phone [Alt et al. 2011b]) while others need to support content caching to cope with disconnection problems [Taylor and Cheverst 2009; Memarovic et al. 2011b]. Last but not least, appropriate hardware is required. 
In some cases it is possible (or even necessary) for researchers to introduce their own customized hardware into a setting (e.g., a custom installation in a bus underpass [Clinch et al. 2011]) while other deployments can (or must) use preinstalled hardware (e.g., an existing display network in a city [Alt et al. 2011b; Memarovic et al. 2011b]).

The five above-mentioned factors – community interaction design, system interaction, content, system architecture, and hardware – can be arranged in a layered fashion (cf. Fig. 13) to illustrate the dependencies between them, as well as their constructive structure in the context of community-building public display deployments.

The factors – and the interplay between them – emerged from our own experiences designing, developing, and evaluating public display systems “in the wild” that supported communities [Taylor and Cheverst 2009; Alt et al. 2011b; Memarovic et al. 2011b]. We believe that these three deployments – Wray, FunSquare, and Digifieds – form a representative set of systems for supporting community interaction. Their layered arrangement, together with a set of analysis methods (described in section 5), forms a framework that can be used both before and during a community-supporting public display deployment in order to allocate resources, uncover hidden issues, and troubleshoot emerging problems. We call this the P-LAYERS framework (from “Public display LAYERS” and pronounced “players”).

The P-Layers Framework

The three summaries in sections 2-4 of our own public display deployment efforts hopefully illustrate the obdurate problems associated with stimulating, capturing, and examining community interaction effects “in the wild”. This difficulty is perhaps best captured in a quote from a FunSquare game user: “OK idea, bad execution.” In retrospect, we can identify five main challenges researchers need to address in these deployments. In many cases, the hardware hindered the smooth operation of the system. We also underestimated the complexity of the system architecture. Getting appropriate and fresh content that is appealing for the community was challenging, and offering intuitive ways of interacting with the system – in particular for passers-by – continues to be a problem. All of these factors affected what we were primarily interested in evaluating: actual community interaction. We can layer these five factors into a framework that describes challenges of building public display systems that support community interaction: the P-LAYERS framework (from “Public display LAYERS”, pronounced ‘players’), as shown in Fig. 13.

The framework attempts to capture the difficulties intrinsic to building and assessing public display systems that aim to foster community interaction “in the wild”. In the following sections, we provide a detailed explanation of the different layers of the P-LAYERS framework, starting from the bottom. For each layer we will present a joint summary from our development and deployment experiences.

Hardware

Hardware sits at the bottom of the framework, signifying its fundamental importance as the foundation for any display-based deployment. If the hardware does not fulfill the requirements and expectations of both users and researchers, higher layers will be affected. In our deployments, three main insights emerged:

  1. The importance of matching development and deployment hardware,
  2. The importance of communicating screen affordances, and
  3. The reliability of hardware components and availability of replacement parts.

Having the same development and deployment hardware is critical, since any differences between the two can lead to contrasting user experiences. For example, the Wray Photo Display had exactly the same hardware for development and deployment. In contrast, for FunSquare and Digifieds, the hardware used for development in the lab was different from that used during deployment. These differences between lab and “in the wild” setups resulted in very different user experiences in the two settings. In the case of FunSquare, once the application was developed and moved from the lab setting to the UBI-Hotspots installation, one of the most frequent complaints from users was that the touch screens were “inaccurate, hard to use” or that the application was “nice, but reacted a bit slowly”. These problems were hard to spot during our test trials in the lab, since the lab had a later version of the hardware and a more reliable Internet connection (which was required by the FunSquare application). Similarly, the novel phone-display touch interaction modality developed for Digifieds that supported the transfer of content through touching the screen with the phone (cf. section 4.1) had to be dropped due to the use of capacitive screens “in the wild”, as opposed to the lab, where resistive touch screens were available. The reason was that only the resistive touch screens available in the lab were able to detect touches from a mobile phone. This immediately eliminated the potential use of this interaction modality in Oulu.

Once the system is rolled out “in the wild”, proper performance depends on the reliability of the hardware components. For example, in all three of our deployments there were considerable issues with Internet connectivity that impacted the user experience. Both FunSquare and Digifieds used the publicly available panOulu free Wi-Fi network. Occasionally, bandwidth decreased or connectivity broke during peak hours, i.e., when the citizens used the network most – these peak hours usually overlapped with those of the UBI-Hotspots. Since fresh content was fetched over the Internet, lower network throughput created “jittery” interaction with the system, which led to a frustrating user experience. Similar problems were encountered in the early phases of deployment of the Wray Photo Display, where an experimental mesh network was used in the village and the early system architecture required good levels of connectivity.

The central hardware component in the system is the display itself. Therefore it is important to consider how to communicate its affordances to users. For example, the resistive touch input featured on the UBI-Hotspots in Oulu was very much in contrast to what can be found on today’s smartphones and other personal devices with high-quality capacitive touch screens. Most users expected to get the same user experience as with their mobile phones and were not satisfied when the screen did not provide it. User expectations might have been better aligned with the displays’ capabilities if the design had made users aware that the touch screens were not as sensitive as the ones they are used to [Chalmers and MacColl 2003].

Even reliable hardware has a certain lifespan. For that reason it is important to plan for replacement parts. This is especially true for long-term deployments that include a full transfer of system operations to the community. For example, in the case of the Wray Photo Display, a hard drive failure occurred in one of the Mac Minis. This caused issues for sustainability and handover to the community [Taylor et al. 2013]. Hence it is advisable to check hardware reliability and/or the warranty period and, for contingency, ensure that compatible hardware is still available if replacements are required.

System Architecture

The overall system architecture of a public display system for supporting community interaction can appear straightforward: a touch screen as an I/O device, a local computer running a Web server or similar digital signage software, and an Internet connection for remote administration. However, by going beyond traditional digital signage systems that only need to play pre-determined content, two major new challenges for the system architecture arise: interactivity and durability. Interactivity means not only direct user-to-screen interaction, but also interaction between different deployment sites, or multiple interaction capabilities (e.g., via touch screen, phone, and Web). Durability refers to the fact that – ideally – such deployments will run for months, if not years, and thus need to take into account long-term maintainability. Four main issues that impact interactivity and durability emerged from our deployments:

  1. System scalability.
  2. Agility to follow changes in third-party services and browsers.
  3. The challenge of finding the right level of complexity.
  4. The challenge of supporting appropriate interaction modalities.

The issue of system scalability is best illustrated by contrasting the two deployment settings, i.e., Wray and Oulu. From the beginning, the system architecture of FunSquare and Digifieds had to be adapted for a citywide display network, which could potentially comprise hundreds of displays. In contrast, the Wray Photo Display deployment did not need to consider scalability: its focus was more on a technology probe based deployment within an iterative user-centered design process. The Wray Photo Display’s system architecture was originally designed for a single display. However, once more than a single display was needed, the architecture had to be modified for the new conditions. In other words, it had to support decision making about where, i.e., on which display, to show content. Still, system scalability goes beyond just deciding where to show content. In the beginning, all the pictures uploaded to the Wray Photo Display were stored on the local computer. It is not hard to imagine that sudden success would not have scaled: with hundreds or thousands of users uploading their pictures, the local computer would quickly run out of storage. Considering such a situation from the beginning would have required a different approach, where pictures would be uploaded to a cloud-based or professionally managed service.

However, relying on third-party services carries its own issues. In order to access the available sensors on the UBI-Hotspots, FunSquare relied on custom-made RESTful APIs, one per sensor. During FunSquare’s development these APIs were further developed and updated, meaning that whenever the parameters returned by a service changed, the code had to be updated to ensure that content coming from the service would still be received. Also, the UBI-Hotspots ran a specific version of the Mozilla Firefox browser (3.6), which likewise had to be reflected in the code: any browser update on the hotspots would have required corresponding code changes. Upgrading to the latest browser version on the UBI-Hotspots would allow the use of the latest web technologies, e.g., HTML5. However, considering that the system architecture of the UBI-Hotspots was built when that specific version was the latest one (and that all the applications running there were built for it), upgrading to the latest version would cause major problems for the system. These examples highlight the need for agility to follow changes in third-party services and software, e.g., browser versions.
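
One architectural tactic for coping with such moving targets is to isolate each third-party API behind a thin adapter, so that a change in a service's parameters only requires touching a single class. The sketch below illustrates the idea; the sensor, field names, and version differences are invented for illustration and do not reflect the actual UBI-Hotspots APIs:

```python
# Adapter-layer sketch: each third-party sensor API is wrapped behind a
# stable internal interface, so upstream parameter changes are absorbed
# in one place. All names and fields here are hypothetical.

class TemperatureAdapter:
    """Translates a raw service response into the app's stable format."""

    def __init__(self, fetch):
        # 'fetch' is injected so the adapter can be tested without network I/O.
        self._fetch = fetch

    def reading(self):
        raw = self._fetch()
        # Suppose v1 of the service used 'temp' and v2 renamed it to
        # 'temperature_c': only this adapter needs updating, not the app.
        value = raw.get("temperature_c", raw.get("temp"))
        return {"sensor": "temperature", "value": value, "unit": "C"}


# Simulated responses from two versions of the same upstream service.
v1 = TemperatureAdapter(lambda: {"temp": 3.5})
v2 = TemperatureAdapter(lambda: {"temperature_c": 3.5, "station": "centrum"})

assert v1.reading() == v2.reading()  # application code sees one format
```

The same pattern applies to browser dependencies: feature checks concentrated in one module age more gracefully than version assumptions scattered throughout the code.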

The above examples also illustrate some of the choices that can influence the complexity of the system architecture. An obvious rule of thumb for finding the right level of complexity would be to start simple and add complexity later. This was most evident with our FunSquare ambient mode deployment. During development, we spent a considerable amount of time brainstorming how to display the most appropriate “fun fact” for a given situation. The ranking system we came up with (for details see [Memarovic et al. 2011b]) ended up using a large number of factors (unit, numerical magnitude, timeliness of the context information, overall usage of a content category, number of uses of a particular content fragment, and user feedback). This added to the complexity of the overall architecture, both in terms of the decision process (algorithm) and data management (meta-data). In our subsequent lab tests, the selection procedure seemed to work well. However, during observations and interviews, it turned out that most people had clear preferences towards certain categories and would have liked a simple category-selection mechanism. While our complex selection process worked, a much simpler manual system might have worked just as well, with much lower complexity and greater durability.
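
A multi-factor ranking like the one described above can be pictured as a weighted scoring function over candidate items. The sketch below is a deliberately simplified illustration with invented weights and field names; the actual FunSquare algorithm is described in Memarovic et al. [2011b]:

```python
# Simplified sketch of multi-factor content ranking, loosely modeled on
# the factors above (timeliness, prior usage, user feedback). Weights
# and field names are illustrative, not those of the real system.

def score(item, now):
    freshness = 1.0 / (1.0 + (now - item["sensed_at"]))  # prefer recent context
    novelty = 1.0 / (1.0 + item["times_shown"])          # avoid repetition
    feedback = item["thumbs_up"] - item["thumbs_down"]   # explicit preference
    return 0.5 * freshness + 0.3 * novelty + 0.2 * feedback

def pick_fun_fact(candidates, now):
    return max(candidates, key=lambda item: score(item, now))

facts = [
    {"text": "fact A", "sensed_at": 10, "times_shown": 5,
     "thumbs_up": 1, "thumbs_down": 0},
    {"text": "fact B", "sensed_at": 59, "times_shown": 0,
     "thumbs_up": 0, "thumbs_down": 0},
]
best = pick_fun_fact(facts, now=60)  # the fresher, unseen fact wins here
```

Every additional factor adds meta-data to store and parameters to tune, which is exactly the complexity-versus-benefit trade-off noted above.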

Finding suitable input and output modalities for interaction can also contribute to durability. For public display systems this includes finding the appropriate way(s) of how (in what form) and where (on what device/display) to present content. In Digifieds, how information was presented depended on where it was accessed. For example, when a digified was presented on a display client it included high-resolution images, while on the mobile phone downsized images were used. Similarly, the display offered various controls for retrieving content, as well as a ‘like’ and an ‘abuse’ button – none of which were needed for the mobile client.
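
The device-dependent presentation described above amounts to a small content-negotiation step at delivery time. A minimal sketch, with client names and resolutions invented for illustration:

```python
# Minimal content-negotiation sketch: the display client receives the
# high-resolution variant, mobile clients a downsized one. Client names
# and resolutions below are hypothetical.

VARIANTS = {
    "display": (1920, 1080),  # full-size image for the public display
    "mobile": (480, 270),     # downsized image for phones
}

def variant_for(client):
    # Unknown clients fall back to the smallest (safest) variant.
    return VARIANTS.get(client, VARIANTS["mobile"])

assert variant_for("display") == (1920, 1080)
assert variant_for("tablet") == (480, 270)  # fallback for unknown clients
```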

Content

“It’s the content, stupid!” one is tempted to state, slightly adapting a well-known U.S. election campaign phrase. As Clinch et al. point out, content creation is one of the most underestimated resource costs in digital signage systems [Clinch et al. 2011]. Given the envisioned long-term deployments and the strong need for content that resonates with the target community, four challenges arise:

  1. Finding and accessing appropriate sources for content.
  2. Determining a suitable content format.
  3. Identifying the meta-data requirements for the content, given a particular setting.
  4. Managing content, both by users and by system administrators (moderators).

Appropriate content seeding needs to be resolved before a public display system rolls out into “the wild”. The three services that we worked on illustrate two different choices for seeding content. FunSquare represents a public display application that uses content from a service by connecting two different content items (i.e., information that is sensed by the display and information that is stored in a database). On the other hand, both Digifieds and the Wray Photo Display required people to post their own text and images, i.e., they both rely on user-generated content. Both approaches have advantages and disadvantages, and neither is inherently better suited or easier to use. A user-generated system requires an initial phase where it is seeded with content, as users are less motivated to fill an empty system (cf. section 2.3). A service-based content system, on the other hand, needs to ensure that its content stays fresh and relevant, as it does not enjoy the benefit of community members themselves updating it.

Determining a suitable content format is equally important as resolving the appropriate content source. As mentioned at the end of the previous section, Digifieds had a high- and a low-resolution version of a given image, used depending on where the digified was shown: the high-resolution image for the display client and the low-resolution one for the mobile phone. Another important property of the content format in Digifieds and the Wray Photo Display is that both supported open and commonly used standards, e.g., JPEG, which ensure widespread use and audience reach. Considering support for the latest content types is also important, since it can have a big impact on the system architecture. For example, if an application requiring HTML5 content, e.g., audio or video through the getUserMedia API, were to be deployed in Oulu, the system architecture would need to move to a browser version that supports it.

Once the public display system for stimulating community interaction is up and running, its content has to be dynamically selectable and adaptable for different situations and communities. In order to allow content to adapt, we can augment it with meta-data. Meta-data can allow for: 1) better content distribution, i.e., the correct content appearing on the correct display; 2) expressing a community’s content preferences explicitly (e.g., FunSquare’s ‘thumbs up/down’, Digifieds’ ‘abuse’ button, or opinions posted as comments on the Wray Photo Display); 3) assessing community content preferences implicitly (e.g., in Digifieds, meta-data about the number of times an ad was viewed or downloaded); and 4) allowing personalized content labeling (e.g., tagging content in Digifieds). Identifying the right set of meta-data has obvious implications for neighboring layers (system architecture, system interaction).
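
The four meta-data roles listed above can be collected into a single record attached to each content item. A hedged sketch; the field names are invented and mirror no specific deployment's schema:

```python
# Illustrative meta-data record covering the four roles above:
# (1) distribution, (2) explicit preference, (3) implicit preference,
# and (4) personalized labeling. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ContentMeta:
    target_displays: list = field(default_factory=list)  # (1) where to show it
    thumbs_up: int = 0                                   # (2) explicit votes
    thumbs_down: int = 0
    views: int = 0                                       # (3) implicit signals
    downloads: int = 0
    tags: list = field(default_factory=list)             # (4) user labels

meta = ContentMeta(target_displays=["cafe"], tags=["events"])
meta.views += 1  # updated each time the item is shown
```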

Last but not least, content must invariably be managed in one way or another. For both Digifieds and the Wray Photo Display, there was a need for content moderation. In Digifieds, users could report inappropriate content through the abuse button. During the initial six months of deployment, two items with unsuitable content were reported and consequently removed. This type of moderation allows community members themselves to flag inappropriate content. In the case of the Wray Photo Display, a more centralized moderation solution was implemented. As described earlier in section 2.3, residents of the village could ‘own’ a particular content category, which entailed the responsibility of moderation. In both cases, however, a review delay can potentially block appropriate content from appearing: in Digifieds, a reported item was immediately taken out of rotation until reviewed, while in the Wray Photo Display, all new content had to be explicitly approved. The service-generated content used in FunSquare instead required a dynamic content management module that ensured content would not repeat itself too often. The module also allowed explicit moderation, as users could use “thumbs up” and “thumbs down” buttons to express their preferences for particular content items. As reported in section 5.2, however, much of the content management architecture that we initially devised turned out to be of only moderate use, as users ultimately preferred to manually select content categories.
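
The two moderation policies described above – flag-then-review (Digifieds) and approve-before-publish (Wray Photo Display) – differ mainly in an item's initial state and in who triggers a review. A hedged sketch with invented state names:

```python
# Sketch of the two moderation policies above. State names are invented;
# the deployed systems' actual implementations may differ.

VISIBLE, PENDING, REMOVED = "visible", "pending", "removed"

class Item:
    def __init__(self, pre_moderated):
        # Wray-style: everything starts pending approval.
        # Digifieds-style: items are visible until someone reports them.
        self.state = PENDING if pre_moderated else VISIBLE

    def report_abuse(self):
        # Digifieds: a reported item leaves rotation until reviewed.
        self.state = PENDING

    def review(self, approve):
        self.state = VISIBLE if approve else REMOVED

ad = Item(pre_moderated=False)   # Digifieds-style classified
ad.report_abuse()
assert ad.state == PENDING       # out of rotation until reviewed
ad.review(approve=True)
assert ad.state == VISIBLE

photo = Item(pre_moderated=True)  # Wray-style photo upload
assert photo.state == PENDING     # must be approved before appearing
```

In both variants the pending state is where a review delay can block otherwise appropriate content.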

System Interaction

Interactivity is key to allowing a display to become an active facilitator of community interaction. Three main questions need to be answered when it comes to system interaction:

  1. Where to place the display? The location and exact placement significantly affect how users approach and interact with a display.
  2. Which level of complexity is appropriate? Complex user interfaces support more powerful applications, yet can make interaction less obvious.
  3. How should interaction be triggered? Users might not directly understand the interaction capabilities of a display, in particular when it involves subtle cues or advanced technologies such as NFC or Bluetooth.

In the case of the Wray Photo Display, there was some flexibility in choosing where displays would be located. The most desirable locations were the ones most frequented by residents and visitors, i.e., village town hall, post office, and café. Activities at the locations informed the way system interaction was designed: most of the users were seen waiting for their doctor’s appointment in the village town hall or queuing to place their order in the post office. A key design decision was that interaction should be lightweight: people could simply observe the content without interacting with it and content would change every twenty seconds. After that, if they had more time they could approach the display and interact directly by browsing through categories, selecting pictures, or reading their description. This example illustrates how activities at the location can inform system interaction.

All three deployments supported lightweight interaction with content in the form of content browsing. In FunSquare’s ambient mode, users were able to click on the “next fun fact” button, while Digifieds and the Wray Photo Display allowed users to switch between different categories as well as browse back and forth between them. Although categorizing content provided structure for how it was presented and organized, it also added to system interaction complexity: users had to perform several additional touches to reach their desired content.

Not all interaction capabilities might be immediately obvious to users. FunSquare had a timer in the lower right corner that showed the time left for a particular fun fact to be displayed. However, not all users understood what the timer meant. Similarly, some people did not realize that the display was interactive, while others realized that buttons were clickable but did not know what they did. Several users stated that they would have preferred some instructions about the meaning of the buttons. More homogeneous communities might allow very specific or simple metaphors to be used. Yet, for a general audience, textual descriptions or explicit help buttons might be required. As a solution we tried to use a QR code in FunSquare’s ambient mode, which featured the surrounding text “Take this fun fact with you”.

Apart from the QR code itself, no other explanation of how this fact could be retrieved was offered, as we assumed that users would be familiar with the codes. However, most users ended up trying to click on the code.

One thing to keep in mind when placing interaction elements is that – depending on the display’s size and position – there are display areas that users are blind to. For example, in FunSquare’s ambient mode (see Fig. 6-a), some people did not notice the timer in the lower right corner. In game mode, where the timer was located in the central lower area, it was similarly overlooked:

“Big screen, you have to play too close. I didn’t notice the time.”

A similar issue occurred in early versions of the Wray Photo Display, where users did not notice navigation controls in the center of the display. These examples point to a specific issue with the design of system interactions for large public displays: the interplay between the display and its surroundings.

Community Interaction Design

At the top of our framework is the intended use of our intervention, i.e., the design of a display that supports community interaction. Even if all underlying layers are successfully addressed, plenty of challenges remain at the top in order to engage a community.

The major challenges we experienced in our deployments were:

  1. Communicating the value proposition of the application to the users.
  2. Avoiding a negative impact on the community.
  3. Considering interaction between different communities/stakeholders.
  4. Designing for system sustainability.

The fact that a user can understand an application’s interaction capabilities is not enough to ensure that they also understand the community interaction design. An example observation from our FunSquare deployment illustrates this: a father and his daughter browsed through a number of facts and voted (“thumbs up”) for almost all of them. In the subsequent interview, both stated that they understood how to interact with the application. Yet, they could not understand the meaning of the application. FunSquare’s purpose was to serve as a conversation starter, and its value was in stimulating social interaction. However, this type of value is abstract and has to be wrapped in a more concrete and straightforward goal. For example, the emphasis could have been placed more on the learning potential of the application. We tried to do this through the heading text “Did you know that…”. However, something more explicit, e.g., “Learn new facts about Oulu”, might have made the value proposition clearer.

FunSquare’s game mode was much easier to understand, yet its concept of “playing for a neighborhood” also had some unanticipated consequences:

[How did you feel about your contribution to the neighborhood’s score?]: “Not good because I didn’t get any question right.”

The above quote shows how the intended community interaction might actually have a negative effect if it is not achieved. While it is unclear whether such a negative experience actually lowers people’s involvement with a community, it might certainly deter frequent use of the application. One option might have been to award some points for successfully completing the game, independent of performance. Another user pointed out an additional unanticipated effect of the neighborhood game concept:

“Fun to see how own neighborhood is doing in comparison with the others. On the other hand, could aggravate the relation between the areas.”

A similar experience regarding such inter-community processes comes from the Wray Photo Display deployment. One of the goals of the deployment was to support exchange within the community. In April 2010, the post office installed a coffee maker and started selling takeaway coffee. This ‘new venture’ was advertised through the Wray Photo Display. The advertisement appeared not only on the display in the post office but also on the second display, which was installed in the café. This caused a stir in the community and between the two places. After the café’s owner noticed the advertisement, they requested its removal from the display. This example shows that “in the wild” it is not enough to consider the community as a coherent entity: attention has to be given to inter-community relationships and interests, and researchers need to be wary of potentially divisive deployments.

Finally, it is important to consider ways to ensure system sustainability, and each of the three systems had a different approach. For FunSquare, system sustainability was reflected in the type of content that was displayed – autopoietic content – which was generated “on the fly”. This approach ensured fresh content in the long run. The Wray Photo Display and Digifieds took different approaches. System sustainability for the Wray Photo Display was conceived through the participatory design process, where the community and its opinion played a key role in every revision of the system. This way the community also felt a sense of ownership of the system. Allowing community members to create and own picture categories further stimulated this sense of ownership. Digifieds adopted a similar approach for achieving system sustainability. As described earlier, classifieds uploaded to Digifieds could be restricted to a certain area where displays were available. However, such geographic grouping and filtering was actually supported in a very generic fashion, potentially allowing for arbitrary grouping and filtering (e.g., all displays in the vicinity of churches). This conscious design decision was made in order to support more fine-grained community information dissemination along a variety of factors. We believe that allowing for self-organization/appropriation by the community is key for an application’s acceptance and system sustainability.
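
The generic grouping and filtering described for Digifieds can be expressed as arbitrary predicates over display attributes, making a geographic restriction just one filter among many. A hedged sketch in which all display attributes are invented:

```python
# Sketch of generic display filtering: a classified carries a predicate
# over display attributes, so "all displays near churches" is as easy to
# express as a geographic area. All attributes below are hypothetical.

displays = [
    {"id": "d1", "area": "centrum", "near": ["church", "cafe"]},
    {"id": "d2", "area": "centrum", "near": ["station"]},
    {"id": "d3", "area": "suburb", "near": ["church"]},
]

def matching_displays(displays, predicate):
    return [d["id"] for d in displays if predicate(d)]

# Geographic restriction is one predicate...
in_centrum = matching_displays(displays, lambda d: d["area"] == "centrum")
# ...and arbitrary grouping (e.g., near churches) is another.
near_church = matching_displays(displays, lambda d: "church" in d["near"])

assert in_centrum == ["d1", "d2"]
assert near_church == ["d1", "d3"]
```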

Interplay Between the Layers

Issues in one layer of the P-LAYERS framework often strongly influence neighboring layers: choices made at one layer percolate up or down and thus restrict or open up choices at adjacent layers. In this section we provide a number of examples that illustrate how issues at individual layers can impact neighboring layers of the framework.

Starting from the Community Interaction Design Layer. In the particular case of the Wray Photo Display, one of the goals at the community interaction design level was to support a sense of ownership in a fully inclusive manner, i.e., across the whole Wray community. This in turn placed a requirement on the system interaction layer, e.g., to allow all members of the community to upload pictures to the system. However, the fact that only web forms were available for this task meant that many elderly residents struggled to upload their pictures. Some asked the technically competent champion in the village to do this on their behalf, but others clearly felt a social reluctance to do so. One potential solution, which we would still like to pursue, lies at the hardware layer: tailoring the photocopier in the village post office to act as a simple scanner for inputting pictures into the system. This alternative was discussed with some enthusiasm by elderly residents at one of the design workshops. While still not ideal, this approach would likely provide an alternative with a significantly lower barrier to entry for certain users.

Starting from the System Interaction Layer. In the FunSquare game mode, community interaction was designed around a game. The game was limited to ninety seconds, and users would receive an additional five seconds for each correct answer. This time limit was introduced to raise the competitive spirit and excitement within the game. However, for some users it had a very negative consequence:

“Had to hurry up when answering. The alternatives were hard to understand.”

This aspect of system interaction had a direct impact on the community interaction, as users felt rushed and did not feel comfortable playing:

“Playing for a neighborhood is a pretty interesting idea. There could be more time to answer the questions.”

“[You] don’t want to betray your own neighborhood, but [instead] get the best points you can. An OK idea, [but] bad execution.”

These examples again illustrate the need for professional support. As none of the researchers involved in FunSquare had any experience in game design, the community interaction design did not live up to its full potential. Involving game designers prior to the deployment might have significantly altered the community interaction experience.

Starting from the Content Layer. Content can strongly influence people’s opinion of how they can interact with it. One interesting observation in our Digifieds deployment was that people thought they would have to sign up for the service in order to be able to use it. We believe the reason for this is the similarity of Digifieds to Web-based services such as eBay or Craigslist, which require authentication. This shows how content – particularly its design – can have a direct impact on system interaction, i.e., on people’s perceptions and expectations.

While the Wray Photo Display was a novel system for its community, both FunSquare and Digifieds ran on previously deployed hardware whose users were familiar with the existing display content. In one particular case, two occasional UBI-Hotspot users refrained from interacting with the FunSquare application because the content differed from what they were used to, i.e., an issue at the content layer propagated to the community interaction layer as well. This could potentially have been avoided by paying attention to this specific user group, i.e., users with prior experience of UBI-Hotspots – for example, through an on-display element stating something like “Novel UBI-Hotspots service, try it out!”.

Starting from the System Architecture Layer. In the first design workshop for the Wray Photo Display, residents requested appropriate awareness of what content was appearing on the display at any given time – without having to be physically present at the display. The agreed solution was a web page showing a screen grab of the photo display. While such a solution is trivial, it created an issue within the chosen system architecture: the web server had to reside on the photo display itself – rather than on a server at the university – in order to ensure that the photo content would still be visible on the display even if the village lost Internet connectivity for a short period. While on-screen content would remain available during an Internet outage, residents would not be able to access the current screen grab. As a consequence we, the researchers, would have failed in our obligation to the village and its residents: they would feel a lack of control and awareness regarding the public face of the community, i.e., the content shown on the public-facing photo display. This can be considered a problem residing at the community interaction design level.

The above example illustrates how the hardware layer (unreliable Internet connectivity in the village) impacts the system architecture layer (the need for the web server/content source to be local rather than remote), which further impacts the content layer (during an Internet outage the content would be available on the display, but residents would not be able to remotely view the current screen grab), and finally affects community interaction design (the trust relationship between the researchers and the residents).
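The resulting architecture – the display machine serving its own content – can be sketched in a few lines. This is a hypothetical Python sketch, not the actual Wray Photo Display implementation; the content directory, file names, and use of the standard-library HTTP server are illustrative assumptions:

```python
import http.server
import socketserver
import threading
from pathlib import Path

def start_local_server(content_dir: Path) -> socketserver.TCPServer:
    """Serve the photo content (and the latest screen grab, e.g. a
    periodically saved screengrab.png) from the display machine itself,
    so the slideshow keeps working even if the village loses Internet
    connectivity. Binds to an ephemeral port for illustration."""
    def handler(*args, **kwargs):
        return http.server.SimpleHTTPRequestHandler(
            *args, directory=str(content_dir), **kwargs)
    httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd
```

With the server co-located with the display, local viewers are unaffected by an outage; only the remote screen-grab page becomes unreachable – precisely the trade-off described above.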

Starting from the Hardware Layer. A good example comes from the FunSquare and Digifieds deployments: one of the display locations where observations were made was outdoors, in the city center. At that particular location, the sun created a lot of glare on the screen, which made it hard for people to interact with any of the applications on the display. During the FunSquare observations, we noticed several instances where people pressed the ‘+’ button repeatedly to see what would happen; because of the heavy glare, however, they did not notice that the displayed facts changed. In other cases, people did not notice certain user interface elements, e.g., the timer. This shows how inadequate hardware can cause problems at the content and system interaction layers – and when these two break down, it is much more difficult to stimulate community interaction through public displays.

Besides display output quality, some interventions also require on-screen input capabilities, i.e., touch screens. With today’s prevalence of touch-enabled devices, touch is often seen as the default interaction modality. If the quality of a touch display does not meet user expectations (which are often high, since the majority of today’s mobile phones feature high-resolution capacitive touch screens), it can have a significant negative impact on the user experience. For a highly interactive deployment such as the FunSquare game, we received comments that “the touch display is inaccurate, hard to use”, that the game had “stiff controls”, and that the overall experience with the game was “frustrating” or even “boring”. In other words, the hardware had a direct impact on the system interaction and community interaction layers.

One hardware issue that directly impacted system architecture in all three cases was unreliable Internet connectivity, which created the need for offline content access. In the case of FunSquare (in both game and ambient mode), this meant keeping a stock of fun facts available for each display that could be shown until fresh ones arrived. In Digifieds, we did not manage to implement such offline content management in time, so the displays only worked in online mode, i.e., if there were problems with the Internet connection, no classifieds were available on the display at all (Digifieds that had been created or retrieved using the mobile client remained available offline). In the case of the Wray Photo Display, this meant hosting the server locally on the Mac Mini running the display rather than at the university.
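The cache-and-fall-back pattern used by FunSquare can be sketched as follows. This is an illustrative Python sketch under assumed names – the feed URL, cache file, and JSON format are hypothetical, not the actual FunSquare implementation:

```python
import json
import random
import urllib.request
from pathlib import Path

def refresh_facts(feed_url: str, cache_file: Path) -> list:
    """Try to fetch fresh fun facts from the server; on any network
    failure, fall back to the locally cached stock so the display
    always has something to show."""
    try:
        with urllib.request.urlopen(feed_url, timeout=5) as resp:
            facts = json.load(resp)
        cache_file.write_text(json.dumps(facts))  # update the local stock
    except OSError:
        # offline: keep showing the previously cached facts, if any
        facts = json.loads(cache_file.read_text()) if cache_file.exists() else []
    return facts

def next_fact(facts: list) -> str:
    return random.choice(facts) if facts else "No facts available."
```

A display scheduler would call `refresh_facts` periodically and draw from whatever stock it returns, which is why FunSquare degraded gracefully during outages while the online-only Digifieds displays went blank.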
